Why Building Your Own Cloud Security AI Agent May Not Be the Answer Today!
This week's newsletter examines the sobering reality behind AI agent development for vulnerability management in the cloud, featuring insights from Harry Wetherald on why the "build vs buy" decision for AI cloud security tools requires more careful consideration than most organizations realize. We also cover critical supply chain attacks, the latest Chrome zero-day, and strategic acquisition trends reshaping the security landscape.
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter Topic is - Why Building Your Own Cloud Security AI Agent May Not Be the Answer Today! (continue reading)

In case this is your 1st Cloud Security Newsletter: you are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who like you want to learn what's new with Cloud Security each week from their industry peers, many of whom also listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week's edition of the Cloud Security Newsletter!
The cybersecurity industry is experiencing an AI awakening, but not the kind most vendors are promoting. While marketing teams flood the market with "AI agent" solutions, the reality of building and maintaining enterprise-grade AI security tools reveals a complexity that few organizations are prepared to handle. This week, we dive deep into the vulnerability management crisis and explore why the promise of DIY AI agents may be more mirage than reality.
Our featured expert, Harry Wetherald, co-founder and CEO of Maze, brings a unique perspective from both the machine learning and security worlds. Having led product management at Tessian (acquired by Proofpoint) and now building AI agents specifically for vulnerability triage, Harry offers candid insights into what it actually takes to deploy reliable AI in security operations.
📰 THIS WEEK'S SECURITY NEWS
🚚 UNFI Cyberattack Disrupts Whole Foods Supply Chain
What Happened: United Natural Foods, Inc. (UNFI), the primary distributor for Whole Foods and supplier to over 30,000 retail locations, took critical systems offline after detecting unauthorized activity on June 5th. The attack has created ongoing disruptions to the company's ability to fulfill and distribute customer orders, with ripple effects throughout the grocery supply chain.
Why This Matters: This incident demonstrates how cyberattacks on critical supply chain partners can create cascading effects throughout entire industries. For cloud security professionals, UNFI's disruption underscores the need for comprehensive third-party risk assessment and continuity planning. Organizations should evaluate their dependencies on cloud-connected distribution platforms and ensure robust segmentation between partner networks and internal systems.
Sources: CNN Business, BleepingComputer
🏥 Blue Shield of California Confirms Largest Healthcare Breach of 2025
What Happened: Blue Shield of California has confirmed the largest healthcare data breach of 2025 so far, exposing sensitive patient information, including names and medical services, due to a misconfigured Google Analytics setup. Approximately 4.7 million patients are potentially affected.
Why It Matters: This breach highlights the hidden privacy risks in cloud analytics integrations. Healthcare organizations and their cloud security teams must audit all third-party analytics, advertising, and tracking code for potential data leakage. The misconfiguration led to protected health information being shared with Google Ads, demonstrating how seemingly benign integrations can violate HIPAA compliance.
Source: TheStreet
Check Point Acquires Veriti for >$100M to Boost Virtual Patching
What Happened: Check Point announced its acquisition of Veriti—a threat-exposure and mitigation startup—for over $100 million. Veriti's cross-vendor virtual-patching and threat-intel platform will be embedded into Check Point Infinity's Threat Exposure and Risk Management suite.
Why It Matters: Virtual patching is critical for cloud workloads that can't always be immediately remediated. Integrating this capability enhances inline protection without disrupting SaaS deployments or containers. Cloud security teams should evaluate how to adapt similar virtual patch controls to their CI/CD pipelines and assess the growing trend toward preemptive exposure management platforms that can automatically remediate vulnerabilities across multi-vendor environments.
Source: CTech
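Conceptually, a virtual patch is a filter placed in front of the vulnerable system rather than a change to the system itself. Here is a minimal Python sketch of that idea; the rule names and regex signatures are illustrative placeholders, not real vendor-supplied virtual-patch rules:

```python
import re

# Illustrative signatures -- stand-ins for vendor-supplied virtual-patch
# rules, not real exploit detection logic.
VIRTUAL_PATCHES = [
    ("CVE-XXXX-PLACEHOLDER", re.compile(r"\$\{jndi:", re.IGNORECASE)),
    ("path-traversal", re.compile(r"\.\./\.\./")),
]

def virtual_patch_filter(request_body):
    """Return the name of the rule that matched, or None to let the request through.

    The vulnerable application behind this filter is never modified; the
    mitigation lives entirely at the network/application layer, which is
    what makes virtual patching useful when immediate remediation isn't possible.
    """
    for rule_name, pattern in VIRTUAL_PATCHES:
        if pattern.search(request_body):
            return rule_name
    return None

print(virtual_patch_filter("user=${jndi:ldap://evil}"))  # → CVE-XXXX-PLACEHOLDER
print(virtual_patch_filter("user=alice"))  # → None
```

In practice this logic lives in a WAF, proxy, or inline security product rather than application code, which is why it can protect workloads that can't be taken down for patching.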
F5 Acquires Fletch to Infuse Agentic AI into App-Security
What Happened: F5 acquired Fletch, a San Francisco AI firm, to integrate agentic AI-driven threat detection and proactive security into its Application Delivery & Security Platform. Fletch's technology analyzes massive amounts of threat intelligence data and remediates the most severe vulnerabilities in real time.
Why It Matters: For cloud app protection, AI-driven context-aware detection can reduce blind spots and accelerate response. Teams should prepare to adopt AI-enhanced controls while also planning for novel risks from agentic models (e.g., false positives, adversarial evasion). This acquisition represents the evolution toward autonomous security operations that can take proactive action without human intervention.
Source: Dark Reading
🔓 Google Account Recovery Vulnerability Exposed Phone Numbers
What Happened: Security researcher "Brutecat" disclosed a vulnerability in Google's account recovery system that allowed attackers to brute-force the phone numbers of any Google user. The flaw exploited a deprecated JavaScript-disabled recovery form that lacked modern anti-automation controls, enabling systematic phone number enumeration attacks through a combination of display name leakage and partial phone number hints.
Why This Matters: Identity recovery flows represent an often-overlooked attack vector, especially for cloud identity providers. The ability to extract phone numbers could enable targeted SIM swap attacks against high-value enterprise users. Cloud security teams should audit their IAM recovery workflows, implement rate limiting, CAPTCHA, and MFA protections, and monitor for anomalous recovery attempts across all identity systems.
Sources: SecurityWeek, TechCrunch
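To make the rate-limiting recommendation concrete, here is a sliding-window limiter for recovery attempts, sketched in Python. The thresholds and per-source keying are assumptions for illustration; production systems combine multiple signals (IP, device, target account) and step up to CAPTCHA or MFA rather than hard-blocking:

```python
import time
from collections import defaultdict, deque

class RecoveryRateLimiter:
    """Illustrative sliding-window rate limiter for account-recovery attempts.

    The thresholds and keying scheme (per source key) are assumptions for
    this sketch, not a hardening standard.
    """

    def __init__(self, max_attempts=5, window_seconds=3600):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # source key -> attempt timestamps

    def allow(self, source_key, now=None):
        now = time.time() if now is None else now
        q = self.attempts[source_key]
        # Drop attempts that fell outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # caller should require CAPTCHA/MFA or block
        q.append(now)
        return True

limiter = RecoveryRateLimiter(max_attempts=3, window_seconds=60)
results = [limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)]
# first three attempts allowed, fourth rejected within the same window
```

The same pattern applies to any identity endpoint that leaks information one guess at a time, which is exactly what the deprecated recovery form failed to do.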
Microsoft Defender for Cloud Expands AI Security Coverage to GCP
What Happened: Defender for Cloud's AI security posture management features now support AI workloads in Google Cloud Platform (GCP) Vertex AI (Preview). The update adds AI application discovery, which automatically discovers and catalogs AI application components, data, and AI artifacts deployed in GCP Vertex AI, and security posture strengthening, which detects misconfigurations and provides built-in recommendations and remediation actions to enhance the security posture of your AI applications.
Why It Matters: The expansion of AI security posture management to multi-cloud environments reflects the reality that enterprises are deploying AI workloads across multiple cloud providers. This capability provides centralized visibility and security controls for AI applications, addressing the unique risks of machine learning pipelines and model deployment infrastructure.
Source: Microsoft Learn
📦 Massive NPM Supply Chain Attack Hits GlueStack Packages
What Happened: Cybersecurity researchers identified a supply chain attack targeting over a dozen packages associated with GlueStack, collectively accounting for nearly 1 million weekly downloads. The malware, injected via changes to "lib/commonjs/index.js," provides attackers with remote access capabilities including shell command execution, screenshot capture, and file upload functionality.
Why This Matters: This attack demonstrates the massive blast radius of modern package ecosystem compromises. For cloud security teams, this incident underscores the critical need for software composition analysis, dependency scanning, and secure software supply chain practices. The remote access trojan capabilities highlight how compromised development dependencies can lead to production system compromise, making this a priority for DevSecOps teams.
Source: The Hacker News
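One piece of the dependency-scanning practice mentioned above can be sketched directly: checking a project's lockfile pins against an advisory list. The package names and versions below are placeholders, not the actual compromised GlueStack releases, and a real pipeline would pull advisories from a vulnerability feed or `npm audit` rather than a hard-coded dict:

```python
import json

# Placeholder advisory list -- NOT the real compromised versions; in practice
# this data would come from a vulnerability feed or `npm audit`.
BAD_VERSIONS = {
    "@example/ui-package": {"1.2.3"},
    "left-pad-ish": {"9.9.9"},
}

def flag_compromised(lockfile_text):
    """Scan an npm v2/v3 package-lock's 'packages' map for bad version pins."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/<name>" (possibly nested); take the tail.
        name = path.split("node_modules/")[-1] if path else None
        if name in BAD_VERSIONS and meta.get("version") in BAD_VERSIONS[name]:
            hits.append((name, meta["version"]))
    return hits

sample_lock = json.dumps({
    "packages": {
        "": {"name": "app"},
        "node_modules/@example/ui-package": {"version": "1.2.3"},
        "node_modules/left-pad-ish": {"version": "1.0.0"},
    }
})
print(flag_compromised(sample_lock))  # → [('@example/ui-package', '1.2.3')]
```

Running a check like this in CI, alongside full software composition analysis, is what turns "dependency scanning" from advice into a gate.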
CLOUD SECURITY TOPIC OF THE WEEK
The AI Agent Reality Check: Why Building Your Own Security AI Agent May Not Be the Answer Today!
The cybersecurity industry is experiencing an "AI agent" gold rush, with vendors racing to slap the label on everything from basic automation to sophisticated threat detection systems. But beneath the marketing hype lies a sobering truth: building reliable, enterprise-grade AI security tools requires far more engineering investment than most organizations anticipate.
Featured Experts This Week 🎤
Harry Wetherald - Co-founder and CEO, Maze
Ashish Rajan - CISO | Host, Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
AI Agents vs. LLMs: Think of using an LLM (like ChatGPT) as running a script, while using an agent is like running a whole program. Agents can access multiple data sources, make decisions, and take actions autonomously.
False Positive Rate: In vulnerability management, the percentage of identified vulnerabilities that cannot actually be exploited in the current environment. Industry estimates range from 90-99%.
Virtual Patching: Security controls that provide protection against vulnerabilities without modifying the vulnerable system itself, often implemented at the network or application layer.
Agentic AI: AI systems that can take autonomous actions and make decisions based on their analysis, rather than simply responding to prompts or following predefined rules.
CVSS (Common Vulnerability Scoring System): A standardized method for rating the severity of security vulnerabilities, ranging from 0.0 to 10.0.
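CVSS v3.x severity is published alongside a vector string that encodes the individual metrics behind the score. A small parser illustrates the format (sketch only; it handles the vector string per the FIRST CVSS v3.1 specification but does not implement the scoring formulas that produce the 0.0-10.0 number):

```python
def parse_cvss_vector(vector):
    """Parse a CVSS v3.x vector string into a metric dict.

    Example input: "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H".
    Computing the numeric score requires the spec's scoring formulas,
    which are deliberately out of scope for this sketch.
    """
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:3"):
        raise ValueError(f"unsupported CVSS version prefix: {prefix}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

vec = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(vec["AV"], vec["C"])  # → N H
```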
This week's issue is sponsored by Material Security
Protect Your Google Workspace with Purpose-Built Security
Your Google Workspace is the backbone of your business, yet most teams use security tools that weren’t designed to protect it.
Material Security changes that. Built specifically for Google Workspace, Material is a detection and response platform that protects Gmail, Google Drive, and accounts by proactively eliminating security gaps, stopping misconfigurations, and preventing shadow IT before they turn into costly problems.
With real-time monitoring and automatic fixes, Material keeps your workspace secure with minimal effort, reducing human error and freeing up your team to focus on work that matters.
💡Our Insights from this Practitioner 🔍
The Vulnerability Management Crisis is Real
Harry Wetherald's research reveals a perfect storm brewing in vulnerability management. The statistics are sobering: 40% year-over-year growth in the number of vulnerabilities, combined with a dramatic reduction in the time attackers need to exploit them, from 30+ days just a few years ago to 3-5 days today.
"You've got this exponential increase in number of vulnerabilities, exponential decrease in the time you have available to fix it, but then all the security teams have the same number of people available to fix them," Harry explains. This isn't just a scaling problem. It's a fundamental mismatch between the rate of change in the threat landscape and organizational capacity to respond.
For cloud security professionals, this means traditional approaches to vulnerability management are no longer sustainable. The volume of findings from cloud security posture management (CSPM) and cloud-native application protection platforms (CNAPP) can overwhelm even well-staffed teams, leading to what Harry calls "a re-sorted list of false positives."
Why Current Tools like CSPM, CNAPP etc Fall Short
Despite widespread adoption of CNAPPs and advanced vulnerability scanners, Harry's analysis reveals a critical gap: "Based on our analysis, something like 90% of the findings that they give you are false positives. So they actually cannot be exploited in any way, shape, or form in your environment."
This isn't necessarily a failure of the tools themselves, but rather a limitation of rule-based systems trying to understand complex, contextual security scenarios. Traditional vulnerability scanners excel at identifying potential issues but struggle with the nuanced analysis required to determine actual exploitability in specific environments.
The result is what security teams know all too well: the "Christmas tree lighting up" effect where deployment of new security tools creates visibility into thousands of potential issues without providing the context needed to prioritize effectively.
The AI Agent Promise and Reality
The appeal of AI agents for security automation is obvious: imagine having thousands of expert security engineers available to analyze every vulnerability in detail. But Harry warns against underestimating the engineering complexity involved.
"The thing I'd always say to people if they're embarking on doing this internally, if you have lots of engineering resources and you can commit those resources to the project for 1, 2, 3, 4 years, you'll probably get to something decent," he notes. "But getting from that quite cool thing to something that you can rely on for enterprise security team is maddeningly long."
The gap between a compelling proof of concept and production-ready enterprise software represents months or years of unglamorous engineering work: addressing hallucinations, ensuring cost efficiency, building reliability at scale, and handling edge cases. For most organizations, this represents a significant opportunity cost.
The Build vs. Buy Decision Framework
Harry advocates for a pragmatic approach to the build vs. buy decision: "Anyone that talks to me about it, I always say the same thing, which is just go and do it this weekend. It's so quick to get it up and running. You can get a sense of what it can do really quickly."
The weekend prototype approach serves multiple purposes:
Rapid validation of whether AI can address your specific use case
Understanding of complexity as you encounter real-world edge cases
Requirements clarification for evaluating commercial solutions
Team education on AI capabilities and limitations
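Harry's "go and do it this weekend" advice can be made concrete. Below is a deterministic Python sketch of the smallest possible triage-agent loop; `call_llm` is a stand-in stub (a weekend prototype would replace it with any provider SDK), and the finding fields are invented for illustration:

```python
def call_llm(prompt):
    """Stand-in for a real model call (any provider SDK would go here).

    The canned logic below exists only to keep this example self-contained
    and deterministic; it is not a claim about how real models respond.
    """
    if "internet-facing: False" in prompt:
        return "verdict: likely-false-positive"
    return "verdict: needs-review"

def triage(finding):
    """One agent 'step': build environment context, ask the model, parse the verdict."""
    prompt = (
        "You are a vulnerability triage assistant.\n"
        f"CVE: {finding['cve']}\n"
        f"internet-facing: {finding['internet_facing']}\n"
        "Is this exploitable in this environment?"
    )
    return call_llm(prompt).split("verdict: ", 1)[1]

findings = [
    {"cve": "CVE-2024-0001", "internet_facing": False},
    {"cve": "CVE-2024-0002", "internet_facing": True},
]
print([triage(f) for f in findings])  # → ['likely-false-positive', 'needs-review']
```

A prototype at this level takes an afternoon, which is exactly Harry's point: the distance between this loop and something an enterprise can rely on end to end (hallucination handling, cost control, scale, edge cases) is where the years of engineering go.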
However, he emphasizes the importance of realistic expectations: "If you really want to rely on it doing end-to-end automation of quite complex tasks, just be ready for some pain basically. And be ready to commit the level of engineering resources needed to make it successful."
Designing for AI Success
For organizations that do choose to build internally, Harry recommends focusing on the scaffolding rather than the models themselves. "The smartest place to be is right on the edge of what's possible. And then as that next model comes out, 'Oh cool, this is now working that bit better now.'"
This approach requires thinking about AI products differently:
Model-agnostic architecture that can swap in new models as they improve
Systematic evaluation frameworks to quickly assess how model changes affect product performance
Focus on the problem being solved rather than the specific AI technology being used
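One way to sketch that scaffolding: hide every model behind a single callable interface and score backends against a fixed evaluation set, so swapping in a newer model becomes a one-line registration rather than a rewrite. Everything here (the backend names, the toy echo backends, the eval cases) is illustrative, not any particular vendor's API:

```python
from typing import Callable, Dict, List, Tuple

# A model backend is just a callable from prompt to completion; swapping
# models means registering a new callable, not rewriting product logic.
ModelBackend = Callable[[str], str]

def make_echo_backend(tag: str) -> ModelBackend:
    """Toy backend used in place of real provider SDK calls for this sketch."""
    return lambda prompt: f"[{tag}] {prompt.upper()}"

BACKENDS: Dict[str, ModelBackend] = {
    "model-a": make_echo_backend("A"),
    "model-b": make_echo_backend("B"),
}

def run_eval(backend: ModelBackend, cases: List[Tuple[str, Callable[[str], bool]]]) -> float:
    """Tiny evaluation harness: fraction of fixed prompt/check pairs the backend passes."""
    return sum(1 for prompt, check in cases if check(backend(prompt))) / len(cases)

cases = [("is this exploitable?", lambda out: "EXPLOITABLE" in out)]
scores = {name: run_eval(b, cases) for name, b in BACKENDS.items()}
print(scores)  # → {'model-a': 1.0, 'model-b': 1.0}
```

The evaluation harness is the part worth investing in: when the next frontier model ships, re-running the same cases tells you immediately whether "this is now working that bit better."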
The Human-AI Collaboration Model
Rather than replacing human expertise, the most effective AI security tools augment human capabilities. Harry describes their approach as giving AI agents "access to what your best security engineer would have access to."
This philosophy has important implications for tool design:
Comprehensive data access allowing agents to investigate like human experts would
Transparent reasoning so security teams can understand and validate AI conclusions
Iterative improvement based on human feedback and domain expertise
The goal isn't to eliminate human judgment but to scale expert-level analysis to match the volume of modern security challenges.
Question for you (reply to this email):
Would you rather build your own Security AI Agent or buy an AI Security Agent?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe and welcome to the new members in this newsletter community💙
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast.