🚨 AI Discovers Thousands of Zero-Days: Lessons from Catching What EDR Can't See
Microsoft's record-breaking April Patch Tuesday (167 CVEs), Anthropic's Claude Mythos autonomously discovering thousands of critical zero-days, the ShinyHunters breach of Anodot and Snowflake customer environments via stolen SaaS tokens, and the TeamPCP open-source supply chain attack stealing 10,000+ cloud credentials.
Hello from the Cloud-verse!
This week's Cloud Security Newsletter topic: The Behavioral Blind Spot: Why Your Security Stack Can't See What Users Actually Do With AI (continue reading)
In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who subscribe to learn what's new in cloud security each week from their industry peers, many of whom also listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this weekβs Cloud Security Newsletter
The week of April 8–13 delivered a signal that the security industry has been anticipating with dread: autonomous AI vulnerability discovery at scale. Anthropic's disclosure of Claude Mythos Preview, a model that independently found thousands of high- and critical-severity zero-days across every major operating system and browser, is not a research curiosity. It is a forcing function for every security program that still operates on human-speed threat models.
At the same time, attackers remained methodical and thoroughly mundane. ShinyHunters did not need a novel exploit to pivot from Anodot's SaaS platform into over a dozen Snowflake customer environments; they walked in with stolen integration tokens. TeamPCP did not burn a zero-day; they compromised a maintainer's credentials and waited for CI/CD pipelines to distribute the payload automatically.
Threading these stories together this week is Brandon Dixon, Co-founder of Ent.ai and one of the architects of Microsoft Defender Threat Intelligence and Microsoft Security Copilot. Brandon joined Cloud Security Podcast host Ashish Rajan to discuss why the control surfaces enterprises have built – EDR, DLP, UEBA, SSPM – are structurally blind to the behavioral signals that matter most: what a user actually intends to do, across every application, in real time. [Listen to the episode]
⚡ TL;DR for Busy Readers
🔴 Patch NOW: SharePoint (active exploit), Ivanti EPMM (KEV), Adobe Reader (5 months undetected), Windows IKE RCE (CVSS 9.8, wormable)
🔑 SaaS tokens = your weakest link: ShinyHunters breached Snowflake customers without a single vulnerability – just stolen integration tokens
📦 Your security tools can betray you: TeamPCP compromised Trivy, KICS, Axios, LiteLLM – 10,000+ cloud creds stolen via CI/CD
🤖 AI offense just scaled: Anthropic's Claude Mythos found thousands of zero-days autonomously
👁️ EDR blind spot is real: It sees processes, not intent – and attackers are now operating in that gap
📌 WHAT TO DO THIS WEEK
Patch internet-facing systems immediately (SharePoint, Ivanti, Windows services)
Rotate ALL third-party SaaS tokens (especially Snowflake integrations)
Audit CI/CD pipelines β lock dependencies + enforce signed artifacts
Review AI usage inside sanctioned apps (Teams, Slack, WhatsApp, etc.)
📰 THIS WEEK'S TOP 8 SECURITY HEADLINES
Each story includes why it matters and what to do next – no vendor fluff.
🚨 1. Microsoft Patch Tuesday (167 CVEs, Active Exploits)
What happened:
167 vulnerabilities patched (largest of 2026)
Active SharePoint zero-day (CVE-2026-32201)
Wormable Windows IKE RCE (CVSS 9.8)
Adobe Reader zero-day active for 5 months
Why it matters:
This is Exploit Wednesday territory. Attackers are already reverse-engineering patches.
👉 Priority order:
SharePoint (active exploitation)
Ivanti EPMM (KEV)
Windows IKE / TCP-IP RCE
Adobe Reader
Sources: BleepingComputer | Security Affairs | Zero Day Initiative
🚨 2. Ivanti EPMM Zero-Day (KEV, 4,400+ Exposed)
What happened:
Third critical zero-day in Ivanti platform
Active exploitation confirmed
4,400+ internet-exposed instances
Why it matters:
This is no longer a vulnerability issue – it's a vendor risk signal.
👉 Treat as:
Emergency patch
Potential breach (assume compromise mindset)
Sources: BleepingComputer | Palo Alto Unit 42 | Cybersecurity Dive | Rapid7
⚠️ 3. Adobe Zero-Day (5 Months Undetected)
What happened:
Silent exploitation via PDF for ~5 months
No user interaction required beyond opening
Targeting energy sector
Why it matters:
Your telemetry likely showed:
👉 "Acrobat.exe is running" – nothing else.
This is exactly the detection gap modern attacks exploit.
Sources: OpenVPN Blog | Zero Day Initiative
💥 4. ShinyHunters: SaaS Token Breach → Snowflake Customers
Beginning April 4, ShinyHunters compromised Anodot, an AI-powered cloud cost monitoring platform.
What happened:
Anodot breached β integration tokens stolen
Attackers accessed multiple Snowflake environments
78.6M records leaked
Why it matters:
No vulnerability. No exploit.
👉 Just trusted SaaS tokens doing their job
This mirrors the 2025 OAuth attacks – and most orgs still haven't fixed it.
Action:
Rotate ALL SaaS tokens
Enforce short-lived credentials
Audit vendor integrations post-acquisition
Sources: TechCrunch | BleepingComputer | The Record
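If you do not already track token age, a few lines of scripting against your integration inventory is enough to start. A minimal sketch, assuming a hypothetical inventory format and a 90-day rotation policy (both are illustrative, not any vendor's API or recommendation):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of third-party integration tokens. In a real
# program this would come from a secrets manager or each SaaS admin API;
# the field names and entries below are invented for illustration.
TOKEN_INVENTORY = [
    {"vendor": "anodot-integration", "scope": "snowflake:read", "issued": "2025-06-01"},
    {"vendor": "ci-bot", "scope": "repo:write", "issued": "2026-03-20"},
]

MAX_TOKEN_AGE = timedelta(days=90)  # example rotation policy, pick your own

def tokens_due_for_rotation(inventory, now=None):
    """Return every token older than the rotation window."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for tok in inventory:
        issued = datetime.strptime(tok["issued"], "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if now - issued > MAX_TOKEN_AGE:
            stale.append(tok)
    return stale
```

Even a crude report like this surfaces the long-lived integration tokens that made the Anodot-to-Snowflake pivot possible.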
5 – TeamPCP Supply Chain Attack (10,000+ Orgs Impacted)
What happened:
Trivy, KICS, Axios, LiteLLM compromised
Malicious packages exfiltrated cloud credentials
Spread via CI/CD pipelines
Why it matters:
Your security scanners are now part of the attack surface.
👉 If your pipeline auto-pulls latest:
You're running unverified code in production by design
Action:
Lock dependency versions
Use signed artifacts
Add release cooldown windows
Sources: SANS Institute | Zscaler ThreatLabz | The Register | InfoQ
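The three actions above can be wired into a simple CI gate. A hedged sketch: it checks one dependency manifest line for exact pinning and a release cooldown window, with release dates supplied inline (in a real pipeline they would come from the registry's metadata API; the package names and dates below are made up):

```python
import re
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=7)  # example: refuse releases younger than 7 days

# Exact-pin spec, e.g. "trivy==0.60.0"; ranges and bare names fail.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[0-9][A-Za-z0-9._-]*$")

def check_requirement(spec, release_dates, now):
    """Return policy violations for one manifest line.

    release_dates maps 'name==version' to its publish datetime.
    """
    problems = []
    if not PINNED.match(spec):
        problems.append(f"{spec}: not pinned to an exact version")
    elif now - release_dates[spec] < COOLDOWN:
        problems.append(f"{spec}: released inside the cooldown window")
    return problems
```

A cooldown window would have bought defenders days against the TeamPCP releases, which were pulled automatically the moment they were published.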
6 – Anthropic's Claude Mythos Preview Autonomously Discovers Thousands of Zero-Days Across Every Major OS and Browser; Project Glasswing Launched
What happened:
Claude Mythos discovered thousands of vulnerabilities
72% success rate in exploit development
Posted exploits autonomously in testing
Why it matters:
This is a category shift.
👉 Threat model is now:
Faster than human response
Cheaper to execute
Scalable by design
Implication:
Detection-led security will fail.
👉 You need:
Automated containment
Identity-aware segmentation
Sub-30 min response capability
7 – Microsoft Defender for Cloud: AI Model Scanning
What happened:
Scans models for malware, secrets, unsafe formats
Supports .pkl, .onnx, .pt, etc.
Integrated into CI/CD
Why it matters:
AI models = new software supply chain
👉 Pickle files = built-in RCE risk
Action:
Scan all models before deployment
Prefer SafeTensors over Pickle
Sources: Microsoft Learn β AI Model Security | Microsoft Tech Community Blog | Defender for Cloud Release Notes
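Why are pickle files a built-in RCE risk? Because unpickling executes arbitrary callables embedded in the file, so merely loading a "model" runs code. A deliberately harmless demonstration (the payload only sets a marker attribute on the `sys` module; a hostile `.pkl` could run anything):

```python
import pickle
import sys

class MaliciousModel:
    """Stands in for a poisoned .pkl 'model weights' file: deserializing it
    executes attacker-chosen code. The payload here is harmless on purpose."""
    def __reduce__(self):
        # A hostile file would invoke os.system or download a stager instead.
        return (exec, ("import sys; sys._pickle_demo = 'code ran at load time'",))

poisoned_bytes = pickle.dumps(MaliciousModel())

# Simply loading the 'model' runs the payload -- no method call needed.
pickle.loads(poisoned_bytes)
```

This is why the guidance above prefers SafeTensors: the safetensors format stores only raw tensor bytes plus a JSON header, so there is no code path at load time.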
8 – Databricks Enters SIEM Market with "Lakewatch" via Dual Acquisition of Antimatter and SiftD.ai
What happened:
Acquires Antimatter + SiftD.ai
Launches AI-native SIEM
Why it matters:
Security is merging with data platforms.
👉 This creates a new problem:
Who owns detection – security teams or data teams?
🎯 Cloud Security Topic of the Week:
The Behavioral Blind Spot: Why Your Security Stack Can't See What Users Actually Do With AI
Every major security incident discussed in this week's news brief shares a common thread: attackers and risky insiders are operating inside the legitimate workflows of sanctioned enterprise tools. ShinyHunters looked like a trusted SaaS integration. The Adobe PDF exploit ran inside a fully patched Reader process. TeamPCP's malicious Axios release ran through the same npm dependency resolution your developers use daily. The Zoom remote-control exfiltration path is invisible to EDR because EDR sees the process, not the intent.
Brandon Dixon's work at Ent.ai addresses the fundamental question this creates for enterprise security programs: if adversaries and risky insiders have moved into the behavioral layer – into the clicks, clipboard operations, drag-and-drop file transfers, and AI prompt submissions that happen inside sanctioned applications – what does your detection surface actually cover?
The answer, increasingly, is less than you think. This is the topic that binds together AI security governance, insider risk, supply chain compromise, and the post-EDR detection problem. And it is the lens through which senior cloud security leaders should be evaluating both their current tooling and the programs they are building for the AI era.
Featured Experts This Week 🎤
Brandon Dixon – Co-founder & CTO, Ent.ai
Ashish Rajan – CISO | Co-Host AI Security Podcast, Host of Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
Behavioral Layer / Endpoint Behavioral Intelligence A detection approach that sits above the operating system process level and observes what users and agents actually do within and across applications – clicks, clipboard operations, file drags, AI prompt submissions, cross-app data movement – rather than simply which processes are running. This is distinct from EDR (which monitors file system and process activity) and UEBA (which typically analyzes aggregated log-derived signals).
Living Off the Land (LotL) An attack technique in which threat actors use legitimate, pre-installed enterprise software (PowerShell, WMI, Zoom remote control, etc.) to conduct malicious activity, making detection harder because the tools themselves are not inherently suspicious. TeamPCP's use of compromised legitimate package releases is a supply-chain variant of this technique.
CVE / CVSS / KEV CVE (Common Vulnerabilities and Exposures): the standardized identifier for a specific vulnerability. CVSS (Common Vulnerability Scoring System): a 0–10 severity score; 9.0+ is Critical. KEV (CISA's Known Exploited Vulnerabilities catalog): a list of vulnerabilities confirmed to be actively exploited in the wild, with mandatory patch deadlines for federal agencies under Binding Operational Directive 22-01.
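In practice, KEV membership is the first sort key for a patch queue. A sketch using the shape of CISA's published JSON feed ("vulnerabilities" records keyed by "cveID"); the sample record below is built from this week's CVEs, not pulled from the live catalog:

```python
# Sample data shaped like CISA's KEV JSON feed; not the live catalog.
KEV_SAMPLE = {
    "vulnerabilities": [
        {"cveID": "CVE-2026-1340",
         "requiredAction": "Apply updates per vendor instructions"},
    ]
}

def kev_priority(cve_ids, kev_feed):
    """Split a patch list into KEV entries (patch first) and the rest."""
    listed = {v["cveID"] for v in kev_feed["vulnerabilities"]}
    in_kev = [c for c in cve_ids if c in listed]
    rest = [c for c in cve_ids if c not in listed]
    return in_kev, rest
```

Swap in the real feed (CISA publishes it as downloadable JSON) and this becomes a one-file triage helper.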
This week's issue is sponsored by Tines
Move Past AI Hype to Build Secure, Scalable Workflows
Hear how leaders at HubSpot, Asana, Jamf, ASOS and Riot Games are scaling AI and automation in real security and operations workflows. AI's real impact doesn't happen in isolation. It comes from experimentation and learning from teams already leading the way.
Join Workflow, Tines' flagship virtual event streaming live from New York on May 6. Discover how teams are moving beyond AI paralysis, scaling automation responsibly, and building workflows that eliminate busywork.
💡 Our Insights from this Practitioner
1. The EDR Gap Is Structural, Not a Configuration Problem
One of the most practically useful points Brandon makes – and one that directly informs how you should interpret the attack patterns in this week's news – is that EDR's visibility gap is not a tuning problem or a missed rule. It is architectural. EDR was designed in an era when malware manipulated the file system. That design decision shapes what EDR can and cannot see:
"The adversary is now using the same software as the enterprise, and they're trying to look like an employee specifically, so they don't get detected. All we know is that Zoom's running. We don't understand why they gave remote control over and what they did after that happened." – Brandon Dixon, Co-founder, Ent.ai
This is not a theoretical gap. Brandon's incident response colleagues can confirm that Zoom remote control was activated – they know from SaaS logs – but they have no way to determine whether the resulting session was malicious or a legitimate help-desk handoff. That ambiguity is operationally disabling: you cannot act on an alert you cannot classify.
For cloud security architects, the practical implication is that any detection program that relies exclusively on process-level telemetry (EDR) and network-level telemetry (CASB/proxy) is structurally blind to the behavioral middle layer where the most consequential decisions are made. That middle layer – drag-and-drop, clipboard contents, UI interaction within an application, cross-app data movement – is precisely where AI adoption is creating new risk vectors.
2. Unsanctioned AI Inside Sanctioned Applications Is Your Next Undetectable HIPAA Violation
The WhatsApp/Meta AI example Brandon describes from a Fortune 500 deployment illustrates three converging risk trends simultaneously:
AI is being embedded directly into sanctioned communication tools – WhatsApp, Zoom, Slack, Teams – without requiring a separate application install or any deliberate opt-in by the enterprise.
Users are not being malicious. The HR employee who pasted patient records into Meta AI to get a summary was trying to do their job efficiently. The violation was unintentional – but it was still a HIPAA violation.
Traditional DLP cannot catch it. As Brandon notes, DLP runs pattern-matching against content it can observe. If you drag a file rather than copy-paste, if the pattern doesn't match a defined regex, if the AI feature is embedded within encrypted application traffic – DLP misses it.
For CISOs currently building AI governance programs, this creates a classification problem that is not solvable through policy alone. You can enumerate the authorized AI tools in your AUP. You cannot enumerate every AI feature that will be silently added to every SaaS application your users rely on. WhatsApp embedded Meta AI. Microsoft embedded Copilot into Teams and Word. Google embedded Gemini into Workspace. The AI surface within sanctioned applications is growing faster than any policy review cycle can track.
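Brandon's DLP point is easy to make concrete: a content regex only fires on text it is handed, and a drag-and-drop event hands it none. A toy illustration, with one hypothetical SSN pattern standing in for a full DLP ruleset and an invented event schema:

```python
import re

# Toy DLP rule: US-SSN-shaped strings. Illustrative only -- real DLP
# engines carry many patterns, but the structural limit is the same.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_verdict(event):
    """Pattern-match whatever text content the event exposes."""
    content = event.get("text", "")  # a file drag exposes no text at all
    return "BLOCK" if SSN.search(content) else "ALLOW"

# Same sensitive data, two transfer paths:
paste = {"action": "clipboard_paste", "text": "Patient 123-45-6789 summary"}
drag = {"action": "file_drag", "path": "patients_q1.xlsx"}  # no text field
```

The paste is blocked; the drag of the same records sails through, because the control point never sees content to match.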
3. Context-Aware Anomaly Detection: The Lessons of Failed UEBA Programs
Many senior security professionals carry institutional scar tissue from UEBA deployments that generated high false-positive rates and were eventually deprioritized or abandoned. Brandon's analysis of why those programs struggled is directly relevant to evaluating the next generation of behavioral detection platforms:
"Where those [UEBA systems] struggled is they simply tried to model independent variables or anomalies. Working late at night is potentially not a big deal, but working late at night and giving remote control of your system to someone else outside the business – well, you know, that's kind of odd. That context has historically been missing." – Brandon Dixon, Co-founder, Ent.ai
The distinction Brandon draws is between modeling independent behavioral variables (login time, geo-location, process list) versus modeling behavioral context β understanding what a user was doing immediately before, during, and after an action in order to evaluate whether the combination of behaviors constitutes risk. This is a fundamentally different technical approach from statistical anomaly detection, and it requires a different data source: behavioral telemetry captured at the endpoint in real time, not reconstructed from logs after the fact.
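The distinction can be sketched in a few lines. This is a toy model, with event fields and rules invented for illustration: each signal alone stays low-risk, and only the combination Brandon describes scores high:

```python
LATE_HOURS = range(22, 24)  # illustrative "late night" window

def independent_flags(event):
    """UEBA-style: each behavioral variable judged on its own."""
    return {
        "late_night": event["hour"] in LATE_HOURS,
        "remote_control_granted": event.get("remote_control", False),
    }

def contextual_risk(event):
    """Context-aware: only the *combination* of behaviors is high risk."""
    flags = independent_flags(event)
    if (flags["late_night"] and flags["remote_control_granted"]
            and event.get("counterparty") == "external"):
        return "HIGH"
    if any(flags.values()):
        return "LOW"
    return "NONE"

# Working late: benign. Working late AND handing remote control to an
# external party: the odd combination Brandon calls out.
late_work = {"hour": 23, "remote_control": False}
handoff = {"hour": 23, "remote_control": True, "counterparty": "external"}
```

The hard engineering problem, of course, is not the scoring rule but capturing the behavioral telemetry that feeds it in real time rather than reconstructing it from logs.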
4. Endpoint Behavior as the Foundation for AI Agent Oversight
For cloud security leaders currently building AI governance frameworks, Brandon introduces a detection surface that most programs have not yet addressed: AI agents running locally on endpoints. As local model inference becomes more practical on modern hardware, AI agents that perform autonomous tasks – file access, web browsing, application control, code execution – are increasingly running at the endpoint level rather than exclusively in the cloud.
Brandon's approach at Ent.ai is to proxy that agent traffic at the endpoint layer, enabling visibility into the handoff between human instruction, AI execution, and the resulting system actions. This creates a detection capability that cloud-only monitoring misses entirely: you can observe what instruction the user gave the agent, what the agent did with it, and whether the resulting system interaction (file writes, process spawns, network calls) is consistent with the stated instruction and role-appropriate behavior.
For DevSecOps teams deploying AI coding assistants (GitHub Copilot, Cursor, Claude Code) and enterprise AI workflow tools, this framing suggests a near-term requirement: you need observability not just into which AI tools are authorized, but into what those tools do on the endpoint when they execute.
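One way to picture that observability requirement is an instruction/action consistency check. This is a sketch of the idea only, not Ent.ai's implementation; the task names, action labels, and policy table are invented:

```python
# Hypothetical policy: which endpoint actions a given agent task justifies.
TASK_POLICY = {
    "summarize_doc": {"file_read"},
    "fix_unit_test": {"file_read", "file_write", "process_spawn"},
}

def inconsistent_actions(task, observed_actions):
    """Return actions the agent took that its stated task does not justify.

    An unknown task justifies nothing, so every action is flagged.
    """
    allowed = TASK_POLICY.get(task, set())
    return [a for a in observed_actions if a not in allowed]
```

An agent asked to summarize a document that then makes an outbound network call is exactly the kind of handoff gap cloud-only monitoring cannot see.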
5. What to Actually Build First: Brandon's Program Guidance
When Ashish pressed Brandon on what a security leader should actually do before investing in behavioral detection tooling, his answer reframes the question in a way that is immediately actionable for senior practitioners:
"Do you actually understand who your risky users are? And more importantly, why are they risky? Because most people can't answer the question: what's actually happening in their business. They have these draconian policies that they draft... and it's only as good as its ability to enforce it at a control point. And the problem is, those policies are overly broad and the control points are not sufficiently deep, and they miss context." – Brandon Dixon, Co-founder, Ent.ai
The practical starting point this suggests for programs at different maturity levels:
Foundational (no behavioral layer yet): Start by mapping your highest-risk user populations – finance, HR, legal, engineering with production access – and documenting what "normal" looks like for each role. This baseline work is necessary regardless of what tooling you adopt.
Intermediate (EDR + CASB deployed): Identify the behavioral gaps between what your current stack can observe and what a risky action in each high-risk role actually looks like. The WhatsApp/Meta AI example is a useful test case – can your current stack catch it? If not, you have a documented gap to drive tooling requirements.
Advanced (evaluating next-generation behavioral detection): Evaluate platforms that provide real-time behavioral context at the endpoint – not just process telemetry or SaaS logs – with AI agent traffic visibility and cross-application behavioral correlation. Brandon's framing is that without the actual behavioral layer, even a data lake plus AI cannot reconstruct sufficient context to make informed decisions at the speed attacks now move.
Ent.ai – Brandon Dixon's endpoint behavioral intelligence platform
CISA KEV Catalog – Known Exploited Vulnerabilities list, including CVE-2026-1340
Podcast Episode
Cloud Security Podcast – Full Episode with Brandon Dixon: complete transcript and audio for this week's featured conversation
Question for you? (Reply to this email)
🤔 Can your current stack tell the difference between a user legitimately using an AI assistant and one leaking sensitive data through it – and if not, what's your first step to close that gap?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]
We would love to hear from you 📢 with a feature or topic request, or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe, and welcome to the new members of this newsletter community 🎉
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast.

