🚨 AI Agents Are Now the Attack Surface & Building an AI Security Blueprint Before It's Too Late
This week's brief covers the Cline npm supply chain attack, which weaponised prompt injection against CI/CD pipelines, and BeyondTrust CVE-2026-1731, now confirmed in active ransomware campaigns across 11,000+ exposed instances. Alongside the Cisco State of AI Security 2026 report and Microsoft's new Security Dashboard for AI, Trend Micro's Shannon Murphy outlines a pragmatic AI security blueprint centred on data governance, agent identity, and cross-functional ownership for organisations at every stage of AI adoption. Key themes: agentic AI security, AI asset inventory, DSPM, supply chain risk, and enterprise AI governance frameworks.
Hello from the Cloud-verse!
This week's Cloud Security Newsletter topic: The AI Security Blueprint: A Maturity-Staged Framework for Enterprise AI Governance (continue reading)
In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who, like you, want to learn what's new in cloud security each week from industry peers, and who tune in to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this weekโs Cloud Security Newsletter
If this week had a single unifying signal it was this: the AI systems your organisation is deploying faster than ever are becoming the attack surface. From a weaponised npm package silently installing an autonomous AI agent on developer machines, to a Russian nation-state actor using legitimate SaaS webhooks to exfiltrate data without touching a single CVE, to ransomware operators now confirmed exploiting a CVSS 9.9 pre-auth RCE in one of the enterprise's most privileged remote access tools, the threat actors are not waiting for your AI governance programme to catch up.
This week's guest, Shannon Murphy, Senior Researcher and AI Security Strategist at Trend Micro, has spent the last five years working directly with CISOs, CTOs, and cloud security architects on exactly this problem. In a wide-ranging conversation with Cloud Security Podcast host Ashish Rajan, Shannon lays out a clear-eyed AI security blueprint grounded not in theory but in the patterns she observes across enterprise field engagements, covering data governance, agent identity, shift-left for AI, and how to build a cross-functional governance committee that actually holds.
We also cover the Cisco State of AI Security 2026 report revealing that 71% of enterprises are deploying agentic AI they cannot secure, and Microsoft's new Security Dashboard for AI now in public preview. The news this week is not background noise; it is a live demonstration of every risk Shannon describes. [Listen to the episode]
🎯 Two Actions to Take This Week
👉 Patch or isolate BeyondTrust immediately
👉 Audit every AI agent in CI/CD and restrict token scope
The AI readiness gap is no longer theoretical.
It's operational risk.
📰 TL;DR for Busy Readers
BeyondTrust CVE-2026-1731 (CVSS 9.9) is confirmed
Patch to RS 25.3.2 / PRA 25.1.1 immediately or isolate from internet exposure.
Cline npm supply chain attack
Prompt injection used to steal publish credentials.
→ Enforce a 48-hour npm version hold.
→ Audit AI agent permissions in CI/CD.
Cisco's 2026 AI Security report:
83% deploying agentic AI. Only 29% ready.
→ Treat the readiness gap as a funded backlog item.
Microsoft's Security Dashboard for AI (public preview)
First unified AI asset inventory across Defender, Entra, and Purview.
→ Enable this week and export your first AI asset register.
📰 THIS WEEK'S TOP 4 SECURITY HEADLINES
Each story includes why it matters and what to do next. No vendor fluff.
1. BeyondTrust CVE-2026-1731 (CVSS 9.9)
What Happened: What began as a critical disclosure on February 6 escalated this week into confirmed ransomware exploitation across multiple sectors. BeyondTrust's own telemetry indicates active exploitation started January 31, a full week before public disclosure, making CVE-2026-1731 a zero-day in retrospect. The flaw is an OS command injection vulnerability in the thin-scc-wrapper component of BeyondTrust Remote Support (RS) and Privileged Remote Access (PRA), exposed via WebSocket and exploitable without authentication. A public PoC dropped February 10; GreyNoise observed mass scanning within 24 hours. CISA added it to the KEV catalog on February 13 with a 72-hour remediation mandate for federal agencies and updated the KEV entry on February 19 to activate the ransomware exploitation flag.
Palo Alto Networks Unit 42 confirmed active exploitation this week across finance, legal, healthcare, higher education, and retail in the US, France, Germany, Australia, and Canada. Observed post-exploitation activity includes VShell and SparkRAT deployment, web shell installation, PostgreSQL database exfiltration, and lateral movement.
Why it matters to you: BeyondTrust RS and PRA are privileged access tools by design; they carry SYSTEM-level authority over every managed endpoint. An unauthenticated RCE on these appliances is effectively a master key to your entire managed estate. With 11,000+ internet-exposed instances confirmed and ransomware actors now actively pre-positioning, treat this as an active incident response situation, not a patch management queue item.
Sources: BleepingComputer · SecurityWeek · SC Media
2. Cline CLI npm Supply Chain Attack: Prompt Injection Weaponised to Steal Publish Credentials and Deploy OpenClaw
What happened: On February 17, a threat actor used a stolen npm publish token to release a poisoned update to Cline CLI, a popular AI-powered coding assistant with approximately 90,000 weekly npm downloads. A single postinstall script silently ran npm install -g openclaw@latest on any machine installing the package. The malicious version was live for approximately eight hours. StepSecurity estimated roughly 4,000 downloads of the compromised version.
What makes this attack structurally significant is the initial access vector: security researcher Adnan Khan had disclosed on February 9 that Cline's Claude-powered GitHub issue-triage workflow was vulnerable to prompt injection. A crafted GitHub issue could cause the AI agent to execute a malicious payload, poison the GitHub Actions cache, and pivot to steal the npm publish token. Cline patched the triage workflow within 30 minutes but rotated the wrong token. Eight days later, the still-valid token was used to publish the malicious package. The payload, OpenClaw, is a legitimate open-source AI agent with broad system access (full disk, terminal, persistent WebSocket daemon) and a known critical CVE (CVE-2026-25253, CVSS 8.8) in versions prior to 2026.1.29, allowing unauthenticated operator access.
Why it matters to you: This attack introduces a materially new threat model: prompt injection against AI agents in CI/CD pipelines as an initial access technique for credential theft. The entry point was not a phishing email or a code vulnerability; it was a GitHub issue. Any organisation using LLM-powered bots to automate repository triage, PR review, or release workflows with access to production secrets is now a viable target for this attack pattern. This connects directly to Shannon Murphy's warning that agentic AI is creating new blind spots that existing DLP and AppSec tooling cannot cover.
👉 If your AI agent can push code, it must be governed like a privileged identity.
Sources: The Hacker News · Dark Reading · Snyk Deep-Dive · StepSecurity Detection Report
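The two mitigations above (a 48-hour version hold and scrutiny of install-time lifecycle scripts) are mechanical enough to automate. The sketch below is a minimal, hypothetical illustration in Python, not a real tool: it checks whether a package version has been public long enough to install, and flags any package.json lifecycle scripts that npm runs automatically at install time, the hook class abused in the Cline incident.

```python
from datetime import datetime, timedelta, timezone

# Lifecycle hooks npm runs automatically at install time; postinstall is
# the hook abused in the Cline incident.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def passes_version_hold(published_at: datetime, now: datetime,
                        hold_hours: int = 48) -> bool:
    """True only if the version has been public for the full hold window.

    A short-lived malicious release (Cline's was live roughly 8 hours)
    never gets installed under this policy.
    """
    return now - published_at >= timedelta(hours=hold_hours)

def risky_install_scripts(manifest: dict) -> dict:
    """Return lifecycle scripts in a package.json-style manifest that run
    automatically when the package is installed."""
    return {name: cmd for name, cmd in manifest.get("scripts", {}).items()
            if name in INSTALL_HOOKS}
```

In practice the publish timestamp would come from the npm registry metadata for each version; npm's own `--before <date>` install flag and `--ignore-scripts` setting give you coarser versions of the same two controls without custom code.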
3. Cisco "State of AI Security 2026": 71% of Enterprises Are Deploying Agentic AI They Cannot Secure
What happened: Cisco's AI Threat Intelligence & Security Research team released its flagship annual report on February 19, with Help Net Security publishing a practitioner-focused analysis on February 23. The report documents three compounding risks: rapid agentic AI deployment outpacing security readiness; a fragile AI supply chain with documented tool poisoning and MCP ecosystem vulnerabilities; and adversarial techniques, particularly prompt injection and jailbreaks, maturing from research concepts into documented real-world exploits. Key statistic: 83% of surveyed organisations plan to deploy agentic AI into business functions; only 29% feel ready to secure those deployments.
Documented incidents include a GitHub MCP server compromise in which a malicious issue injected hidden instructions that hijacked an agent and exfiltrated private repository data. The report also covers a fake npm package mimicking an email integration that silently forwarded outbound messages to attacker infrastructure, a pattern strikingly consistent with the Cline incident reported in the same week. Cisco's researchers demonstrated that open-weight models remain susceptible to multi-turn jailbreaks at significantly higher success rates than single-turn attacks.
Why it matters to you: The report crystallises what security leaders are observing operationally: AI agents are being granted authority to execute tasks, query databases, modify code, and interact with external services, often without the controls that would be non-negotiable for a human performing the same actions. The agent-to-agent trust problem is particularly acute. For cloud security teams, the MCP attack surface deserves immediate attention; Cisco has released open-source scanners for MCP, A2A, and agentic skill files as companion tooling. The 71% readiness gap is not a statistic to present to leadership; it is a project backlog. This data is the empirical foundation for every strategic recommendation Shannon Murphy makes in this week's feature.
👉 Use this data in your next board update and tie it to funded remediation.
Sources: Cisco AI Security Blog (Primary) · Help Net Security · Cisco Report
4. Microsoft Launches Security Dashboard for AI in Public Preview - Unified CISO Visibility Across the Enterprise AI Estate
What happened: Microsoft released the Security Dashboard for AI into public preview on February 16, available across enterprise tenants with eligible Defender, Entra, and Purview subscriptions at no additional cost. Accessible at ai.security.microsoft.com, the dashboard aggregates real-time risk signals from all three platforms into a single governance interface designed for CISOs and AI risk leaders. Core capabilities include: a comprehensive AI asset inventory spanning Microsoft 365 Copilot agents, Copilot Studio agents, Azure AI Foundry deployments, MCP servers, and third-party AI applications including OpenAI, Google Gemini, and ChatGPT tenant integrations; an AI risk scorecard with posture drift tracking; correlated risk views linking Purview data sensitivity signals with Entra identity context and Defender threat alerts; and delegated remediation actions. Security Copilot is embedded for natural-language investigation.
Why it matters to you: This announcement directly addresses the shadow AI problem Shannon Murphy identifies as the critical first milestone in any AI security programme: you cannot govern what you cannot see. The dashboard's AI inventory discovery function is the operationalisation of that principle and for organisations already invested in the Microsoft security stack, it is immediately actionable. The dashboard also directly addresses the data leakage risk Cisco independently flags in this week's AI Security report: oversharing detection in Purview integration targets agents with overly broad data permissions, one of the most prevalent enterprise AI exposure patterns observed in 2025. For organisations not on the Microsoft stack, this announcement raises the competitive bar for what a mature CNAPP or CSPM vendor must now offer in AI security posture management.
Enable it. Export inventory. Start governance.
🎯 Cloud Security Topic of the Week:
The AI Security Blueprint: A Maturity-Staged Framework for Enterprise AI Governance
One of the clearest themes emerging from Shannon Murphy's conversation is that most organisations are attempting to govern AI deployments using frameworks and tool stacks designed for a deterministic, pre-AI world, and that gap is not theoretical. It is showing up in the Cisco report's 71% readiness gap, in the Cline supply chain attack, and in the data leakage scenarios Shannon describes from real enterprise field engagements.
The AI security blueprint she outlines is structured around three maturity stages (adopter, builder, and scaler), each with distinct risk profiles and corresponding security requirements. What makes this framework practically valuable is that Shannon explicitly states the underlying philosophy remains consistent across all three stages: discover, assess, prioritise, mitigate. The level of security capability scales with the attack surface; the methodology does not change.
Stage 1 - Adopter: Organisations in productivity-gain mode face their highest risk from data governance failures and over-permissioned AI access.
Primary Risk: Shadow AI & Data Exposure
Objective: Real-time AI asset visibility
Deliverable:
Continuously updated AI inventory, not a spreadsheet.
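To make the "continuously updated, not a spreadsheet" distinction concrete, here is a minimal Python sketch of what such a register could track. The field names and the 24-hour staleness window are illustrative assumptions, not a prescribed schema: the point is that every entry carries a last-observed timestamp, so anything discovery has not re-confirmed recently is flagged as unknown rather than silently trusted.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AIAsset:
    name: str            # e.g. "m365-copilot", "issue-triage-agent"
    kind: str            # "copilot" | "agent" | "model" | "mcp-server"
    owner: str           # the accountable team; never left blank
    last_seen: datetime  # when discovery last observed it live

def stale_entries(inventory: list[AIAsset], now: datetime,
                  max_age_hours: int = 24) -> list[AIAsset]:
    """Entries not re-observed inside the window are effectively unknown;
    flagging them is what keeps the register continuously updated rather
    than a point-in-time spreadsheet."""
    cutoff = now - timedelta(hours=max_age_hours)
    return [a for a in inventory if a.last_seen < cutoff]
```

A real implementation would populate `last_seen` from discovery tooling (CASB logs, the Microsoft dashboard export covered above, or an internal scanner), but the staleness check is the part a spreadsheet can never give you.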
Stage 2 - Builder: Development teams building internal AI tools or going to market with AI-powered products face all of the adopter risks plus the application security and supply chain risks illustrated by the Cline attack this week.
Primary Risk: Supply chain & application security
Add:
AI-specific vulnerability scanning
Container security
Runtime monitoring
Agent identity governance
Shift-left is necessary.
Runtime monitoring is mandatory.
Stage 3 - Scaler: Organisations investing in AI factories and enterprise-wide automation are operating in what Shannon describes as an inferencing security paradigm: continuous monitoring of live AI systems for behavioural drift, adversarial manipulation, and agent-to-agent trust failures.
Primary Risk: Inferencing Security & Agent-to-Agent Trust
Objective: Treat agents as identities:
Scoped permissions
Short-lived credentials
Access governance
Continuous behavioural monitoring
DSPM becomes foundational here.
Featured Experts This Week 🎤
Shannon Murphy - Senior Researcher & AI Security Strategist | Trend Micro
Ashish Rajan - CISO | Co-Host of AI Security Podcast, Host of Cloud Security Podcast
Definitions and Core Concepts 📖
Before diving into our insights, let's clarify some key terms:
Agentic AI
Autonomous systems executing multi-step tasks via tool calls.
Prompt Injection
Malicious instructions embedded in data processed by AI agents.
Model Context Protocol (MCP)
Standard defining how agents discover and call tools.
This week's issue is sponsored by AI Security Podcast
💡 Our Insights from this Practitioner 👇
How to Build an AI Security Program from Scratch. (Full Episode here)
1. Why 95% of AI Projects Fail and What Security Owns in the Answer
Shannon opens with a striking data point from an MIT study that circulated widely in the security community: 95% of AI projects are failing. Her diagnosis is direct:
"Any security leader who attempts to drive an AI governance strategy in a silo will fail. 95% of AI projects are failing because we're not having all the stakeholders at the table." - Shannon Murphy, Trend Micro
The failure pattern she describes is recognisable to anyone who has watched a well-intentioned AI governance initiative stall: business units move fast under top-down pressure to adopt AI, security is brought in late or not at all, and the resulting programme has policy gaps that surface as incidents. The structural fix she advocates is a cross-functional governance committee legal, compliance, engineering, and security with board-level sponsorship that distributes risk ownership rather than concentrating it in the security team alone.
For cloud security leaders, this is both a risk management and a career positioning insight. Shannon notes that AI is creating the conditions for security to have a genuine seat at strategic decision-making tables for the first time because business leaders now understand they have knowledge gaps that require security intelligence to navigate. The opportunity to shift from reactive incident responder to proactive governance partner is real, but it requires showing up with scenario-based risk framing ("here is what a data exfiltration incident looks like in our AI environment and here is what it costs") rather than technical jargon.
2. Your Existing Stack Has a Blind Spot the Size of Your AI Deployment
One of the most practically important points Shannon makes concerns the false sense of coverage that a mature security stack can create when AI enters the picture:
"AI is embedded in every single SaaS application and every single tool that your team is using. You need to know what people are using, you need to know what content is going into that experience and what content is going out." - Shannon Murphy, Trend Micro
She illustrates the blind spot with a deceptively simple example: an employee asks their corporate AI copilot for a colleague's salary. Traditional DLP, tuned to flag sensitive data leaving via email or file transfer, has no visibility into this interaction. The data exposure happens entirely within what the organisation considers a sanctioned, secured application and no alert is generated. Scale this to thousands of employees across dozens of AI-enabled SaaS tools, and the aggregate data risk is substantial.
Her prescription is not to abandon the existing stack but to recognise it as table stakes that now requires an AI-specific visibility layer on top. The key principle: context is everything. Risk that exists in isolation (an AI query here, a model access there) becomes actionable and prioritisable only when it can be seen in relation to the identity making the request, the data being accessed, and the threat signals already in your environment. This is precisely what Microsoft's Security Dashboard for AI, announced this week, attempts to operationalise.
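The "context is everything" principle can be shown with a toy triage function. Everything here is an illustrative assumption (the 1-4 scales, the weights, the thresholds are invented for this sketch, not from Trend Micro or Microsoft): the point is that the same AI query lands in a different queue depending on who made it, what data it touched, and whether threat signals are already firing.

```python
def prioritise(identity_privilege: int, data_sensitivity: int,
               active_threat_signal: bool) -> str:
    """Toy risk correlation for an AI interaction event.

    identity_privilege and data_sensitivity are on an assumed 1-4 scale
    (1 = standard user / public data, 4 = admin / restricted data).
    An event is only escalated when the three contexts line up.
    """
    score = identity_privilege * data_sensitivity
    if active_threat_signal:
        score *= 2          # live threat context doubles the urgency
    if score >= 16:
        return "act-now"
    if score >= 6:
        return "review"
    return "monitor"
```

The salary-query example Shannon gives fits this shape: a standard-privilege identity touching highly sensitive HR data scores into "review" even with no threat signal, whereas a DLP rule watching only email and file transfer scores it at zero.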
3. The Three-Milestone AI Security Roadmap
Shannon provides the clearest practical roadmap in the conversation for how a security leader should sequence their AI security programme. It maps directly to the three building blocks her blueprint prioritises:
Milestone 1: Real-Time AI Asset Visibility: "Shadow AI is absolutely massive and you need to be able to wrap your arms around it." This is the non-negotiable foundation. Shannon is explicit that a static inventory is insufficient: "What we have today in place is not what we have tomorrow, literally tomorrow." The first deliverable is a continuously updated, real-time inventory of every AI application, agent, model, and integration in your environment. Tools exist today to make this tractable; the question is whether the programme has been prioritised.
Milestone 2: Identity and Access Governance for AI: Once you have inventory, the next question is access. Who and what gets access to which data and tools? Shannon's recommendation to treat agents as identities is strategically important: "Maybe we wanna start treating them a little bit like identities. Taking an identity risk management approach to those agents." The tooling for identity governance is mature; applying it systematically to AI agents requires discipline and the right agent inventory to work from.
Milestone 3: Data Governance and Provenance: Shannon identifies this as "your biggest project" and the one most security teams are furthest behind on. DSPM (understanding where your data lives, who has access to it in AI contexts, and what happens to it during model fine-tuning or inference) is the pillar she describes as "the central bingo card conversation in every CISO engagement over the last two years." For organisations fine-tuning open-weight models on proprietary data, this is particularly acute: the data used for fine-tuning must be governed with the same rigour as production data.
4. Shift Left for AI - More Critical Than Ever, and More Incomplete
Ashish Rajan asks Shannon directly whether shift-left DevSecOps is still relevant in an AI-first world. Her answer is nuanced and worth the full framing:
"Shift left is more needed than ever before. But it is what is going to keep you out of trouble from a quality perspective and when we layer in things like an AI scanner for vulnerability, that's what's going to keep you out of trouble even when we're live in runtime." - Shannon Murphy, Trend Micro
The key addition she makes is that shift-left for AI does not end at the pipeline gate. Unlike traditional deterministic software, AI applications continue to change after deployment through model drift, fine-tuning updates, and the inherent non-determinism of LLM outputs. This means that runtime monitoring (for hallucination, for adversarial prompt injection, for novel zero-day vulnerabilities in live inference stacks) is a distinct and mandatory complement to pre-deployment scanning. The Cline attack this week is a live demonstration: a supply chain compromise in the pre-deployment phase that delivered a runtime-persistent agent with ongoing system access. Both vectors required coverage; neither alone was sufficient.
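As a minimal flavour of what a runtime check on untrusted agent input might look like, here is a deliberately naive Python sketch. The phrase list is invented for illustration; production prompt-injection defences rely on trained classifiers, canary tokens, and output filtering rather than a handful of regexes, but even this crude screen would have flagged the kind of GitHub issue used against Cline's triage workflow.

```python
import re

# Illustrative patterns only; real detectors are not regex lists.
SUSPECT_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"\b(run|execute)\b.*\b(curl|wget|npm)\b",
)]

def flag_untrusted_input(text: str) -> list:
    """Screen untrusted content (a GitHub issue body, a web page, a tool
    result) before it enters an agent's context window; return the
    patterns that matched so the event can be logged and triaged."""
    return [p.pattern for p in SUSPECT_PATTERNS if p.search(text)]
```

The design point is where the check sits: it runs at inference time on data flowing into the agent, which is exactly the coverage that pipeline-gate scanning alone cannot provide.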
5. The Model Card as Enterprise Trust Infrastructure
Shannon surfaces an emerging practice that deserves broader adoption among organisations building AI-powered products: the model card used as customer-facing transparency documentation.
"Some organizations are doing something really great using a model card that I call a license to thrive where they show here are the models we use, here are the safety precautions we take, this is how we use a Zero Trust approach to protect your data." She expects standardisation of this practice to accelerate through 2026 as regulated industries (healthcare, financial services) begin demanding it from AI-powered vendors as a due diligence requirement.
For security leaders at organisations building or evaluating AI-powered products, the model card framework serves a dual purpose: externally, it builds customer trust without disclosing IP; internally, it creates the documentation discipline that forces clarity about which models are in production, what data they have been trained on, and what controls are in place. That internal clarity is also the foundation of a defensible DSPM programme.
Trend Micro AI Security Blueprint Whitepaper: Framework for adopters, builders, and scalers
Cloud Security Podcast
Question for you (reply to this email):
🤔 If your AI agent can read GitHub issues and push production code… does it have more access than your junior engineer?
Because in many enterprises, it does.
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]
We would love to hear from you 📢 for a feature or topic request, or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe, and welcome to the new members of this newsletter community 🙏
Peace!
Was this forwarded to you? You can Sign up here, to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast AI Security Podcast


