🚨 OpenClaw AI Agents Are Now Infostealer Targets: Using Open Source for Securing the Cloud-AI Stack!
This week: infostealers begin targeting AI agent credentials (OpenClaw), Palo Alto acquires Koi Security to define Agentic Endpoint Security, and Microsoft 365 Copilot's DLP bypass exposes critical governance gaps. Toni de la Fuente, creator of Prowler, joins Ashish Rajan to unpack the shared responsibility gap in AI workloads, MCP architecture risks, and how open-source security tooling must evolve to meet the cloud-AI convergence challenge.
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter topic: The Shared Responsibility Gap in AI Workloads: Why Cloud Security Assumptions Break at the LLM Layer (continue reading)
In case this is your 1st Cloud Security Newsletter, you are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who, like you, want to learn what’s new in Cloud Security each week from their industry peers, and who listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week’s Cloud Security Newsletter
We are at an inflection point. The same attack vectors that have haunted cloud security teams for a decade (misconfiguration, credential exposure, shadow usage, and blurry shared responsibility) are now being applied with surgical precision to AI workloads. This week's news cycle makes that unmistakably clear: from the first confirmed infostealer theft of an AI agent's cryptographic keys to Microsoft's Copilot silently bypassing DLP controls on confidential email for weeks.
To help make sense of what this means for practitioners building and securing AI systems on cloud infrastructure, this edition features Toni de la Fuente, founder and CEO of Prowler, one of the most widely deployed open-source cloud security platforms in the industry, with a decade of checks built around real-world misconfiguration patterns. In conversation with Cloud Security Podcast host Ashish Rajan, Toni delivers a grounded, practitioner-first framework for understanding where cloud security ends and AI security begins, and why that distinction matters more than ever. [Listen to the episode]
📰 TL;DR for Busy Readers
AI agents are credential stores: Infostealers now exfiltrate gateway tokens, cryptographic keys, and behavioral memory from tools like OpenClaw; treat agent configs like privileged secrets.
Agentic Endpoint Security is now a category: Palo Alto's ~$400M acquisition of Koi signals that every SOC must now inventory AI agents running on endpoints as privileged processes.
Copilot can silently nullify your DLP: M365 Copilot bypassed sensitivity labels on Sent Items and Drafts for weeks; audit your Purview logs for Jan 21–early Feb now.
The shared responsibility gap has expanded: Bedrock, Vertex, and Azure AI services inherit cloud's responsibility ambiguity and add new layers; infra, LLM configuration, and shadow AI all need explicit ownership.
📰 THIS WEEK'S TOP 5 SECURITY HEADLINES
Each story includes why it matters and what to do next — no vendor fluff.
1. Infostealers Begin Targeting AI Agents: First Confirmed Theft of OpenClaw Credentials and Cryptographic Keys
Hudson Rock disclosed on February 13 that a Vidar-variant infostealer successfully exfiltrated an OpenClaw AI agent's entire configuration environment including its gateway authentication token, public and private cryptographic keys, and soul.md behavioral memory files. The malware didn't need a purpose-built AI module: it used a broad file-grabbing routine targeting extensions like .openclaw. OpenClaw has over 200,000 GitHub stars and is increasingly integrated into professional and enterprise workflows.
Why it matters to you: This isn't a one-off curiosity; it's a proof-of-concept that the infostealer ecosystem will follow the value. Just as infostealer developers built dedicated modules for Chrome credentials and Telegram sessions, dedicated AI agent parsers are coming. The stolen gateway token in this case could allow an attacker to remotely impersonate the victim's client in authenticated API requests. CISOs should act now: inventory agentic AI tool usage enterprise-wide, enforce encrypted storage of all agent configuration files and tokens, restrict port exposure for locally-running agents, and ensure DLP tooling covers .openclaw, .json, and similar agent file paths.
Sources: The Hacker News | BleepingComputer | Infosecurity Magazine
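For teams starting the inventory recommended above, here is a minimal Python sketch of an endpoint sweep for AI agent configuration artifacts. The file patterns are assumptions drawn from the file types named in the report (.openclaw configs, soul.md memory files, gateway tokens), not a complete or authoritative list.

```python
"""Minimal sketch: sweep a user's home directory for AI agent configuration
files of the kind infostealers are now grabbing, and flag loose permissions.
The patterns below are assumptions based on this week's report, not a complete list."""
from pathlib import Path
import stat

SUSPECT_PATTERNS = ["*.openclaw", "soul.md", "*agent*token*", "*gateway*token*"]

def find_agent_artifacts(root: Path) -> list[dict]:
    findings = []
    for pattern in SUSPECT_PATTERNS:
        for path in root.rglob(pattern):
            if not path.is_file():
                continue
            mode = path.stat().st_mode
            findings.append({
                "path": str(path),
                "world_readable": bool(mode & stat.S_IROTH),  # secrets should never be world-readable
                "size_bytes": path.stat().st_size,
            })
    return findings

if __name__ == "__main__":
    for finding in find_agent_artifacts(Path.home()):
        print(finding)
```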
2. Palo Alto Networks to Acquire Koi Security: 'Agentic Endpoint' Emerges as a Formal Security Category (~$400M)
Palo Alto Networks announced on February 17, 2026 a definitive agreement to acquire Koi Security, a one-year-old Israeli startup already protecting 500,000 endpoints. Koi's platform gives enterprises visibility and control over AI agents, browser extensions, IDE plugins, MCP servers, and model artifacts that carry deep permissions but bypass traditional endpoint controls. Koi's capabilities will be folded into Prisma AIRS and Cortex XDR post-close.
Why it matters to you: This acquisition formally names a new product category and confirms what security architects have been quietly worried about: your endpoint perimeter now includes AI agents. Palo Alto's own announcement cited 135,000 exposed OpenClaw instances and 800+ malicious skills in its marketplace discovered within days of launch, alongside documentation of the first malicious MCP server in the wild. For defenders: treat AI agents running on your estate (browser copilots, desktop assistants, IDE plugins, local LLM tools) as privileged processes requiring the same conditional access and least-privilege controls you apply to cloud admin sessions.
3. Proofpoint Acquires Acuvity: Email Giant Bets on AI Governance and MCP Server Visibility
Proofpoint announced on February 12 the acquisition of Acuvity, whose platform provides unified visibility and enforcement across AI usage from endpoints and browsers to MCP servers, locally installed tools like OpenClaw and Ollama, and AI-powered workflows embedded in Microsoft 365 and other enterprise SaaS. Financial terms were not disclosed.
Why it matters to you: Email and collaboration platforms are where AI copilots acquire their richest enterprise context: contracts, calendar data, internal policy docs, financial records. That makes M365, Teams, and SharePoint the highest-value data path for AI-driven exfiltration and shadow AI risk. Proofpoint is positioning Acuvity as the control plane that spans human email behavior, sensitive data governance, and AI agent activity in a single policy layer. Alongside the Palo Alto/Koi deal, the market signal is loud: AI governance is an active buying priority now, not a 2027 roadmap item. Defenders should immediately define an AI usage policy covering approved models, permitted data classifications, required logging of prompts and responses, and guardrails around all SaaS connectors your agents are authorized to touch.
Sources: Proofpoint Press Release | CyberScoop | Help Net Security
4. Microsoft 365 Copilot Bug Bypassed DLP Controls, Summarizing Confidential Emails for Weeks (CW1226324)
Microsoft confirmed that a bug tracked as CW1226324 caused M365 Copilot to summarize confidential emails from users' Sent Items and Drafts folders from January 21 through early February, bypassing sensitivity labels explicitly configured to restrict automated tool access. The root cause was a code issue in the Copilot work tab that incorrectly picked up labeled items in those folders. Items in other folders were not affected. A server-side fix began rolling out in early February; no final remediation timeline has been provided and the number of impacted tenants has not been disclosed.
Why it matters to you: This is a governance stress test for every organization running Copilot. Server-side AI processing can nullify tenant controls without any action from administrators or end users, and Sent Items and Drafts routinely hold content subject to attorney-client privilege, regulatory protections, and contractual confidentiality. Immediate actions: verify your tenant received the CW1226324 remediation; review Purview Copilot activity logs for January 21 through early February; use Restricted Content Discovery to harden SharePoint exclusions; and temporarily restrict Copilot Chat for high-risk user groups (legal, HR, executive, finance) until you can validate DLP is functioning as expected. Strategically, this incident demands that AI feature releases be added to your change management process with a mandatory DLP regression test before rollout.
Sources: BleepingComputer | TechCrunch | The Register | Office 365 IT Pros
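To scope the Purview log review recommended above, one approach is to export the affected audit window and filter it for Copilot activity. The rough Python sketch below works over an exported CSV; the column names and the end of the "early February" window are assumptions to adjust against your own export and Microsoft's guidance.

```python
"""Rough sketch: filter an exported Microsoft Purview audit log CSV for
Copilot activity in the window flagged by CW1226324.
Column names and the window end date are assumptions; verify against your export."""
import csv
from datetime import datetime, timezone

WINDOW_START = datetime(2026, 1, 21, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 2, 10, tzinfo=timezone.utc)  # "early February" - adjust as needed

def copilot_events(csv_path: str):
    """Yield Copilot-related audit rows inside the incident window."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # Assumed columns: "CreationDate" (ISO 8601), "RecordType", "AuditData".
            when = datetime.fromisoformat(row["CreationDate"])
            if when.tzinfo is None:
                when = when.replace(tzinfo=timezone.utc)
            if WINDOW_START <= when <= WINDOW_END and "Copilot" in row.get("RecordType", ""):
                yield row

if __name__ == "__main__":
    for event in copilot_events("purview_export.csv"):
        print(event["CreationDate"], event.get("AuditData", "")[:120])
```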
🎯 Cloud Security Topic of the Week: The Shared Responsibility Gap in AI Workloads: Why Cloud Security Assumptions Break at the LLM Layer
If you spent the last decade internalizing AWS's shared responsibility model (your data, your configuration, your IAM; their hypervisor and their physical infrastructure), prepare to rethink those clean lines. Managed AI services like Amazon Bedrock, Google Vertex AI, and Azure AI introduce layered dependencies (and layered ambiguity) that make the cloud SRM look elegantly simple by comparison.
The central question practitioners must now answer isn't just "what does the vendor secure?"; it's "which of the five or six parties in my AI stack has responsibility for what, and does any of them actually know?" As Toni de la Fuente explains in this week's conversation, this isn't a theoretical gap. It's the exact same disorientation cloud teams experienced in the early days of RDS and Lambda, amplified across LLMs, agent runtimes, MCP servers, and the prompt-to-response pipeline that now touches your most sensitive enterprise data.
Featured Experts This Week 🎤
Toni de la Fuente - CEO, Prowler Security
Ashish Rajan - CISO | Co-Host of AI Security Podcast, Host of Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
MCP (Model Context Protocol): An open protocol that allows AI agents and language models to connect to external tools and data sources (databases, APIs, file systems) in a standardized way. MCP enables agents to "do things" beyond generating text: query live data, execute functions, and interact with enterprise systems. Because MCP servers often carry broad permissions, an insecure MCP implementation (e.g., one that speaks directly to a production database without RBAC) represents a significant attack surface.
Agentic Endpoint Security: A new security category, formalized this week by Palo Alto's Koi acquisition, focused on providing visibility and control over AI agents and related software (browser extensions, IDE plugins, MCP servers, local LLM tools) running on endpoints. These processes operate with high privileges and deep data access but bypass traditional endpoint security controls designed for human-interactive applications.
Shadow AI: Unauthorized or unmonitored use of AI tools within an organization, such as employees using consumer ChatGPT, DeepSeek, or other models to process enterprise data without IT or security awareness. Shadow AI creates data governance and regulatory exposure, particularly when users upload contracts, PII, or confidential internal documents to models that may use that data for training.
Promptfoo: An open-source LLM assessment framework that supports red-teaming, vulnerability scanning, and OWASP/ATLAS threat mapping for language models. Prowler has integrated Promptfoo to allow security teams to assess LLMs as part of their existing cloud security scanning workflows.
DLP (Data Loss Prevention): Security controls that detect and prevent unauthorized transmission or exposure of sensitive data. In M365, DLP relies on sensitivity labels to restrict automated tool access to protected content. This week's Copilot incident exposed a critical assumption gap: that server-side AI pipelines respect tenant-configured DLP policies. They may not.
This week's issue is sponsored by Push Security
Stop browser-based attacks with Push Security
Major breaches are increasingly originating in the browser, where traditional security controls can’t detect them. This is a conscious shift, with both criminal and nation-state actors adopting browser-native TTPs into their standard toolkit.
Push Security brings real-time detection and response into every browser — where today’s work and attacks actually happen. Push gives security teams visibility into modern threats, proactive control over user risk, and powerful telemetry to detect, investigate, and stop attacks fast.
Frictionless deployment. Instant protection.
💡Our Insights from this Practitioner 🔍
1. Cloud Security and AI Security Are Overlapping, Not Identical, and That Distinction Has Real Architectural Consequences
One of the most clarifying frameworks Toni offers is a simple structural decomposition: AI systems have two security domains, and they require different thinking. The first is the infrastructure of AI: the GPUs, storage, and cloud services (S3, SageMaker, Bedrock, Vertex) that host the data pipelines and model training workloads. This domain largely inherits cloud security controls, and practitioners with cloud backgrounds are well-equipped to address it. The second is the AI itself: the model configuration, the guardrails, the prompt injection controls, the behavioral parameters that determine what the model can do, what it knows, and what it will refuse.
"The same way that we distinguish between cloud infrastructure and application infrastructure, we can distinguish between AI infrastructure or AI configuration itself." Toni
The practical implication for security architects: when reviewing a new AI workload for production readiness, you need two distinct checklists. One that treats it like a cloud workload (IAM, network exposure, encryption, logging) and one that treats it like a model deployment (guardrail configuration, prompt injection testing, data classification of training inputs, model access control).
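One lightweight way to operationalize the dual checklist is to keep the two sets of checks as separate review gates. A minimal Python sketch follows; the items are drawn from the controls named above and are illustrative starting points, not an exhaustive standard.

```python
"""Sketch: two distinct production-readiness checklists for an AI workload,
as described above. Items are illustrative, not exhaustive."""

CLOUD_WORKLOAD_CHECKS = [
    "IAM scoped to least privilege",
    "No unintended network exposure",
    "Encryption at rest and in transit",
    "Logging and runtime monitoring enabled",
]

MODEL_DEPLOYMENT_CHECKS = [
    "Guardrail configuration reviewed",
    "Prompt injection testing performed",
    "Training/RAG data classified",
    "Model access control defined",
]

def readiness_gaps(completed: set[str]) -> dict[str, list[str]]:
    """Return the outstanding items on each checklist for a workload review."""
    return {
        "cloud_workload": [c for c in CLOUD_WORKLOAD_CHECKS if c not in completed],
        "model_deployment": [c for c in MODEL_DEPLOYMENT_CHECKS if c not in completed],
    }

if __name__ == "__main__":
    print(readiness_gaps({"Encryption at rest and in transit"}))
```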
2. The Shared Responsibility Gap Widens: Your AI Stack Now Has Fourth and Fifth Parties
Toni draws a direct parallel between the confusion early cloud adopters felt about services like RDS and what practitioners are now experiencing with Bedrock and similar managed AI services. The difference is that AI services add multiple new configuration surfaces (guardrails, model access policies, prompt filtering) that vendors are still publishing documentation for as they go.
"If a company like AWS launches AI service like Bedrock, you could expect that it's following all the best practices. But what about your side or what about your expectations? I mean, that is very blurry. Sometimes it's not even clear for anybody not for the customer, not for integrators, and of course not for the CSP." - Toni
Ashish adds a critical structural observation: "I used to be the first party with just me and things I manage on my virtual machine. Then I had a third party with me and Amazon managing my workload. Now I have a fourth party, fifth party as well because now my Bedrock is the access to my Claude or OpenAI."
This isn't an abstraction. Each party in that chain carries its own configuration surface, its own security posture, and its own data handling behaviors. Security teams that have mapped shared responsibility for IaaS are now working with dependency chains where the blast radius of a misconfiguration is significantly harder to contain.
Practical guidance: Map your AI stack completely. For every AI service in your environment, document which configuration options belong to your responsibility, which are vendor-managed, and which are simply undisclosed. Build your security controls around the API endpoints you can see and modify; that is your accountability boundary.
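A simple sketch of what that responsibility map can look like in practice is below. The service names are examples from this issue; the ownership entries are hypothetical placeholders for each organization to fill in and keep current.

```python
"""Sketch: a per-service responsibility map for an AI stack. Ownership entries
are hypothetical placeholders, not vendor statements."""

AI_STACK_RESPONSIBILITY = {
    "amazon-bedrock": {
        "customer_owned": ["IAM policies", "guardrail configuration", "prompt/response logging"],
        "vendor_managed": ["model hosting", "underlying infrastructure"],
        "undisclosed": ["internal prompt handling"],  # flag it and track it; don't guess
    },
    "m365-copilot": {
        "customer_owned": ["sensitivity labels", "DLP policy", "Restricted Content Discovery"],
        "vendor_managed": ["server-side AI pipeline"],
        "undisclosed": [],
    },
}

def unowned_surfaces(stack: dict) -> dict[str, list[str]]:
    """List every configuration surface nobody has explicitly claimed."""
    return {svc: cfg["undisclosed"] for svc, cfg in stack.items() if cfg["undisclosed"]}

if __name__ == "__main__":
    print(unowned_surfaces(AI_STACK_RESPONSIBILITY))
```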
3. MCP Security: The New Frontier Where Most AI Applications Are Getting It Wrong
One of the most operationally actionable insights in this conversation comes from Toni's direct experience building Prowler's own MCP implementation. The team observed a consistent pattern across community-built AI architectures: MCP servers deployed with direct database access, no RBAC enforcement, and no authentication layer beneath the tool interface. This is the AI equivalent of exposing an admin console directly to the internet without a jump server.
"We have seen many how insecure the default configuration or default AI architecture with MCP can be. Never compute an MCP talking to a database directly. Put your MCP on top of your RBAC, and the RBAC is below the API." Toni de la Fuente
The architectural pattern Prowler enforces in its own implementation: MCP → RBAC → API → Database. Each layer has a defined access control boundary. The MCP server has no direct datastore access; it operates within permissions granted through the RBAC layer. This pattern translates directly to any enterprise building production agentic applications.
For teams currently evaluating or deploying AI agents connected to internal systems: treat every MCP endpoint as a privileged interface. Apply the same conditional access controls you'd require for a cloud admin session: MFA, session logging, least-privilege permission scoping, and regular access reviews.
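Here is a minimal Python sketch of the MCP → RBAC → API → Database layering, with the tool handler sitting behind an RBAC check and calling an internal API rather than the database. The permission model, endpoint, and tool name are hypothetical; this is not Prowler's actual implementation.

```python
"""Sketch of the MCP -> RBAC -> API -> Database layering described above.
The roles, scopes, and internal API are hypothetical placeholders."""
import requests  # assumes an internal HTTPS API sits in front of the datastore

class PermissionDenied(Exception):
    pass

# Hypothetical role-to-scope mapping, enforced below the MCP tool layer
ROLE_SCOPES = {
    "analyst": {"findings:read"},
    "admin": {"findings:read", "findings:write"},
}

def rbac_check(role: str, scope: str) -> None:
    if scope not in ROLE_SCOPES.get(role, set()):
        raise PermissionDenied(f"role '{role}' lacks scope '{scope}'")

def mcp_tool_list_findings(caller_role: str, severity: str) -> list[dict]:
    """MCP tool handler: never touches the database directly.
    RBAC is checked first, then the request goes to the API, which owns the datastore."""
    rbac_check(caller_role, "findings:read")
    resp = requests.get(
        "https://internal-api.example.com/v1/findings",  # hypothetical internal endpoint
        params={"severity": severity},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```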
4. AI Doesn't Know Everything, and Assuming It Does Is a Security Risk
Toni describes an illuminating experiment the Prowler team ran with Claude Code: they asked it to identify any S3 buckets open to the public internet using only AWS credentials, a check Toni himself wrote nearly ten years ago as one of Prowler's first detections. Claude Code worked through several incorrect approaches before eventually identifying Prowler as the appropriate tool for the task.
"We think that AI is going to know everything magically. This is not magic. You have to measure between AI-created detections or using AI to take advantage of rule-based detections. At the end of the day, that is what we truly believe is needed AI around everything, but sometimes you have to tell the AI: no, this is the A, B, C that you have to take into account."
The security implication extends well beyond tooling: AI-driven detection systems deployed as autonomous agents carry real risk if their limitations aren't acknowledged and guarded against. The failure mode isn't dramatic; it's subtle. An agent that confidently executes the wrong check or scans the wrong region doesn't trigger an alert; it just leaves a gap. The corrective is to use AI to augment and accelerate rule-based detection engines, not replace them.
5. The SDLC Has Changed, and Security Teams Need to Catch Up
Perhaps the most strategically significant observation in this conversation is about how AI is changing who can deploy cloud infrastructure. Tools like Claude Code and Lovable enable developers with limited infrastructure experience to provision cloud resources, generate Terraform, configure databases, and deploy to production, all through conversational prompts.
Toni frames this directly: "When you create a new application with Claude Code, it's going to generate the Terraform code, ask you for credentials in AWS, and then you deploy something there with your storage, database, everything. Now what?"
The 'now what' is exactly the security gap. Applications are being generated and deployed faster than security review processes can follow. The answer is continuous scanning across both the infrastructure-as-code layer and the running cloud environment, and AI-powered tooling is now fast enough to make this feasible at developer velocity. The security team's role is to ensure those scans are running, that findings are routed back into the development pipeline, and that AI-generated infrastructure doesn't escape into production carrying the same misconfigurations that have been exploitable for years.
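A rough Python sketch of such a pipeline gate is below. It treats a non-zero scanner exit code as "findings present" and fails the build; Checkov stands in for the IaC scanner purely as an illustration, and the exact invocations for both tools should be verified against their current documentation.

```python
"""Rough sketch: a CI gate that scans both the IaC layer and the live cloud
account before AI-generated infrastructure reaches production.
Scanner choice and flags are illustrative assumptions; check the tools' docs."""
import subprocess
import sys

def run_scan(cmd: list[str]) -> bool:
    """Run a scanner and treat a non-zero exit code as 'findings present'."""
    print(f"+ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0

def main() -> int:
    ok = True
    # 1. Scan the Terraform the AI assistant generated (IaC layer);
    #    Checkov is used here only as an illustrative stand-in.
    ok &= run_scan(["checkov", "-d", "./terraform"])
    # 2. Scan the running cloud environment the code was deployed into.
    ok &= run_scan(["prowler", "aws"])
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```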
6. Three Security Controls Every AI-on-Cloud Workload Needs Before Production
When pressed by Ashish for a concise framework for practitioners moving AI-on-cloud workloads to production, Toni offered three pillars:
Secure the infrastructure: Treat the cloud workload running the AI application with the same rigor as any production cloud environment (IAM least privilege, network segmentation, encryption at rest and in transit, runtime monitoring). This is table stakes, and AI doesn't change it.
Secure the LLM: Know where your model lives. Is it a shared multi-tenant service? Does the vendor use your prompts and data to improve the model? Assess your LLM configuration using tools like Promptfoo and map findings to the OWASP Top 10 for LLMs and MITRE ATLAS.
Know who has access: Shadow AI is not a theoretical problem; it's happening across your organization today. Establish an AI usage policy, enforce it through technical controls (DLP, web filtering, identity governance), and build visibility into which AI tools are accessing which enterprise data through which connectors (a rough visibility sketch follows this list).
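As a starting point for that visibility, here is a rough Python sketch that tallies traffic to well-known consumer AI endpoints from an exported web proxy or secure gateway log. The domain list and column names are assumptions to adapt to your own tooling; a fuller control would also cover SaaS connectors and API traffic.

```python
"""Rough sketch: shadow-AI visibility from an exported proxy/gateway log (CSV).
The domain list and column names are assumptions; adapt to your own export."""
import csv
from collections import Counter

# Illustrative, non-exhaustive list of consumer AI endpoints to watch for
AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com", "chat.deepseek.com")

def shadow_ai_usage(csv_path: str) -> Counter:
    """Tally (user, AI host) pairs seen in the exported log."""
    usage: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # Assumed columns: "user" and "destination_host".
            host = row.get("destination_host", "")
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), hits in shadow_ai_usage("proxy_export.csv").most_common(20):
        print(f"{user:30} {host:25} {hits}")
```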
This framework maps cleanly onto this week's news. The Microsoft Copilot DLP failure is a failure of pillars two and three. The OpenClaw infostealer theft is a pillar-three failure: credentials for an AI agent treated like user data instead of privileged secrets. The Koi acquisition is the market responding to the pillar-one gap on the endpoint.
📚 Resources
CISA Known Exploited Vulnerabilities Catalog - Authoritative source for actively exploited CVEs requiring remediation
Anthropic's Model Context Protocol Documentation - Technical specifications and security considerations for MCP
Cloud Security Alliance: AI Security Guidance - Enterprise frameworks for AI governance
Cloud Security Podcast
Question for you (reply to this email):
🤔 Are you currently:
A) Blocking AI tools
B) Allowing everything
C) Trying to build decision-point governance
Reply and tell me where you are.
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]
We would love to hear from you 📢 for a feature or topic request, or if you would like to sponsor an edition of the Cloud Security Newsletter.
Thank you for continuing to subscribe and welcome to the new members in this newsletter community 💙
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast.


