🚨 Salesforce & Microsoft Hit by Prompt Injection (CVSS 9+): Red Teamers Expose AI Reality
The security industry has reached an inflection point: AI security is no longer theoretical. This week covers the maturation of AI security threats in production environments, featuring insights from offensive security leaders Jason Haddix (Arcanum Information Security), Daniel Miessler (Unsupervised Learning), and Caleb Sima (AI Security Podcast) on prompt injection attacks, the current state of automated vulnerability discovery, and the strategic implications of AI agents accessing enterprise systems - plus critical zero-days in Cisco ASA/FTD devices and major industry consolidation.
Hello from the Cloud-verse!
This week's Cloud Security Newsletter topic - The Reality Check: What AI Can Actually Do in Offensive Security (And What It Can't). (continue reading)
In case this is your 1st Cloud Security Newsletter - you are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more - readers who, like you, want to learn what's new in Cloud Security each week from their industry peers, and many of whom listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week’s Cloud Security Newsletter
This week's newsletter examines how prompt injection has evolved from academic research to active exploitation in enterprise platforms like Salesforce Agentforce and Microsoft 365 Copilot, the reality of AI-powered offensive security capabilities demonstrated in competitions like DARPA's AIxCC, and what cloud security leaders must understand about defending increasingly complex AI-integrated systems. Meanwhile, CISA's emergency directive on Cisco firewall zero-days and strategic acquisitions in the AI security space signal the urgency of modernizing both traditional and AI-era defenses.
📰 TL;DR for Busy Readers
🚨 Salesforce Agentforce (CVSS 9.4) & Microsoft Copilot (CVSS 9.3): Prompt injection is now real-world exploitation.
AI red teams show LLMs can already solve CTFs & find vulns, but can’t handle complex business logic.
Cisco ASA/FTD zero-days enable ROM-level persistence; CISA mandates immediate patching.
Microsoft launches Security Store for Copilot agents.
UK’s JLR cyberattack halts manufacturing, triggers £1.5B loan guarantee.
👉 Takeaway for You: Treat AI agents as privileged users, not SaaS add-ons.
📰 THIS WEEK'S SECURITY HEADLINES
1 - 🔴 CISA Issues Emergency Directive for Cisco ASA Zero-Days Under Active Exploitation
Cisco disclosed two actively exploited zero-day vulnerabilities (CVE-2025-20333 and CVE-2025-20362) affecting Cisco Secure Firewall ASA and FTD software on September 25, 2025. CISA issued Emergency Directive 25-03 requiring federal agencies to account for all affected devices, collect forensic data, and upgrade systems by September 26, 2025. The campaign involves exploiting zero-days to gain unauthenticated remote code execution and manipulating ROM to persist through reboots and system upgrades.
Why This Matters: This widespread campaign demonstrates sophisticated adversaries' ability to achieve persistence at the firmware level, bypassing traditional security controls. For cloud security teams, the ROM manipulation capability is particularly concerning as it enables attackers to maintain access even after patches are applied. Organizations using Cisco ASA/FTD devices at cloud ingress points or for site-to-site VPN connections to cloud environments face potential long-term compromise. The emergency directive's 24-hour compliance window reflects the severity and active exploitation.
2 - 🚨 Critical Prompt Injection Vulnerabilities Expose Salesforce and Microsoft AI Platforms
Cybersecurity researchers disclosed ForcedLeak (CVSS 9.4), a critical vulnerability in Salesforce Agentforce that allows attackers to exfiltrate sensitive CRM data through indirect prompt injection attacks. The vulnerability was discovered on July 28, 2025, with Salesforce implementing Trusted URLs Enforcement on September 8, 2025. Additionally, Microsoft patched CVE-2025-32711 (dubbed EchoLeak) affecting Microsoft 365 Copilot in June - a CVSS 9.3 AI command injection vulnerability that could allow attackers to steal sensitive data over a network.
Why This Matters: These vulnerabilities demonstrate that prompt injection has transitioned from theoretical research to real-world exploitation in enterprise environments. Unlike traditional web vulnerabilities, prompt injection attacks exploit the fundamental architecture of how LLMs process instructions and data together, making them extremely difficult to prevent. As our featured expert Jason Haddix explains: "The LLM becomes a delivery system to attack the ecosystem. We call it attacking the ecosystem. And I just see no one talking about it right now." Cloud security teams deploying AI agents with access to sensitive systems - CRM platforms, databases, email, or internal documentation - face a vastly expanded attack surface.
3 - 🛡️ Microsoft Launches Security Store with Agentic Security Copilot Updates
Microsoft unveiled a Security Store to procure security SaaS and customizable Security Copilot agents, integrated with Defender, Sentinel, Entra, and Purview. The announcement highlights new AI-era controls including task adherence, PII guardrails, and prompt-shielding capabilities designed to address emerging risks in agentic AI deployments.
Why This Matters: Centralized procurement plus agentic AI controls could accelerate security tool rollouts, but they also introduce new supply-chain risks around marketplace vetting and agent permission scopes. Copilot agents interacting with tenants heighten prompt-injection and data-loss risk if guardrails are misconfigured. Cloud security teams must treat Copilot agents like applications: enforce app consent policies, granular scopes, and monitored egress. The introduction of these controls validates concerns about AI security risks while providing a framework for governance. Organizations should require vendor SBOMs and data-handling attestations for Store apps, red-team agents for prompt-injection vulnerabilities before production, and map new telemetry to Sentinel analytics.
Sources: The Verge, Microsoft Security Blog
4 - 🏭 UK Critical Manufacturing: JLR Cyberattack Triggers £1.5B Government Loan Guarantee
After a late-August cyberattack halted Jaguar Land Rover (JLR) production for weeks, the UK government announced a £1.5 billion loan guarantee on September 27, 2025, to stabilize the auto supply chain. JLR is now preparing a controlled, phased restart of operations.
Why This Matters: This incident demonstrates real-world macro-scale risk from OT/IT ransomware: national supply-chain disruption, emergency public financing, and cascading vendor insolvency risks. The government intervention underscores how critical infrastructure cyberattacks now require sovereign-level economic response. For cloud security teams managing OT/plant-adjacent enterprises or supply chain integrations, this validates the need for robust network segmentation, immutable backups, and incident response tabletop exercises that include finance and treasury scenarios. Organizations should validate cyber insurance clauses for business interruption and simulate extended ERP/PLM outages that might drive logistics cloud failovers.
Sources: Reuters, SecurityWeek
5 - 📊 Google Patches Gemini AI Vulnerabilities Enabling Data Theft
Cybersecurity researchers disclosed three patched vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft, including search-injection attacks on the Search Personalization Model, log-to-prompt injection against Gemini Cloud Assist, and exfiltration via the Gemini Browsing Tool. The vulnerabilities, collectively named the "Gemini Trifecta" by Tenable, have been patched by Google.
Why This Matters: These vulnerabilities highlight the security challenges of integrating AI assistants into cloud platforms and productivity tools. Gemini Cloud Assist, designed to help users manage cloud resources and troubleshoot issues, could have been exploited to compromise cloud environments through manipulated logs - a particularly concerning vector, as many organizations are implementing AI-powered cloud management tools.
Source: The Hacker News
🎯 Cloud Security Topic of the Week:
The Reality Check: What AI Can Actually Do in Offensive Security (And What It Can't)
As AI security tools flood the market and vendors promise autonomous penetration testing, our featured experts provide a critical reality check on current capabilities versus hype. The discussion reveals a nuanced landscape where AI excels at certain tasks while remaining fundamentally limited in others - insights every cloud security leader needs when evaluating AI-powered security tools or defending against AI-enabled attacks.
Featured Experts This Week 🎤
Jason Haddix - Founder, Arcanum Information Security
Daniel Miessler - Founder, Unsupervised Learning
Caleb Sima - Builder, WhiteRabbit, Co-Host AI Security Podcast
Ashish Rajan - CISO | Co-Host AI Security Podcast, Host of Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
Prompt Injection: A vulnerability in Large Language Models where malicious instructions embedded in user input cause the model to bypass safety controls or execute unintended actions. Unlike SQL injection, prompt injection exploits the non-deterministic nature of LLMs, making it extremely difficult to prevent completely.
MCP (Model Context Protocol): A standardized protocol introduced by Anthropic for enabling AI systems to interact with external tools and data sources. MCP servers allow AI agents to access databases, APIs, and other resources in a structured way, but also expand the attack surface significantly.
Agentic AI: AI systems that can autonomously plan, make decisions, and execute multi-step workflows without constant human guidance. These systems represent a paradigm shift from traditional prompt-response models to autonomous agents that can interact with enterprise systems.
RAG (Retrieval Augmented Generation): A technique that provides LLMs with additional context by retrieving relevant information from external knowledge bases before generating responses. While RAG improves accuracy, it doesn't solve prompt injection vulnerabilities.
Context Engineering: The practice of carefully structuring and organizing information provided to AI systems to maximize output quality. Our experts emphasize this is currently more important than model selection for achieving reliable results.
Scaffolding: The architecture and integration layers that connect AI models to tools, data sources, and workflows. Daniel Miessler notes: "The intelligence of the model and the intelligence of the system are like two separate things. I believe the intelligence of the system is likely to win that competition."
💡Our Insights from these Practitioners 🔍
The Prompt Injection Problem Isn't Going Away - It's Getting Worse
One of the most sobering revelations from this week's discussion is the consensus that prompt injection - ranked #1 in OWASP's LLM Top 10 - has proven to be a more persistent and severe problem than anticipated. Jason Haddix recounts a revealing exchange at an OpenAI security summit where Sam Altman was asked about his prediction from five years ago that prompt injection would be solved:
"We had about an hour with Sam Altman and Dan asked him like, Hey, you know, five years ago you made this statement that prompt injection would be solved in future models of youth. Still think that. And he was like, really? Yeah."
The fundamental issue is architectural: LLMs are designed to provide helpful responses, not to maintain security boundaries. Daniel Miessler explains: "They're literally designed to just give you answers. Like that's, and they're non-deterministic. Like it's not really designed to have barriers. It's designed to do the opposite."
For cloud security teams, this has profound implications. Organizations implementing AI agents with access to sensitive systems - whether Salesforce CRM, internal databases, or cloud infrastructure - must accept that prompt injection is an inherent risk that cannot be completely eliminated through guardrails or filters. Each additional control layer slows inference time and reduces accuracy, creating a fundamental trade-off between security and functionality.
What This Means for Your Organization: Treat AI agents like privileged users. Implement strict scoping for API access (read-only where possible), monitor all AI-initiated actions, and maintain human-in-the-loop workflows for high-stakes operations. Don't rely solely on prompt firewalls or classifiers; they can be bypassed, and they introduce performance penalties.
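To make the "privileged user" framing concrete, here is a minimal Python sketch of least-privilege tool scoping and human-in-the-loop gating for an agent. It assumes a custom framework where tools are registered explicitly; the class and tool names (ToolPolicy, crm_search, export_contacts) are illustrative, not taken from any particular product.

```python
# Minimal sketch: scope an AI agent's tools like a least-privilege user.
# All names here are illustrative assumptions, not a specific framework's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolPolicy:
    read_only: bool = True                                  # default-deny writes
    requires_human_approval: set[str] = field(default_factory=set)

class ScopedAgentTools:
    def __init__(self, policy: ToolPolicy):
        self.policy = policy
        self._tools: dict[str, tuple[Callable, bool]] = {}  # name -> (fn, is_write)

    def register(self, name: str, fn: Callable, is_write: bool = False):
        self._tools[name] = (fn, is_write)

    def call(self, name: str, approved_by: str | None = None, **kwargs):
        fn, is_write = self._tools[name]
        if is_write and self.policy.read_only:
            raise PermissionError(f"{name} is a write action; this agent is read-only")
        if name in self.policy.requires_human_approval and not approved_by:
            raise PermissionError(f"{name} requires human approval before execution")
        # Log every AI-initiated action for later correlation and forensics.
        print(f"AUDIT tool={name} approved_by={approved_by} args={kwargs}")
        return fn(**kwargs)

# Usage: reads are allowed; writes and bulk exports are blocked or gated.
tools = ScopedAgentTools(ToolPolicy(requires_human_approval={"export_contacts"}))
tools.register("crm_search", lambda query: f"results for {query}")
tools.register("export_contacts", lambda count: f"{count} contacts", is_write=True)
tools.call("crm_search", query="open opportunities")
```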
The Real Attack Surface: It's Not Just the Model
One of Jason Haddix's most important insights challenges how most organizations think about AI security:
"I think the big disconnect, at least for me, and something that I'm talking about all week at like Defcon Black Hat, is also a giant disconnect in testing a model in isolation, and then testing actually an implementation of an app that uses a model. You have, not only do you have the model that you're trying to red team and get to do things, but you also have things like its agents and tools and its protocols. But then you also have like six other systems that are hoisting these thing up."
This ecosystem-based perspective fundamentally changes how we should approach AI security assessments. In real enterprise deployments, AI systems integrate with:
Logging and observability platforms
Prompt libraries and management systems
Guardrails and classifiers running inline
Data stores and vector databases
Multiple APIs and integration points
Jason describes attacks his team has executed that demonstrate this expanded attack surface: "We've done attacks similar to blind cross site scripting where the model just passes along attacks and we hit internal developers and we're able to attack them through JavaScript attacks."
The lesson here is critical: the LLM becomes a delivery mechanism to attack the broader ecosystem. An attacker doesn't need to jailbreak the model itself if they can inject malicious content that gets processed by downstream systems.
What This Means for Your Organization: When conducting AI security assessments, map the entire architecture of every system that touches AI inputs or outputs. Test not just the model's responses but how those responses flow through logging systems, analytics platforms, and integration points. Consider second-order effects where malicious content might be stored and later executed.
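As a rough illustration of treating model output as untrusted, the small Python sketch below encodes output and flags suspicious patterns before it flows into logging, ticketing, or analytics systems. The pattern list is deliberately naive and easy to bypass; the point is the handling step for second-order effects, not the detection logic.

```python
# Minimal sketch: treat model output as untrusted input to downstream systems.
# Real deployments would use a vetted sanitizer and context-aware encoding for
# wherever the output is rendered; these patterns are illustrative only.
import html
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),                            # script tags headed for an internal dashboard
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),    # classic injection phrasing
]

def sanitize_for_downstream(model_output: str) -> tuple[str, list[str]]:
    """HTML-encode model output and flag patterns worth a second look."""
    findings = [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(model_output)]
    return html.escape(model_output), findings

safe_text, findings = sanitize_for_downstream('<script>fetch("//evil.example")</script>')
if findings:
    print("ALERT: model output matched suspicious patterns:", findings)
print(safe_text)  # the encoded form is what reaches logs, tickets, or internal UIs
```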
The Current State of AI in Offensive Security: Capability vs. Hype
The discussion provides valuable ground truth on what AI can actually accomplish in offensive security today versus vendor marketing claims. Jason Haddix offers this assessment based on testing OpenAI's new open source model:
"I hooked that up to our offensive agenting framework and it was able to solve like four CTFs I threw at it with just some RAG and just access to puppeteer and playwright MCPs and that model. I mean, that's a junior pen tester right there."
However, the experts are clear about current limitations. When it comes to complex business logic flaws and multi-stage attacks, Caleb Sima observes:
"I think it's easy to find a cross-site scripting vulnerability. I think it's easy to find SQL. Like these things, I think you can because there's enough knowledge about the space. It is. They're very distinct, very, very clear attacks. It's the multi-stage stuff that's really [challenging]."
The key differentiator for successful AI offensive tools isn't just model intelligence; it's architecture. Jason explains that leading companies fragment workflows into multiple specialized agents:
"The architecture for almost every company, including XBOW and including anybody else who's doing this, is like overseer, planner, agent, and then an agent for XSS and agent for SSTI and Agent forever. All those have RAG, which enrich their ability to do that type of testing."
Daniel Miessler emphasizes the importance of this system-level intelligence: "The intelligence of the model and the intelligence of the system are like two separate things. And I believe the intelligence of the system is likely to win that competition."
What This Means for Your Organization: When evaluating AI-powered security tools, look beyond model claims. Ask vendors about their architecture, how they fragment tasks, maintain context, handle complex multi-step workflows, and incorporate domain expertise. For defensive purposes, understand that attackers with good architecture can achieve significant scale even with commodity models.
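For intuition, here is a heavily simplified Python sketch of the overseer/planner/specialist pattern described above. The classes and the fixed plan are hypothetical stand-ins; real tools replace them with model-driven planning and per-specialty RAG corpora.

```python
# Heavily simplified sketch of an overseer/planner/specialist agent layout.
# Names and behavior are illustrative assumptions, not any vendor's design.
class SpecialistAgent:
    def __init__(self, name: str, knowledge: list[str]):
        self.name = name
        self.knowledge = knowledge          # stands in for a per-specialty RAG store

    def test(self, target: str) -> str:
        return f"{self.name}: ran {len(self.knowledge)} known checks against {target}"

class Planner:
    def plan(self, target: str) -> list[str]:
        # A real planner would reason over recon output; here the plan is fixed.
        return ["xss", "ssti"]

class Overseer:
    def __init__(self):
        self.planner = Planner()
        self.specialists = {
            "xss": SpecialistAgent("XSS agent", ["reflected", "stored", "DOM"]),
            "ssti": SpecialistAgent("SSTI agent", ["jinja2", "twig"]),
        }

    def run(self, target: str) -> list[str]:
        return [self.specialists[step].test(target) for step in self.planner.plan(target)]

for finding in Overseer().run("https://staging.example.com"):
    print(finding)
```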
The Browser Is Becoming the AI Interface - And It Changes Everything
One of the most fascinating strategic insights from this discussion concerns the race toward AI-integrated browsers. Jason Haddix identifies what he believes is the "killer feature" of the next generation of the web:
"The killer feature of the next generation of the web is just in time GUIs. This is the killer feature. So you go to any site you like, let's say Reddit. You're a Reddit, you love Reddit, but you hate the GUI. You just tell the browser, Hey, I wanna rewrite the GUI like this, and this is what I want to see. And it's in the browser as an overlay."
This shift has profound implications for both user experience and security. As Caleb Sima points out:
"All of the context for a person is in the browser. 98% of your work is in this browser. That you, if you own that browser, have all of the context of [what they do]. And you have the ability to take actions in that browser using the same state and authentication of what you normally can use."
For enterprises, this creates both opportunities and risks. On the opportunity side, companies can shift focus from perfecting GUIs to providing high-quality APIs and data, letting AI browsers handle presentations. On the risk side, AI agents with browser-level access and authentication represent an enormous attack surface for prompt injection and credential theft.
What This Means for Your Organization: Start planning for AI-integrated browsers in your threat model. Consider how authentication flows, session management, and data sensitivity change when AI agents can navigate your applications with user credentials. For product teams, begin thinking about API-first approaches that enable AI browser integration while maintaining security controls.
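One defensive pattern worth considering, sketched below in Python under assumed names, is step-up authentication for high-impact actions, so an AI agent riding the user's browser session cannot trigger them without a fresh, interactive confirmation. The decorator and the five-minute window are illustrative, not a prescribed implementation.

```python
# Minimal sketch: require fresh, explicit user re-authentication for
# high-impact actions, regardless of which client (human or AI agent)
# holds the session. Threshold and wiring are illustrative assumptions.
from datetime import datetime, timedelta, timezone
from functools import wraps

FRESH_AUTH_WINDOW = timedelta(minutes=5)

def requires_step_up(get_last_interactive_auth):
    def decorator(action):
        @wraps(action)
        def wrapper(*args, **kwargs):
            last = get_last_interactive_auth()
            if last is None or datetime.now(timezone.utc) - last > FRESH_AUTH_WINDOW:
                raise PermissionError("step-up authentication required for this action")
            return action(*args, **kwargs)
        return wrapper
    return decorator

# Usage sketch: wire the check to however your app records the last interactive login.
last_auth = {"at": None}

@requires_step_up(lambda: last_auth["at"])
def wire_transfer(amount: int) -> str:
    return f"transferred {amount}"

last_auth["at"] = datetime.now(timezone.utc)   # the user just re-authenticated
print(wire_transfer(100))
```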
Why Observability and Logging Are More Complex Than You Think
The conversation reveals a critical challenge that many organizations haven't fully considered: in some geographies, prompts exchanged between employees and AI systems are considered private communications that cannot be logged. Jason explains:
"Prompts between employees and AI and some geolocations in the world are considered private, so you can't even do observability or logging on them. And so, you know, how do you monitor for a breach or malicious activity or something like that from an employee or a partner or something like that, or anyone who's using your feature, you know, when you can't log."
This creates a fundamental tension between security monitoring and privacy regulations. Organizations may receive alerts about "malicious intent" from AI guardrails and classifiers but cannot examine the actual prompts that triggered those alerts.
What This Means for Your Organization: Consult with legal and compliance teams now about logging requirements and constraints for AI interactions across your global operations. Design monitoring that can operate effectively with limited visibility, focusing on behavioral anomalies, API access patterns, and downstream effects rather than prompt content analysis.
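Here is a minimal, metadata-only monitoring sketch in Python that flags anomalous AI agent behavior (bulk reads, unexpected tools) without inspecting prompt content. The event fields and thresholds are assumptions you would replace with your own telemetry and baselines.

```python
# Minimal sketch: monitor AI agent behavior without touching prompt content.
# Event fields and thresholds are illustrative; tune them to your baseline.
from collections import Counter

events = [  # metadata-only telemetry: no prompt or response text is stored
    {"session": "a1", "tool": "crm_search", "records_returned": 12},
    {"session": "a1", "tool": "crm_search", "records_returned": 4800},
    {"session": "a1", "tool": "send_email", "records_returned": 0},
]

MAX_RECORDS_PER_CALL = 1000
MAX_CALLS_PER_SESSION = {"send_email": 0}   # this particular agent should never send email

calls = Counter((e["session"], e["tool"]) for e in events)

for e in events:
    if e["records_returned"] > MAX_RECORDS_PER_CALL:
        print(f"ALERT bulk read in session {e['session']}: {e['records_returned']} records")
for (session, tool), count in calls.items():
    if count > MAX_CALLS_PER_SESSION.get(tool, float("inf")):
        print(f"ALERT unexpected tool use in session {session}: {tool} x{count}")
```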
The Reality of Incident Detection and Response in AI Systems
When asked about defining and detecting incidents in AI systems, the experts converge on an important insight: the fundamental impacts remain the same, but the attack paths differ. Daniel Miessler frames it clearly:
"I think I'm likely to say that they're largely gonna be the same. Because I feel like the impacts are still gonna be very similar. I think it's more the attack path that changes. Did you lose data? Was something stolen? Was there IP theft?"
However, Jason adds a crucial complication around forensics and detection:
"You can write prompts that say don't execute this attack until like a week later. So now how do you go back once the attack does execute? One of the techniques right now is variable expansion. So you define a variable, a prompt injection as a variable in one prompt today and then tomorrow or the next day, I go back and I call the variable to the system and it detonates basically."
This time-delayed execution pattern, similar to delayed malware detonation, makes traditional incident response significantly more complex. Without proper logging and correlation of AI interactions over time, forensic investigation becomes nearly impossible.
What This Means for Your Organization: Extend your incident response playbooks to cover AI-specific scenarios. Build detection for unusual patterns in AI agent behavior, unexpected data access, and anomalous tool usage. Consider implementing retention policies for AI interaction logs that balance privacy requirements with investigation needs.
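As one way to balance privacy with forensics, the Python sketch below retains hashes and timestamps of AI interactions rather than raw prompt text, so a delayed "detonation" can still be traced back to earlier interactions carrying the same payload. The field names, 90-day window, and exact-match hashing are simplifying assumptions.

```python
# Minimal sketch: retain correlatable metadata about AI interactions without
# keeping raw prompt text. Exact-match hashing only catches identical payloads;
# it is a simplification to illustrate retention plus correlation.
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)

def interaction_record(session: str, source: str, prompt: str) -> dict:
    """Keep a hash (for later matching), not the content itself."""
    return {
        "session": session,
        "source": source,                     # e.g. "email", "crm_note", "user_chat"
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "timestamp": datetime.now(timezone.utc),
    }

def correlate(records: list[dict], suspect_hash: str) -> list[dict]:
    """When an incident fires, find earlier interactions carrying the same payload."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["prompt_sha256"] == suspect_hash and r["timestamp"] >= cutoff]

# Usage sketch: log at ingestion time, correlate when the payload later detonates.
log = [interaction_record("s1", "crm_note", "example payload text")]
matches = correlate(log, hashlib.sha256("example payload text".encode()).hexdigest())
print(matches)
```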
Question for you (reply to this email):
How are you scoping AI agent access in your org - least privilege like a user, or broad SaaS-wide?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe and welcome to the new members of this newsletter community💙
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast.