An AI gateway exploited in 36 hours

This week's Cloud Security Newsletter unpacks the AI gateway exploitation pattern (CVE-2026-42208) that turned LiteLLM into a cloud-account-class risk, Wiz's GitHub disclosure (CVE-2026-3854), and Google Cloud Next '26's agentic defense pivot. Shawn Hays of Varonis joins to walk through the eight pillars of an enterprise AI security program, why visibility and AISPM alone leave the biggest gaps, and how to apply zero trust across agents, prompts, identities, and the cloud architects sitting behind the data. Topics: AI security program, AISPM, agentic AI, agent identity, AI bill of materials, third-party AI risk, copilot governance, multi-AI enterprise, zero trust for agents

Hello from the Cloud-verse!

This week’s Cloud Security Newsletter topic: What's Missing From Most AI Security Programs (continue reading) 


In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue along with friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, people who, like you, want to learn what's new with Cloud Security each week from their industry peers, and who listen to Cloud Security Podcast & AI Security Podcast every week.

Welcome to this week’s Cloud Security Newsletter

Two stories defined this week, and both expose the same gap. On the AI side, the LiteLLM SQL injection (CVE-2026-42208) was exploited in the wild within 36 hours of disclosure; the AI gateway turned out to be a credential vault holding OpenAI, Anthropic, and AWS Bedrock keys in a single PostgreSQL row. On the platform side, Wiz disclosed a GitHub RCE (CVE-2026-3854) reachable via a single git push, with cross-tenant blast radius on shared storage. Different bug classes, same underlying signal: the security perimeter for cloud workloads has moved up the stack, into AI gateways, agent identities, and the platforms between developers and production, and most enterprise AI security programs were scoped before any of this was on the map.

To unpack the gap and what to do about it, we sat down with Shawn Hays, Product Marketing Manager for Microsoft Applications and AI Security Solutions at Varonis. Shawn spent six years configuring CMMC environments for defense industrial base customers, three years inside Microsoft on the Purview/Defender/Sentinel go-to-market, and is now driving Varonis's AI security platform, Atlas. His central argument is that we've entered a "multi-AI era" analogous to the multi-cloud explosion of fifteen years ago, and that the market has over-pivoted on AISPM while leaving guardrails, pen-testing, and runtime enforcement underbuilt. That argument is the lens this newsletter uses to read the news. [Listen to the episode]

⚡ TL;DR for Busy Readers

This week’s attacks didn’t break systems — they used them


🔑 AI gateways are Tier-0 secrets stores. LiteLLM's litellm_credentials table holds enterprise cloud provider keys; treat every AI proxy as you would your IAM root, and rotate now if you ran a vulnerable build.

🧬 "Agent identity" just became a procurement category. Google Cloud Next '26 introduced Agent Identity, Agent Gateway, and Model Armor primitives; IAM roadmaps without scoped non-human identity will fall behind in 2026.

🏗️ AISPM alone is not an AI security program. Shawn Hays argues the market has over-pivoted on posture and visibility while leaving guardrails, pen-testing, and runtime monitoring underbuilt; close the gap before regulators do.

🔗 Vendor-environment access is the breach pattern of the quarter. Anthropic Mythos, Citizens/Frost, and the Anthropic contractor incident all share the same root cause: third-party identity hygiene that doesn't match the sensitivity of what those vendors can reach.

🛠️ Edge persistence outlasts patching. FIRESTARTER on Cisco firewalls and the April 24 KEV additions (SimpleHelp, Samsung MagicINFO, D-Link) prove that "we patched, so we're clean" is no longer a defensible posture for hybrid cloud environments.

Where's your biggest AI security gap right now?

The most popular selection will be covered in a future podcast episode.


📰 THIS WEEK'S TOP SECURITY HEADLINES

Each story includes why it matters and what to do next — no vendor fluff.

1. LiteLLM SQL Injection (CVE-2026-42208) Exploited Within 36 Hours: AI Gateway Becomes Cloud-Account-Class Risk

What Happened. A pre-authentication SQL injection in BerriAI's LiteLLM (CVSS 9.3) was indexed in the GitHub Advisory Database on April 24 and saw its first observed exploitation attempt on April 26 at 16:17 UTC, roughly 36 hours later. The flaw concatenates the Authorization Bearer value into a query without parameterization, letting unauthenticated attackers run arbitrary SQL against the PostgreSQL backend. Sysdig observed targeted UNION-based payloads from German-hosted IPs (AS200373) hitting precisely the three highest-value tables: LiteLLM_VerificationToken (virtual API keys + master key), litellm_credentials (stored OpenAI/Anthropic/Bedrock provider credentials), and litellm_config (environment variables). Affected versions: 1.81.16 through 1.83.6. Fixed in 1.83.7-stable.
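The bug class itself is decades old. Below is a hedged, illustrative sketch of the pattern, not LiteLLM's actual code: the table and column names are simplified stand-ins, and the point is the contrast between concatenating a bearer token into SQL and binding it as a parameter.

```python
import psycopg2  # PostgreSQL driver, matching LiteLLM's backend; pass in an open connection

def lookup_token_vulnerable(conn, bearer_token: str):
    # VULNERABLE: the attacker-controlled Authorization header value is
    # concatenated into the query. A "token" like
    #   x' UNION SELECT credential_value, NULL FROM credentials--
    # turns an auth lookup into a credential dump.
    query = f"SELECT * FROM verification_tokens WHERE token = '{bearer_token}'"
    with conn.cursor() as cur:
        cur.execute(query)
        return cur.fetchone()

def lookup_token_safe(conn, bearer_token: str):
    # FIXED: the driver binds the value as data, never as SQL.
    query = "SELECT * FROM verification_tokens WHERE token = %s"
    with conn.cursor() as cur:
        cur.execute(query, (bearer_token,))
        return cur.fetchone()
```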

Why It Matters. LiteLLM has 45,000+ GitHub stars and is widely deployed as the AI gateway in front of multi-provider LLM architectures. A single litellm_credentials row can hold an OpenAI org key with five-figure monthly spend, an Anthropic console key with workspace admin rights, and an AWS Bedrock IAM credential, meaning the blast radius is closer to a cloud account compromise than a typical web SQLi. Three takeaways: (1) inventory every AI gateway, proxy, and middleware tier and treat them as Tier-0 secrets stores, not developer convenience tooling; (2) any internet-facing LiteLLM instance running a vulnerable version during the exposure window should be assumed compromised; rotate every key and audit upstream provider billing; (3) the operator-grade exploitation (Prisma schema awareness, schema-aware column-count enumeration) means GHSA-only critical advisories now warrant KEV-level urgency.
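If you need to triage exposure before logs rotate, a crude scan of access logs for SQLi-shaped bearer values is a reasonable first pass. A minimal sketch, assuming your proxy or load balancer logged the Authorization header; the extraction regex and markers are assumptions to tune to your log format.

```python
import re
import sys

# Crude markers for UNION-based SQLi probing; tune for your environment.
SQLI_MARKERS = re.compile(
    r"(union\s+select|information_schema|pg_sleep|--|/\*)", re.IGNORECASE
)

def scan_log(path: str) -> None:
    """Flag log lines whose Authorization header carries SQLi-looking payloads."""
    with open(path) as fh:
        for lineno, line in enumerate(fh, 1):
            m = re.search(r"Authorization:\s*Bearer\s+(\S.*)", line, re.IGNORECASE)
            if m and SQLI_MARKERS.search(m.group(1)):
                print(f"{path}:{lineno}: suspicious bearer value: {m.group(1)[:120]}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan_log(p)
```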

🔎 Sources: Sysdig analysis

2. GitHub RCE via Single Git Push (CVE-2026-3854): Wiz Discloses Cross-Tenant Risk

What Happened. On April 28, GitHub and Wiz coordinated disclosure on CVE-2026-3854, a CVSS 8.7 command injection in GitHub's internal git push pipeline. By chaining three injections through unsanitized push option values, an authenticated user with push access could override the rails environment, redirect the custom hooks directory, and trigger path traversal via repo_pre_receive_hooks to execute arbitrary commands as the git user (with cross-tenant blast radius on shared storage). GitHub.com was patched within two hours of Wiz's report; per CISO Alexis Wales, ~88% of GHES instances were vulnerable at disclosure. The bug was discovered using AI-assisted reverse engineering (IDA MCP).

Why It Matters. Two threads to track. Operationally, any GitHub Enterprise Server instance must be on 3.19.3 or later. Wiz called the exploit "remarkably easy." Architecturally, the lesson is that when multiple services in different languages pass data through a shared internal protocol, the assumptions each service makes about that data become a critical attack surface. It's the same pattern that has haunted ingress-nginx and other shared-data systems. This is also the third notable GitHub incident in a single week (alongside the merge queue regression of April 22–23 and an April 27 search outage), which is putting platform-dependency assumptions under stress for compliance teams.
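Wiz's architectural point generalizes beyond GitHub: one service's raw input should never become another service's trusted configuration. A hedged sketch of the defensive pattern, in Python for illustration; the option names and value shapes here are hypothetical, not GitHub's internal protocol.

```python
import re

# Hypothetical allow-list: the only push options this service honors,
# each with a strict value shape. Anything else is rejected outright,
# never "passed along" for a downstream service to interpret.
ALLOWED_OPTIONS = {
    "ci.skip": re.compile(r"^(true|false)$"),
    "deploy.target": re.compile(r"^[A-Za-z0-9._/-]{1,255}$"),
}

def validate_push_options(raw_options: list[str]) -> dict[str, str]:
    """Parse key=value options from an untrusted client, allow-list style."""
    clean: dict[str, str] = {}
    for opt in raw_options:
        key, sep, value = opt.partition("=")
        pattern = ALLOWED_OPTIONS.get(key)
        if not sep or pattern is None or not pattern.match(value):
            raise ValueError(f"rejected push option: {opt!r}")
        clean[key] = value
    return clean
```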

🔎 Sources: Wiz Research

3. Google Cloud Next '26: Wiz Integration Goes Deep, Agentic Defense Lands

What Happened. Google Cloud Next '26 ran April 22 in Las Vegas with a security agenda that, post-Wiz acquisition, finally looked unified. Headline announcements: three new Google Security Operations agents (Threat Hunting, Detection Engineering, Third-Party Context); Wiz Defend detections natively forwarded to Google SecOps and Mandiant Threat Defense; expanded Wiz coverage to Databricks, AWS AgentCore, Azure Copilot Studio, Salesforce Agentforce, Cloudflare AI Security for Apps, and Vercel; agent-governance primitives (Agent Identity, Agent Gateway, Model Armor integration); reCAPTCHA reborn as Google Cloud Fraud Defense; and KMS Quantum Safe Key Imports in preview. Google's M-Trends 2026 data claims initial-access-to-handoff time has collapsed from 8 hours three years ago to 22 seconds today.

Why It Matters. Three structural takeaways. First, "agent identity" has moved from concept to procurement category: Agent Identity, Agent Gateway, and Model Armor sketch the primitives every enterprise will need as autonomous agents proliferate inside production. Second, the Wiz/Google SecOps integration is meaningful but more incremental than secondary coverage suggests. Google's own language is careful ("updated how we integrate") rather than fully native. Custom parsing, normalization, and SOAR content sitting between Wiz and Chronicle UDM today is not automatically obsolete. CISOs should ask vendors specifically what changes versus what's marketing gloss. Third, the SCC Standard tier now bundles posture, compliance, and vulnerability management; worth a hard look for teams paying separately today.

🔎 Sources: Google Cloud Blog

4. ServiceNow Closes $7.75B Armis Acquisition, Reshaping Asset-Centric Security

What Happened. ServiceNow completed its all-cash $7.75 billion acquisition of cyber exposure management vendor Armis on April 20, six months ahead of the originally guided H2 2026 close. Together with the pending Veza identity acquisition, ServiceNow says the combination will more than triple its addressable market for security and risk solutions, embedding real-time asset discovery across IT, OT, IoT, medical devices, "physical AI," and cloud directly into the ServiceNow platform.

Why It Matters. This is a structural bet that the next decade of enterprise security runs through asset-and-identity context, not more detection. For cloud security leaders: (1) ITSM-native CMDBs are about to absorb cyber asset intelligence, which will pressure standalone CAASM tooling and reshape how exposures get prioritized; (2) the OT/IoT/medical visibility coming with Armis pulls non-IT assets into the same pane of glass as cloud workloads (meaningful for healthcare, manufacturing, and CNI buyers running hybrid estates); (3) for CISOs running ServiceNow as the system of record, the integration roadmap is now the ceiling on how fast you can collapse asset, vulnerability, and exposure tools. Plan for a 12–18 month integration window before depth catches up to the marketing.

🔎 Sources: ServiceNow Newsroom

5. Anthropic Investigates Unauthorized Access to "Mythos" Cyber Model via Vendor Environment

What Happened. Bloomberg reported on April 21 that a small Discord group of AI enthusiasts gained unauthorized access to Anthropic's Claude Mythos Preview, the vulnerability-discovery model restricted to Project Glasswing partners (Apple, Microsoft, Cisco, Amazon, Mozilla, several major banks, and reportedly the NSA). The group leveraged credentials from a third-party Anthropic contractor and guessed the model's endpoint URL based on naming-convention knowledge, gaining access on April 7, the same day Glasswing was publicly announced. Anthropic confirmed the investigation and characterized the access as scoped to a third-party vendor environment.

Why It Matters. Strip away the AI framing and this is a textbook contractor-credential-meets-predictable-naming-convention failure. Lessons: (1) third-party vendor environments holding access to your most sensitive systems need the same identity rigor as your own production: scoped credentials, short-lived tokens, no shared environments; (2) predictable naming conventions for staging, preview, and unreleased resources are an under-appreciated reconnaissance surface; (3) controlled-distribution governance for dual-use AI capability will keep failing in similar ways unless the access control layer matches the model's sensitivity. With OpenAI's GPT-5.4-Cyber and Google's Big Sleep operating in similar territory, expect more of these incidents, and expect frontier-AI access controls to become a board-level question for any organization participating in these partner programs.
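One cheap mitigation for lesson (2): stop deriving preview and staging endpoint names from a guessable convention. A minimal sketch, assuming you control how those resources get named; this is defense in depth, not a substitute for authentication on the endpoint itself.

```python
import secrets

def preview_resource_name(project: str) -> str:
    """Append a random, high-entropy suffix so knowledge of the naming
    convention alone is not enough to locate an unreleased resource."""
    return f"{project}-preview-{secrets.token_urlsafe(12)}"

# e.g. "mythos-preview-4fWn0kZ3qT9xV5hA"; still authenticate every request.
print(preview_resource_name("mythos"))
```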

🔎 Sources: TechCrunch

6. CISA & NCSC: FIRESTARTER Implant Survives Patches on Cisco Firewalls

What Happened. On April 23, CISA and the UK NCSC published a joint malware analysis report on FIRESTARTER, a custom Linux ELF backdoor found on a U.S. federal civilian agency's Cisco Firepower device running ASA software. Tracked to UAT-4356 (the same cluster behind ArcaneDoor), the implant was deployed in September 2025 via CVE-2025-20333 and CVE-2025-20362, and crucially persisted through the patches the agency later applied. CISA updated Emergency Directive 25-03 the same day, requiring federal agencies to collect device core dumps. Cisco recommends full reimaging; only a hard power cycle clears the persistence mechanism.

Why It Matters. "We patched, so we're clean" no longer holds for any organization that ran an internet-exposed ASA between September 2025 and patching. FIRESTARTER hooks into LINA, modifies the boot file, and re-launches itself on signal. For hybrid cloud architects specifically, these devices typically terminate site-to-site VPNs into AWS/Azure/GCP and house the credentials, certificates, and routing trust that connect on-prem to cloud workloads. A compromised firewall is also a compromised cloud egress path. Concrete actions: (1) treat any device exposed during the September 2025 window as compromised regardless of patch state; (2) plan reimaging, not patching, and rotate every credential, certificate, and key that touched the box, including cloud-side IAM roles or service account credentials accessible from those tunnels.

🔎 Sources: CISA AR26-113A

🎯 Cloud Security Topic of the Week:

What's Missing From Most AI Security Programs

The dominant question Shawn Hays hears at RSA from CISOs is some version of "I bought one AI security tool, why doesn't it cover my whole estate?" His answer is uncomfortable: most enterprises are already in the multi-AI era, and most AI security purchases were made for a single-vendor world that no longer exists. Copilot was the beta. Then Copilot Studio agents. Then Atlassian shipped Jira agents. Then Salesforce Agentforce. Then a business unit picked Bedrock for a specific use case, another picked Foundry for another, and somewhere a developer wired in an MCP server pointing at a Hugging Face model. Now the AISPM tool that scopes only to Microsoft prompts and responses sees a fraction of the surface.

Shawn frames this as a direct parallel to the multi-cloud transition fifteen years ago: every booth at RSA 2010 was selling multi-cloud security because organizations had lifted-and-shifted to "this place and that place and that place" without knowing how to protect it. We are now living the same pattern with AI, and the program design that worked for a single-stack AI strategy is not going to carry forward. But the bigger gap, in Shawn's view, is not breadth. It's depth. Most programs have visibility and posture management, and almost nothing else. The rest of this newsletter walks through the eight pillars he uses to frame an enterprise-grade AI security program, and why AISPM and visibility alone are the wrong place to stop.

Featured Experts This Week 🎤

Shawn Hays, Product Marketing Manager for Microsoft Applications and AI Security Solutions, Varonis
Ashish Rajan, Host, Cloud Security Podcast

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • AISPM (AI Security Posture Management). Continuous discovery, posture assessment, and risk prioritization across AI components such as models, agents, MCP servers, code repos, and datasets. Analogous to CSPM but for AI estates.

  • DSPM (Data Security Posture Management). Continuous discovery and risk assessment of sensitive data across cloud and SaaS, including who has access and how it's classified.

  • ITDR (Identity Threat Detection and Response). Behavioral monitoring of identities (both human and non-human) for compromise indicators like privilege escalation, anomalous logins, and lateral movement.

  • CIEM (Cloud Infrastructure Entitlement Management). Visibility and right-sizing of permissions for identities accessing cloud resources.

  • AI Bill of Materials (AI BoM). A manifest of components inside an AI system (models, datasets, MCP servers, prompts, tools, dependencies). Analogous to SBOM for software supply chain.

  • MCP (Model Context Protocol). The emerging standard for how agents call external tools and data sources. An MCP server exposes capabilities (e.g., "read this database," "call this API") that an agent can invoke at runtime.

  • RAG AI (Retrieval-Augmented Generation). AI systems like Copilot that ground responses in data the prompting user already has access to, via token-exchange checks at query time. Permissions are inherited from the user.

  • Guardrails. Runtime controls that block an agent from taking specific actions or producing specific outputs. Input guardrails (e.g., "reject prompt-injection attempts"), output guardrails (e.g., "never emit PHI"). A minimal sketch follows this list.
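To make that last definition concrete, here's a deliberately naive sketch of input and output guardrails wrapped around an agent call. Real guardrail products use trained classifiers and policy engines rather than regexes; the function names here are ours, not any vendor's API.

```python
import re

# Toy detectors: real guardrails use classifiers, not single regexes.
INJECTION_PATTERNS = re.compile(
    r"(ignore (all |previous )?instructions|reveal your system prompt)",
    re.IGNORECASE,
)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # PHI-shaped output, US SSN

class GuardrailViolation(Exception):
    pass

def guarded_call(agent, prompt: str) -> str:
    # Input guardrail: reject injection attempts before the model sees them.
    if INJECTION_PATTERNS.search(prompt):
        raise GuardrailViolation("input blocked: possible prompt injection")
    response = agent(prompt)  # any callable taking a prompt, returning text
    # Output guardrail: never emit PHI-shaped data, whatever the model did.
    if SSN_PATTERN.search(response):
        raise GuardrailViolation("output blocked: PHI-shaped content")
    return response
```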

This week's issue is sponsored by Orca Security

Orca Security is hosting Cloud Security LIVE, a half-day virtual summit on Tuesday, May 12th. Join CISOs, security co-founders, and practitioners for unfiltered insights, real stories, and strategies from people securing the world's most complex cloud environments.

Sessions include:

  • The new standard for resilience: zero-breach to zero-impact

  • AI on both sides: securing models and APIs while using AI to defend your cloud

  • Mastering 3rd-party and supply chain risk

  • Security leadership panel on AI, risk, and driving change


    Join for a chance to win* a 64GB Beelink AI PC. *US-based attendees only.

💡Our Insights from this Practitioner 🔍

1. The "Multi-AI Era": Why It Changes Program Design

Shawn opens with a frame that anyone who lived through 2010-era multi-cloud chaos will recognize immediately:

"I think we are now entering this multi-AI era where no longer is an organization, an enterprise, sophisticated organization, just using copilot. They're using all these different pieces." Shawn Hays, Varonis

The implication for security architecture is direct. A program that scopes only to Microsoft Copilot's prompts and responses will not see Jira agents, Agentforce, Bedrock pro-code agents, or MCP servers pulling in third-party models. Shawn describes a recurring conversation with enterprises who bought a single-vendor AI security tool early, and then discovered, as they matured, that "the entirety of the AI that they have, or maybe the AI they're going to have" exceeds what one tool can cover. This is the AISPM equivalent of early CSPM tools that only saw AWS: useful, but incomplete the moment a second cloud showed up.

What to do about it. When evaluating an AI security platform, Shawn's recommended buyer's question is: "Can it protect all the AI I've built today, all the AI I plan to build tomorrow, and all the AI I don't even know about?" If the answer scopes to a single hyperscaler or a single AI vendor, the tool is solving a 2024 problem.

2. The Eight Pillars: AISPM Is Necessary, Not Sufficient

Shawn argues the market has over-pivoted on AISPM and visibility. It's the same pattern Varonis saw in DSPM five years ago, where customers bought posture tools and then realized they had no enforcement layer. The full program he describes covers eight areas; the pillars he emphasizes most:

  1. Inventory and observability across every layer of the AI stack: models, MCP servers, agents, services, code repos, even Jupyter Notebooks (where he's seen developers stash secrets for convenience). Continuous, not point-in-time.

  2. AISPM: misconfigurations, vulnerabilities, and posture drift across that inventory.

  3. AI Bill of Materials for both internal-built systems and third-party AI services. If a model has a CVE in NIST's NVD, you need to know it's in your stack, and you need to know it's in Grammarly's stack too.

  4. Pen-testing of agents before they go live. As Shawn puts it, you "need to put it through the ringer… both from a jailbreaking, poisoning [perspective] but also very run-of-the-mill interactions to see how it's gonna behave."

  5. Runtime guardrails that block specific behaviors: input guardrails for prompt injection, output guardrails for sensitive-data emission.

  6. Compliance monitoring mapped to frameworks like NIST AI RMF, re-evaluated as the agent changes.

  7. Third-party AI risk management including AI BoM ingestion from vendors.

  8. Continuous monitoring across the full lifecycle, not just deployment.

The strategic insight underneath this list:

"They have great visibility, they have inventory, they know every piece of their AI system, but they really have no way of preventing that agent or AI system from going off the rails." - Shawn Hays, Varonis

This is the most actionable critique in the conversation. Many enterprises in 2026 will pass an internal audit of their AI security program (they have the dashboards, the inventory, the CVE alerting on models) and still have nothing in front of an agent that would stop it from doing something stupid in production. The pen-testing-and-guardrails layer is where most programs are thinnest.

3. Identity Is Woven Through Everything: Why Native Tools Won't Carry You

Shawn is precise about how identity sprawls across an agentic estate:

"You've got identities for the folks that can access data in the cloud store… you have the identity of the agent, you have identity of the builders, like the people making these agents… if they're using some sort of Bitbucket, GitHub, you have identities for those that can have access to the code repos. It's like there's an identity layer woven throughout." - Shawn Hays, Varonis

Each of those identity surfaces needs ITDR coverage. Not just the cloud architect (with normal CIEM/ITDR for elevated privileges and lateral movement), but the agent identity itself, alerting on agents that suddenly gain entitlements, access resources they typically don't, or "feverishly" light up after a period of dormancy.
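What that can look like in practice, as a hedged sketch: three of the detections Shawn describes (new entitlements, atypical resource access, activity after dormancy) expressed as simple rules over an agent's activity log. The event fields, baselines, and 30-day threshold are all assumptions; real ITDR products do this with learned behavioral baselines.

```python
from datetime import timedelta

DORMANCY = timedelta(days=30)  # assumed threshold for "was dormant"

def agent_alerts(agent_id, events, baseline_resources, baseline_entitlements):
    """events: time-ordered dicts like
    {"ts": datetime, "resource": str, "entitlements": set[str]}"""
    alerts, last_seen = [], None
    for e in events:
        if e["entitlements"] - baseline_entitlements:
            alerts.append((agent_id, e["ts"], "gained new entitlements"))
        if e["resource"] not in baseline_resources:
            alerts.append((agent_id, e["ts"], f"atypical resource: {e['resource']}"))
        if last_seen and e["ts"] - last_seen > DORMANCY:
            alerts.append((agent_id, e["ts"], "active again after dormancy"))
        last_seen = e["ts"]
    return alerts
```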

Ashish pushes on the obvious counter, "I have an E5 license, the native services cover this, right?", and Shawn's response is the most quotable piece of practical guidance in the episode:

"Native tools are really good about solving some of the native challenges. But then once you start broadening the scope and the aperture, that's when it gets a little tough." - Shawn Hays, Varonis

The example he uses is HIPAA. Microsoft Purview will do an excellent job preventing PHI exfiltration via labeled data and DLP policies inside the Microsoft tenant. The moment an AWS Bedrock agent calls an EHR system through an MCP server, that protection envelope ends, but the regulator's expectation does not. For CISOs in regulated industries, this is the architectural argument for a cross-stack AI security platform regardless of how aligned your primary cloud is.

4. The Connector Ecosystem Is Quiet Third-Party Risk

The connector ecosystem is the part of the AI security problem most enterprises haven't budgeted for. Shawn's example: turning on the Salesforce connector in Microsoft 365 Copilot doesn't require deploying a "highly configured, sophisticated pro-code AI solution." It's a checkbox. But the moment that connector is on, Copilot is grounded in Salesforce data via the same can-access model, and any data permission misconfiguration in Salesforce now flows into Copilot output.

This is why DSPM and AI security are intertwined, not adjacent. RAG AI inherits user permissions; if those permissions are over-permissive, the AI is over-permissive. Shawn's guidance for Copilot governance specifically (and it transfers to any RAG-based AI tool in your estate):

  1. Understand what data Copilot can access.

  2. Test whether existing classification and labeling actually works.

  3. Define how new data will be classified and labeled going forward.

  4. Apply zero trust at runtime: monitor how Copilot interacts with data even after permissions are right-sized (a minimal sketch of this step follows).
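Step 4 is the one most programs skip. Here's a hedged sketch of what it can look like, assuming you can export AI interaction audit logs and a record of what each user opens directly; the comparison logic is illustrative, not any product's detection.

```python
def runtime_review(copilot_events, direct_access_history):
    """copilot_events: iterable of (user, resource) pairs from AI audit logs.
    direct_access_history: dict mapping user -> set of resources they open
    directly. Flags AI-mediated access to data a user never touches on their
    own, often the first sign of over-permissive inherited access."""
    flagged = []
    for user, resource in copilot_events:
        if resource not in direct_access_history.get(user, set()):
            flagged.append((user, resource))
    return flagged
```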

5. Zero Trust Applied to the Whole Chain, Not Just the User

The strongest architectural insight in the conversation is Shawn's reframing of zero trust as applied to every actor in an agentic transaction simultaneously:

"I need to not trust that agent. I need to even not trust the user prompting… I need to also not trust the cloud architect that's over that maybe data SQL database that's sitting in Azure. I wanna apply zero trust to that entire chain, and the reason being [is] data." - Shawn Hays, Varonis

In a healthcare patient-facing agent example, this means: don't trust the prompt (it might contain a jailbreak embedded inside legitimately-ingested PHI), don't trust the agent's downstream actions (it might write PHI to the wrong table), don't trust the Azure architect's identity (it might be compromised), and don't trust the cloud configuration (misconfig could leak data via SQL). Each link is a separate enforcement point with separate controls.

This is also the lens that makes this week's news cohere. The Anthropic Mythos incident violated trust at the contractor link. FIRESTARTER violated trust at the network appliance link. LiteLLM violated trust at the AI gateway link. None of these were AI-specific in the bug-class sense. They were identity-and-access failures dressed up in different costumes.

The "Ron Burgundy" Analogy: Why Data and Identity Will Outlast Every AI Architecture

When Ashish asks what controls will stand the test of time as 2027 and 2028 AI architectures arrive, Shawn lands on an analogy worth keeping:

"At least the AI we have now, and maybe for the next five years… AI is Ron Burgundy. It's only gonna read what's on the teleprompter. So if we're looking at identity solutions and data security… it's like, how are we gonna make sure that the right data shows up on the teleprompter?" - Shawn Hays, Varonis

The point is durable: whatever the next-generation agent architecture looks like, it will still be reading from a context window, and that context window is still being populated by data systems and identity decisions that you control. Right-size data access, instrument identity at every layer, and apply zero trust to the chain. Do that, and you'll be in a defensible position regardless of what AI architecture wins next.

Ashish's Frame: The Horse Has Left the Barn

Ashish makes the operational counterpoint that should sit with every CISO reading this:

"With AI, that horse has left the barn." - Ashish Rajan, Cloud Security Podcast

Data classification programs that "never had the rubber hit the road" (Ashish's words from his own CISO experience) are no longer a deferrable problem. The assumption that confidential data stays inside organizational boundaries is broken the moment an agent reaches into Salesforce, Jira, or a third-party MCP server. The teams that get ahead in 2026 are the ones treating data classification, identity hygiene, and access right-sizing as the AI security work, because, per Shawn's argument, that is the AI security work for the dominant RAG-based AI patterns.

Practical Application: A 30–60 Minute Action List

Drawing from Shawn's eight pillars and this week's news, the immediate work for cloud security teams:

  • Inventory every AI gateway, proxy, and middleware in your estate (LiteLLM, Portkey, custom OpenAI proxies, etc.) and treat their secrets stores as Tier-0. If you ran a vulnerable LiteLLM build, rotate now.

  • Map your full multi-AI footprint (Copilot, Copilot Studio, Foundry, Bedrock, Agentforce, Jira agents, custom agents, MCP servers, and any third-party SaaS using AI as a feature). Score each for AI BoM availability.

  • Apply ITDR to non-human identities (agents and service principals), not just to humans. Anomalous agent behavior should page the SOC the same way anomalous human behavior does.

  • Audit your connector ecosystem. Every cross-product connector (Copilot ↔ Salesforce, etc.) inherits permissions. Run a DSPM/AISPM pass on the data side of each connector.

  • Add pen-testing and guardrails to your agent SDLC. If your AI program documentation only describes posture and inventory, it's incomplete.

Podcast Episode

A question for you (reply to this email):

🤔 If you had to pick one (AISPM, runtime guardrails, or AI-aware ITDR), which is the biggest gap in your program right now?

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]

We would love to hear from you 📢 with a feature or topic request, or if you would like to sponsor an edition of the Cloud Security Newsletter.

Thank you for continuing to subscribe, and welcome to the new members of this newsletter community💙

Peace!

Was this forwarded to you? You can Sign up here, to join our growing readership.

Want to sponsor the next newsletter edition? Let's make it happen.

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

Check out our sister podcast, AI Security Podcast.