🚨 Google Stops the First AI-Generated Zero-Day - Why "Guardrails Are Dead"

Google Threat Intelligence disrupted the first documented AI-generated zero-day this week, Microsoft published research turning Semantic Kernel prompt injection into host-level RCE, and a 172-package npm/PyPI worm tore through TanStack, Mistral AI, and UiPath in under six minutes. Against that backdrop, Check Point's David Haber (former Lakera CEO) and Paul Barbosa argue the layered-guardrail model security teams have built over the last two years is structurally finished, and explain what replaces it.

This week's Cloud Security Newsletter topic: Guardrails Are Dead - What Replaces Them in the Agentic Era (continue reading)


In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue along with friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more. Like you, they want to learn what's new in cloud security each week from their industry peers, and many of them listen to Cloud Security Podcast & AI Security Podcast every week.

Welcome to this week’s Cloud Security Newsletter

The week of May 8–13 produced three stories that sit at exactly the same intersection: AI is now both the attacker's tool and the attack surface. Google Threat Intelligence Group disrupted the first documented case of a criminal actor using an AI-generated zero-day exploit. Microsoft published research showing how prompt injection in its own Semantic Kernel framework escalates to host-level remote code execution in two separate code paths. And TeamPCP's Mini Shai-Hulud worm, which the same Google report attributed to UNC6780, compromised 172 packages across 403 versions, including the first malicious npm package ever to carry valid SLSA provenance.

This week's conversation is with David Haber, VP AI Security at Check Point and founder of Lakera (the team behind Gandalf, the AI red-team game that has logged over 100 million human-AI interactions), and Paul Barbosa, VP of Cloud and SASE at Check Point, hosted by Ashish Rajan. The framing is sharp: David's position, "I believe guardrails are dead," is the editorial spine of this week's news as much as it is of the episode. [Listen to the episode]

⚡ TL;DR for Busy Readers

- Mini Shai-Hulud (CVE-2026-45321) compromised 172 npm/PyPI packages including TanStack, Mistral AI, UiPath, and OpenSearch. First malicious npm package with valid SLSA provenance. Rotate every secret that touched affected pipelines May 10–12.

- Google disrupted the first AI-generated zero-day: a Python 2FA bypass authored by an LLM and identified before mass exploitation. Assume exploit dev is now faster than your patch cycle.

- Microsoft Semantic Kernel CVEs (CVE-2026-26030, CVE-2026-25592) turn prompt injection into host RCE. If your agent calls tools, prompt injection is now a code-execution problem with blast radius equal to the agent's IAM.

- PAN-OS CVE-2026-0300 (CVSS 9.3) actively exploited by a likely state-sponsored cluster. Restrict Captive Portal to internal zones now; patches start May 13.

- David Haber's central argument: Stop layering perimeter guardrails. Move to contextual intelligence: evaluating agent intent, system instructions, and behavioral traces against what the agent is currently doing.

📰 THIS WEEK'S TOP SECURITY HEADLINES

Each story includes why it matters and what to do next - no vendor fluff.

1. Google Disrupts the First Documented AI-Generated Zero-Day

Primary source: Google Cloud Threat Intelligence
Reporting: Bloomberg

What happened: On May 11, Google Threat Intelligence Group disclosed the first known case of a criminal threat actor using an AI-generated zero-day exploit: a Python script that bypassed two-factor authentication on a popular open-source, web-based system administration tool. GTIG identified the exploit before mass exploitation and attributes AI authorship with high confidence based on hallucinated CVSS scores, abundant educational docstrings, and textbook Pythonic formatting. Gemini was not the LLM used. The same report documents North Korea-linked APT45 running thousands of recursive prompts to validate PoC exploits at scale, China-linked UNC2814 using persona jailbreaks ("act as a senior security auditor") to research TP-Link firmware flaws, and a Chinese actor deploying agentic offensive tools Strix and Hexstrike against East Asian targets.

Why it matters: This is the operational confirmation defenders have been forecasting for two years. AI-assisted vulnerability discovery has moved from theory to a working capability in the wild. For cloud security teams, the implication is concrete: if a single LLM-augmented researcher can produce weaponizable exploits faster than your patch cycle can absorb them, patch SLAs alone are no longer a viable program. Weight has to shift toward compensating controls: behavioral detection, least-privilege scoping for service accounts and non-human identities, and rapid containment playbooks that work without a patch in hand. The 72-minute breakout-time benchmark Unit 42 cited earlier this year now looks like the floor, not the ceiling.

2. Mini Shai-Hulud Worm: 172 Packages Compromised, Including the First Malicious Package with Valid SLSA Provenance 🚨

Primary source: Wiz · Snyk
Advisory: NHS England Cyber Alert 
Analysis: The Hacker News

What happened: Between 19:20 and 19:26 UTC on May 11, TeamPCP (tracked as UNC6780 in Google's report this week) published 84 malicious versions across 42 @tanstack/* npm packages in under six minutes, then propagated to Mistral AI, UiPath, OpenSearch, Guardrails AI, and PyPI within hours. By May 12 the campaign had hit 172 unique packages across 403 malicious versions with cumulative downloads exceeding 518 million. TanStack's compromise was assigned CVE-2026-45321 (CVSS 9.6). The attack chain hijacked GitHub Actions via a pull_request_target trigger, used cache poisoning, and extracted OIDC tokens from /proc memory on the runner. npm tokens were never stolen; the publish pipeline itself was compromised. Payloads exfiltrate AWS IAM keys, GitHub PATs, HashiCorp Vault tokens, Kubernetes secrets, and 1Password/Bitwarden vaults, and inject persistence hooks into Claude Code (.claude/settings.json) and VS Code (tasks.json with runOn: folderOpen).

Why it matters: This is the most consequential supply chain attack of 2026 to date, and it breaks an assumption cloud security programs were starting to rely on: SLSA provenance attestation no longer guarantees a package wasn't tampered with. Any CI/CD environment that mints OIDC tokens (i.e., most modern GitHub Actions pipelines) is in scope. The persistence vector into AI coding agents is novel and underappreciated: a single .claude/settings.json injection turns a developer's daily AI assistant into a sustained execution channel. This is also the precise scenario David Haber describes in this week's episode: agents with tool access, untrusted inputs, and the autonomy to act.

3. Microsoft Research: Semantic Kernel Prompt Injection Becomes Host-Level RCE

Primary source: Microsoft Security Blog - Semantic Kernel research disclosure
Advisory: CVE-2026-26030 (Python semantic-kernel < 1.39.4) · CVE-2026-25592 (.NET SessionsPythonPlugin)
Analysis: Vibe Graveyard · Windows Forum

What happened: On May 7, Microsoft disclosed and patched two CVEs in its Semantic Kernel AI agent framework. CVE-2026-26030 (critical, Python semantic-kernel < 1.39.4) is in the In-Memory Vector Store / Search Plugin path: a single crafted prompt launches calc.exe on the agent's host machine when the agent uses the default filter functionality. CVE-2026-25592 is in the .NET SDK's SessionsPythonPlugin, which is meant to isolate Python execution inside Azure Container Apps dynamic sessions but exposed a sandbox-to-host file-transfer helper as a kernel function. Once the LLM could invoke it as a tool, the local file path became attacker-controlled, enabling arbitrary host file writes from inside what was supposed to be an isolated sandbox. Microsoft framed the research as a class problem, not a one-off.

Why it matters: This is the cleanest existing demonstration that AI agent security cannot be reduced to prompt safety. When an agent can call functions, query stores, run scripts, or touch cloud APIs, prompt injection is an application-security and identity problem with blast radius proportional to the agent's privileges. The story lands the same week David Haber argues, for unrelated reasons, that guardrail-style perimeter defenses are no longer sufficient against exactly this class of attack. The Microsoft research is the proof.

4. PAN-OS Captive Portal Zero-Day Actively Exploited β€” Likely State-Sponsored 🚨

Advisory: Palo Alto Networks Advisory
Analysis: Unit 42 Threat Brief · Wiz
Reporting: The Hacker News

What happened: Palo Alto Networks disclosed CVE-2026-0300 (CVSS 9.3) on May 6, with patches starting May 13. The flaw is an unauthenticated buffer overflow in the PAN-OS User-ID Authentication Portal (Captive Portal) on PA-Series and VM-Series firewalls, enabling RCE with root privileges via crafted packets. Prisma Access, Cloud NGFW, and Panorama are unaffected. CISA added it to KEV on May 6 with a May 9 federal patching deadline. Unit 42 attributes limited observed exploitation to CL-STA-1132, a likely state-sponsored cluster that achieved RCE on April 16 after a week of unsuccessful attempts beginning April 9, injected shellcode into an nginx worker, deployed EarthWorm and ReverseSocks5 tunneling tools, and conducted Active Directory enumeration using the firewall's service account credentials.

Why it matters: Yet another story this week reinforcing the theme from #1 and #2: internet-exposed security infrastructure is a primary target, not a hardened boundary. The post-exploitation pattern (pivot from the firewall's service account into AD enumeration) is a textbook state-sponsored playbook and reinforces why firewall service accounts deserve tier-zero-equivalent treatment in your identity model. Approximately 225,000 internet-facing PAN-OS instances exist globally per Shodan, though only a subset run Captive Portal on ports 6081/6082.

5. Instructure Pays Ransom to ShinyHunters After Double Canvas Breach Hits 275M Users

Primary source: Inside Higher Ed · The Register
Reporting: The Hacker News

What happened: Instructure, parent of the Canvas LMS used by ~41% of North American higher-ed institutions, confirmed on May 11 it reached an agreement with ShinyHunters after a two-stage breach. The initial intrusion on April 29 (disclosed May 1) exploited a flaw in the Canvas Free-for-Teacher account program to exfiltrate 3.65 TB of data: ~275 million records across 8,809 institutions including Harvard, Princeton, Columbia, Stanford, Penn, and Georgetown. After Instructure declined to negotiate and attempted to patch, ShinyHunters returned on May 7, defaced ~330 Canvas login portals worldwide, and took the platform offline during US finals week. Instructure subsequently stated it received "digital confirmation of data destruction (shred logs)," a strong implication of a ransom payment the company has not explicitly confirmed. The Free-for-Teacher program has been permanently shut down.

Why it matters: This is the largest education-sector breach on record and a textbook SaaS concentration-risk case. A single under-governed product feature gave the adversary a path into the production multitenant Canvas environment. ShinyHunters' 2026 playbook (third-party integrator compromise to reach downstream customers at scale) builds on its 2024 Snowflake-customer campaign and 2025 Salesforce campaign. The broader question for enterprise CISOs: many organizations now depend on a small number of critical SaaS platforms but lack mature playbooks for tenant compromise, platform-wide outage, or third-party-driven data exposure. The gaps this incident exposes for ed-tech are equally true for HR, CRM, identity, and collaboration SaaS.

🎯 Cloud Security Topic of the Week:

Guardrails Are Dead - What Replaces Them in the Agentic Era

The thread running through every story above is the same: attackers no longer need to break the perimeter when they can manipulate the systems that operate inside it. The Google-disrupted AI zero-day worked because LLMs collapse the cost of producing weaponizable exploits. The Mini Shai-Hulud worm worked because trusted CI/CD primitives (OIDC tokens, SLSA attestations, GitHub Actions triggers) could be weaponized inside a pipeline that defenders had explicitly designed as trusted. The Semantic Kernel CVEs work because an agent's tool-calling layer is, by design, a privileged execution channel that text from anywhere can reach.

David Haber and Paul Barbosa's argument is that the security industry's response to this class of problem (layering more guardrails, more perimeter checks, more "don't do that" rules at the prompt level) is structurally finished. What replaces it is something Haber calls contextual intelligence: evaluating intent, system instructions, behavioral traces, and tool calls in real time against what the agent is actually supposed to be doing. It is a harder problem and a different operating model.

Featured Experts This Week 🎤
David Haber, VP AI Security at Check Point and founder of Lakera · Paul Barbosa, VP of Cloud and SASE at Check Point · hosted by Ashish Rajan

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • Direct prompt injection: User input crafted to override an LLM's system instructions. The original form ("ignore your previous instructions and do as I say") exploits the fact that LLMs do not architecturally distinguish between system instructions, retrieved data, and user input. It is all text and tokens; see the sketch after this list.

  • Indirect prompt injection: Malicious instructions delivered to an agent via the data and tools it consumes: a shared document, an email, an MCP-connected drive, an output from another agent. The victim does not interact with the malicious content; the agent does. As David Haber puts it, indirect prompt injections are often invisible, both hard to spot in real time and undetectable after the fact.

  • Gandalf: Lakera's open-source AI red-teaming game launched ~2.5 years ago. It has reached tens of millions of people and logged over 100 million human-AI interactions, one of the largest datasets of how people actually try to exploit LLMs.

  • Contextual intelligence (as a defense pattern): The replacement Haber proposes for perimeter guardrails. Evaluates the agent's design, system instructions, behavioral traces, and current actions in real time to reason about whether the agent is being manipulated. Requires substantially more telemetry than guardrail-based defenses.

  • Non-human identity (NHI): Service accounts, API keys, OAuth tokens, and AI-agent identities. The credentials that AI agents and automation use to act inside cloud and SaaS environments.

  • Language as the new executable: Paul Barbosa's framing that the domain of exploitation has shifted from code (requiring CS expertise and tool fluency) to natural language (bounded only by human creativity).

This week's issue is sponsored by Beyond Identity

AI Agents Are Running With Keys Your Security Stack Can't See

The pressure to ship AI agents is real: do more with less, automate everything, yesterday. But every agent you deploy carries API keys, accesses sensitive systems, and executes actions your security tools were never designed to see. Legacy architectures leave you choosing between AI velocity and actual governance. Beyond Identity eliminates that tradeoff. It controls the agent launch point with hardware-bound identity and continuous authorization, giving you full visibility into every tool call, MCP connection, and data flow.

💡 Our Insights from these Practitioners 🔍

1. The Threat Model Shift: Language Is the New Executable

The opening frame of the conversation is one cloud security leaders should sit with. Paul Barbosa describes the moment he understood why prompt injection is qualitatively different from prior classes of vulnerability:

"The domain of exploit was very code driven. You had to know what you were doing. But with prompt injection, it's language. And we're only bound by like the limits of human creativity, which we know is boundless." β€” Paul Barbosa

The implication is that the population of people who can produce a working exploit has expanded by orders of magnitude. Haber confirms this empirically from Gandalf's 100M+ interaction dataset:

"The most beautiful example we see β€” 12-year-old kids that are very successful playing the game. And we see some of the most advanced hackers that are also very successful at playing the game." β€” David Haber

That observation is the editorial bridge to this week's Google Threat Intelligence report. AI-assisted vulnerability research is not theoretical for state actors. APT45's recursive PoC validation runs and the criminal actor who produced the disrupted 2FA bypass are both empirical confirmations of what Haber has been seeing from the offensive side for two years.

What to do with this: Audit which assumptions in your threat model still depend on "attacker scarcity," the idea that sophisticated attacks require sophisticated attackers. The assumption no longer holds. Programs that rely on it (vulnerability triage that deprioritizes anything not on KEV, MFA selection that treats SMS as adequate, AppSec coverage that assumes attackers won't fuzz a particular surface) need an explicit refresh.

2. Indirect Prompt Injection Is the Attack That Matters

Haber draws a sharp distinction between direct and indirect prompt injection that maps directly onto how security teams should weight their concern:

"I can exfiltrate your entire corporate inbox in about three seconds while you are on vacation, sipping a mojito. You will not even notice that I did that through an indirect prompt injection. You will have no idea. The indirect ones are often invisible. They're not only hard to spot β€” but also after the fact, you wouldn't even know." β€” David Haber

The Check Point team demonstrated this with a now-canonical example: a Google Doc containing a malicious prompt, shared with a user who never opens it. The user's AI agent, connected to their Drive, reads the document, follows the injected instructions, and exfiltrates data. The victim is not in the interaction loop at any point.

This is the exact attack class Microsoft's Semantic Kernel research validates this week. The agent has tools. The tools have privileges. Any text the agent processes (including text it was asked to summarize, retrieve, or analyze) can become an instruction. The blast radius is determined entirely by what the agent is permitted to do.

What to do with this: Inventory every AI agent in your environment by what data it reads and what tools it can call. Any agent that ingests untrusted content (email, shared documents, web pages, MCP-connected drives, tickets) and also has write or execute privileges is a candidate for indirect prompt injection. Reduce one side of the equation or the other; most agents are over-permissioned on tools relative to what they actually need.

3. Guardrails Are Dead, and Why That Matters Now

This is the central thesis of the conversation, and Haber is unambiguous:

"Last year, the hot talk in town was guardrails. I believe guardrails are dead. With the autonomy and the complexity that agentic AI brings, we need to go away from what are essentially perimeter checks. Putting one guardrail after another β€” 'don't talk about weapons, no hate speech, don't bash the competitor, prompt injection defense' β€” we've been layering on these guardrails on top of AI. That's over. It doesn't scale. What we need to do now is we need to move from perimeter checks to contextual intelligence." β€” David Haber

Contextual intelligence, in Haber's framing, evaluates the agent's design intent, system instructions, traces from past behavior, and user analytics in real time against the action the agent is currently taking. It is reasoning about whether the agent is being manipulated, not a static rule about what it can and cannot say.

The structural critique applies just as cleanly to traditional cloud security tooling. Barbosa surfaces the WAF analogue:

"How do you detect the prompt injection through a WAF? It's impossible. So if that's the access modality β€” that's one of the first places that we chose to make the integration with Lakera and the runtime security β€” to augment the WAF. Otherwise it's a request and there's no existing method to try to detect it." β€” Paul Barbosa

The point is not that WAFs are obsolete. It is that the layer in your stack designed to inspect text-shaped requests has no native concept of "is this text trying to manipulate a downstream model?" Adding that capability is an architectural change, not a rule update.

What to do with this: If your current AI security strategy is a list of prompt filters or content guardrails bolted to model APIs, treat that as a starting baseline and not the end state. The investments that actually scale are agent-level behavioral instrumentation, tool-call auditing, and runtime evaluation of agent actions against declared intent. These are heavier engineering lifts than guardrails, and they need to start now.

4. Identity for Agents Is Not a Solved Problem

When asked whether non-human identity controls can carry the weight that guardrails no longer can, Haber's answer is direct (one word, garbled in the transcript, is corrected in brackets):

"I don't think the identity [problem] for agents has been solved. At all... There are certain important questions around how we want to treat agents... One of the big areas people are looking into is self-replicating agents. So you've got teams that are replicating themselves to maybe do other tasks. How do identities evolve with that? I don't think that's clear at all. Many claim they've solved it. I've not seen anything that convinces me that we have a good handle on that." – David Haber

Barbosa reinforces the point. Least privilege for NHIs is necessary but not sufficient, because the space is moving too fast for any static control to be a stopping point:

"It's never gonna be enough. The space is evolving too fast for any static control to say, okay, I understand I have non-human identity and I'm gonna apply least privilege. That's just table stakes." β€” Paul Barbosa

This connects directly to the Mini Shai-Hulud worm and the Semantic Kernel CVEs. The worm exfiltrates exactly the credentials agents and automation use (AWS IAM keys, GitHub PATs, Vault tokens) and uses them to expand. The Semantic Kernel research shows that the agent's own service identity is the blast radius. Least privilege bounds the damage; it does not prevent the path.

What to do with this: Treat NHI governance as a year-long program, not a project. The minimum tactical baseline: every AI agent in production has its own service identity (not a shared one), the identity is scoped to the specific tools and data the agent needs, and there is logging that tells you when the identity is used outside its expected pattern. Beyond that, plan for the harder problems Haber names: identity for self-replicating agents, identity that travels across environments, distinguishing "Ashish acting via an agent" from "an agent acting on Ashish's behalf."

5. The Same Action, Catastrophic by Context

Barbosa's framing of why context-aware defense matters is the cleanest articulation of the agentic security problem:

"The same action that an agent could take could be okay β€” or it could be catastrophic, just depending on the conditions. The constraint on AI is never gonna be security, unfortunately. The constraint is gonna be productivity β€” and productivity by its very nature is always to be more helpful, is gonna ask for more and more access, more and more authorization. And I think as humans we're gonna gladly grant that. That same action taken by an attacker could be catastrophic." β€” Paul Barbosa

This is the exact dynamic in the Semantic Kernel SessionsPython case: a file-transfer helper is benign in the workflow it was written for, and an arbitrary host write primitive in the hands of an injected prompt. The function did not change. The context did.

What to do with this: When evaluating agent deployments, the question is not "does this tool seem dangerous?" but "what is the worst action this tool enables if the agent is being manipulated by content it just ingested?" Most production agents have not been audited against that question. Start with the agents that touch customer data or cloud control planes.

6. AppSec, Network, Data - It's All One Problem Now

The conversation closes on an organizational point that maps directly onto how this week's news has to be triaged. Ashish presses on whether AI security is an AppSec problem or a data security problem. Haber's answer:

"Now it's everything." β€” David Haber

Barbosa expands on what that means operationally:

"It used to be like they own the tool β€” they're the network security team, they got the firewall, we'll get a ServiceNow ticket, it'll go to them. Now I think everyone more than ever, it's everyone's problem. If I'm a CISO, I'm going to every domain that I have a tool set and saying β€” how are you solving for this? Because it can render itself through your control, your tool set, or the applications that you're protecting." β€” Paul Barbosa

The Mini Shai-Hulud worm illustrates the point. The story is simultaneously a CI/CD problem (poisoned GitHub Actions), an identity problem (OIDC token theft), a secrets problem (Vault and IAM exfiltration), an endpoint problem (persistence into Claude Code and VS Code config), and a SaaS problem (npm and PyPI as the distribution channel). No single domain owner can fully respond to it.

What to do with this: This week is a forcing function to ask, across every security domain in your organization, the same question Barbosa describes: "how are you solving for this?" If the answer is "we're not, that's another team," find the gap and own it.

7. The Defense Window Is Open, But It Is Closing

Haber's closing observation is the one worth carrying into next week's planning:

"We are actually at a very unique time, I believe right now, where we still have a chance for defense to catch up. I see both offensive security and defense on an exponential curve. But I think the question is β€” what's the exponent? How fast are we actually moving?" β€” David Haber

The Google-disrupted exploit was caught because GTIG was looking; the next one may not be. Mini Shai-Hulud broke an attestation primitive (SLSA) that defenders were starting to trust. Microsoft framed the Semantic Kernel research as a class problem, not a one-off.

What to do with this: Pick one of the harder problems (agent behavioral monitoring, NHI lifecycle management, CI/CD supply chain attestation that survives runner compromise) and make actual progress on it this quarter.

🧠 Mental Model - Language Is the New Executable

If language is executable, then every place an agent reads untrusted data is a place an attacker can run code.

Guardrails were the AV scanner of the LLM era: pattern-matching on a target that mutates faster than the patterns. The successor is not a better filter. It is runtime context: what was this agent told to do, what is it doing now, and does the second match the first?

Cloud security teams that build that telemetry layer in 2026 will be the ones positioned to defend in 2027. Teams that keep adding guardrails will be running an antivirus strategy against an autonomous adversary.

Resources & Links 🔗

  • Check Point AI Security research – AI security threat research and Lakera-related publications

  • Gandalf by Lakera – gandalf.lakera.ai – AI red-team game, educational tool for prompt injection patterns

  • CISA Known Exploited Vulnerabilities Catalog

  • Google Cloud Threat Intelligence – AI Threat Tracker (May 2026)

  • Microsoft Security Blog – Semantic Kernel research disclosure

  • Unit 42 – PAN-OS CVE-2026-0300 Threat Brief

  • Wiz / Snyk – Mini Shai-Hulud analyses


Question for you (reply to this email):

🤔 If guardrails are dead, what is the first thing you take out of your AI security stack – and what do you put in its place?

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]

We would love to hear from you 📢 with a feature or topic request, or if you would like to sponsor an edition of Cloud Security Newsletter.

Thank you for continuing to subscribe, and welcome to the new members of this newsletter community 💙

Peace!

Was this forwarded to you? You can sign up here to join our growing readership.

Want to sponsor the next newsletter edition? Let's make it happen.

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

Check out our sister podcast, AI Security Podcast.