🚨 Lovable's Blueprint for Security in the Age of Vibe-Coding and Agentic AI

This week's breakdown with Igor Andriushchenko (Head of Security, Lovable) shows how AI-native companies are already redesigning security for this new reality. Topics include agentic AI governance, identity and access controls for AI agents, SCA in the LLM era, AI-assisted AppSec workflows, and the Mandiant M-Trends 2026 findings on cloud initial-access vectors.

Hello from the Cloud-verse!

This week's Cloud Security Newsletter topic: Governing AI Agents as Federated Developers: The New Identity and Access Frontier (continue reading)

In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JPMorgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who, like you, want to learn what's new in cloud security each week from industry peers, many of whom also listen to the Cloud Security Podcast & AI Security Podcast.

Welcome to this week's Cloud Security Newsletter

The velocity of change inside AI-first engineering organizations is no longer a competitive talking point; it is a security crisis hiding in plain sight. When a single developer can generate and deploy more code in a day than an entire team could in a week, every assumption your AppSec program was built on needs to be re-examined.

This week we sit down with Igor Andriushchenko, Head of Security at Lovable, one of Europe's fastest-growing AI-native companies (the platform that lets anyone build fully functional web apps without writing a line of code). Igor joined when Lovable had 40 employees; six months later it had grown to 150+. He brings a rare vantage point: operating security at the bleeding edge of the agentic AI revolution, at a company where developers are not just using AI tools, they are building AI-native products at scale. Hosted by Ashish Rajan of the Cloud Security Podcast and AI Security Podcast, this conversation maps exactly what enterprise security leaders need to rethink now before the volume of AI-driven change overwhelms their existing controls.

We also cover the week's most critical news, including the Stryker cyberattack, a claimed 100GB breach at Crunchyroll, Mandiant's M-Trends 2026 cloud findings, and the AI-powered supply chain threat campaigns that are directly relevant to every organization adopting agentic developer tooling. [Listen to the episode]

⚡ TL;DR for Busy Readers

  • AI-generated code is driving 100× the change velocity; your SAST, DAST, and SCA pipelines are already failing under the load. Retool now.

  • Treat AI agents like federated developers: they inherit every credential and permission their human operator holds. Apply PAM, least privilege, and human-in-the-loop escalation for privileged actions.

  • AI-hallucinated supply chain risk is real: LLMs recommend abandoned or phantom packages. Layer AI-native SCA with dependency pinning and registry controls.

  • Mandiant M-Trends 2026: voice phishing (23%) and SaaS token theft are your top cloud initial-access vectors, not exploits (only 6%). Rethink your detection priorities.

  • The winning AI adoption play is not top-down mandate; it is building the internal skills, MCP server ecosystem, and data connections that make AI genuinely useful to security teams in their actual daily work.

📰 THIS WEEK'S TOP 5 SECURITY HEADLINES

Each story includes why it matters and what to do next; no vendor fluff.

1. Healthcare Giant Stryker Hit by Cyberattack

📰 What Happened: Medical technology leader Stryker confirmed it suffered a cyberattack, with containment measures underway and external forensic incident response firms engaged. Investigations are ongoing and the full scope has not yet been disclosed.

๐Ÿ” Why It Matters: Healthcare remains the highest-value target in critical infrastructure, not only for ransomware groups but for nation-state actors interested in supply chain leverage. Stryker's sprawling ecosystem of connected medical devices, SaaS platforms, and vendor integrations exemplifies the modern enterprise attack surface, one where a compromise upstream can cascade into clinical environments. This incident is a sharp reminder that IT/OT convergence risk is no longer theoretical: segmentation gaps between cloud workloads, operational technology, and SaaS-connected clinical systems are actively being exploited.

✅ Key Actions:

  • Validate third-party access controls; enforce ZTNA and least privilege for all vendor and partner integrations.

  • Harden segmentation between IT, OT, and cloud workloads; assume lateral movement paths exist until tested.

  • Exercise your ransomware response playbook across hybrid environments, including cloud failover and clinical backup systems.

2. $120M Funding Round for Agentic Access Management Signals the NHI Security Era Has Arrived

What Happened: On March 19, 2026, Oasis Security announced $120 million in Series B funding led by Craft Ventures, with participation from Sequoia Capital and Accel, bringing total funding to $195 million. The round reflects the rise of AI agents becoming embedded across enterprise infrastructure. Over the past year, Oasis has seen new ARR grow 5× year over year, with a majority of its client base coming from the Fortune 500.

Why It Matters: Machine identities now outnumber humans 82:1. The systems designed to govern access were built for people, not autonomous systems. The Oasis round is venture capital confirming what CISOs are already experiencing: AI agents with cloud permissions are the fastest-growing and least-governed identity class in enterprise environments today.

The Trivy supply chain attack this week, in which stolen CI/CD secrets included AWS IAM keys, GCP service accounts, and Kubernetes tokens, illustrates exactly the problem Oasis is solving. Every AI agent and automation workflow introduced into cloud environments creates non-human identities that require lifecycle management, least-privilege enforcement, and real-time access governance. One survey found 79% of IT professionals feel ill-equipped to handle attacks tied to non-human identities, even as adoption of AI agents continues to climb.

3. RSA 2026: The AI-Powered Security Tooling Surge and the Governance Gap

📰 What Happened: At RSA Conference 2026, CrowdStrike, Palo Alto Networks, Cisco, and dozens of emerging vendors unveiled new AI-driven security capabilities spanning autonomous detection, accelerated incident response, and agentic SOC workflows. The announcements signal an arms race between attackers and defenders in which AI is the primary lever.

๐Ÿ” Why It Matters: The RSA announcements confirm what Igor articulated clearly in this week's transcript: AI is now being deployed on both sides of every security encounter. The risk for enterprise security leaders is not falling behind on AI adoption it is the governance gap that opens when AI tooling is deployed without the data connections, tuned models, and human oversight structures needed to make it reliable. Hallucinations in security tooling are not just embarrassing they generate alert fatigue, erode analyst trust, and create the dangerous illusion of coverage. The organizations that will win are those that integrate AI into existing SecOps workflows thoughtfully, not those that deploy the most tools.

✅ Key Actions:

  • Prioritize AI-assisted triage over full autonomy; human review of AI-surfaced findings remains essential at this stage of maturity.

  • Validate model outputs rigorously; establish guardrails against hallucinated findings before operationalizing any AI detection capability.

  • Integrate AI tooling into your existing SecOps stack; parallel tool sprawl increases noise and management burden.

🔗 Source: RSA Conference 2026 Coverage

4. AI + Supply Chain Attacks Target Developer Ecosystems and Cloud Pipelines

📰 What Happened: Recent threat intelligence documents active campaigns abusing developer tooling ecosystems (malicious VS Code extensions, poisoned GitHub repositories, and AI-themed lure packages) to deliver infostealers and backdoors directly into enterprise cloud environments.

๐Ÿ” Why It Matters: The developer is the new perimeter. Malicious packages mimicking legitimate AI coding tools are being published to npm, PyPI, and similar registries, targeting the exact toolchain that engineering teams are actively expanding as part of their AI adoption push. Igor raised precisely this risk in discussing AI-recommended abandoned packages and dependency confusion attacks: AI coding assistants can introduce compromised or phantom dependencies at a rate no human reviewer can track without automated controls. Cloud credentials harvested from a developer's environment translate directly into cloud infrastructure access.

✅ Key Actions:

  • Enforce secure SDLC controls: mandatory SCA scanning on every PR, with AI-native tooling that understands business logic, not just CVE signature matching.

  • Restrict use of unverified IDE extensions, MCP servers, and AI skill packages; build an internal approved registry.

  • Monitor developer credential usage patterns in cloud environments; flag anomalous API activity from developer identity paths.

5. TeamPCP/Trivy Supply Chain Attack: 1,000+ SaaS Environments Hit, Lapsus$ Joins Extortion Wave

What Happened

On March 19, 2026, threat actor TeamPCP compromised Aqua Security's Trivy vulnerability scanner, injecting a credential-stealing payload into CI/CD pipelines across thousands of repositories by force-pushing 75 version tags in aquasecurity/trivy-action to malicious commits. The attack exploited credentials retained from an incomplete remediation of a prior breach.

The campaign has since escalated significantly. Mandiant CTO Charles Carmakal confirmed 1,000+ enterprise SaaS environments are actively compromised, with projections of "another 10,000" downstream victims as fallout continues. TeamPCP is now confirmed to be channeling stolen access to Lapsus$, converting a credential theft operation into an active extortion campaign. Attackers also pivoted into LiteLLM (a widely used AI middleware library) using stolen Trivy credentials, and deployed CanisterWorm to backdoor 29+ npm packages. Notably, the LiteLLM attack was deliberately timed to coincide with the RSA Conference, while defenders were distracted. Cached malicious Trivy Docker images (0.69.4–0.69.6) continue to circulate via mirror.gcr.io despite takedowns.

Why It Matters

The payload targeted the full cloud credential stack: AWS IAM keys, GCP service account tokens, Azure service principals, Kubernetes tokens, SSH keys, and Docker registry credentials. The Lapsus$ involvement means stolen credentials are already distributed and monetized; the remediation window has passed for affected environments. The LiteLLM pivot signals that AI/ML pipeline infrastructure is now a direct lateral movement target. Three structural lessons: tag-based GitHub Actions provide no integrity guarantee; incomplete credential rotation turns one breach into a campaign; and runtime detection outperformed every static control deployed.

Immediate Actions

  • Audit GitHub Actions logs for trivy-action or setup-trivy runs between 17:00–23:13 UTC on March 19; search for tpcp-docs repos in your GitHub org

  • Treat all CI/CD secrets as compromised if affected Actions ran; rotate immediately

  • Pin all GitHub Actions to full commit SHAs, not version tags

  • Audit LiteLLM deployments and associated credentials in AI/ML pipelines

  • Verify npm and PyPI dependency integrity against Socket's CanisterWorm IOCs

  • Do not assume takedowns ended exposure; check mirror.gcr.io and other caches

  • Block domain scan.aquasecurtiy[.]org and IP 45.148.10.212
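The SHA-pinning action above can be automated in CI. Below is a minimal sketch (not an official tool) that scans workflow text for `uses:` references not pinned to a full 40-character commit SHA; the sample workflow, action versions, and the embedded SHA are illustrative.

```python
import re

# Matches "uses: owner/repo@ref" steps in GitHub Actions workflow YAML.
USES_RE = re.compile(r"uses:\s*([\w./-]+)@(\S+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")  # a full commit SHA

def find_unpinned_actions(workflow_text: str) -> list[tuple[str, str]]:
    """Return (action, ref) pairs whose ref is not a full commit SHA."""
    return [(action, ref)
            for action, ref in USES_RE.findall(workflow_text)
            if not SHA_RE.match(ref)]

# Illustrative workflow: two tag-pinned actions (flagged) and one SHA-pinned.
workflow = """
jobs:
  scan:
    steps:
      - uses: actions/checkout@v4
      - uses: aquasecurity/trivy-action@0.28.0
      - uses: actions/setup-node@1a4442cacd436585916779262731d5b162bc6ec7
"""
print(find_unpinned_actions(workflow))
```

Running this in a pre-commit hook or a scheduled audit job surfaces every tag- or branch-pinned action before an attacker can retarget the tag, which is exactly the integrity gap the Trivy attack exploited.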

🎯 Cloud Security Topic of the Week:

Governing AI Agents as Federated Developers: The New Identity and Access Frontier

The most consequential security architecture challenge of 2026 is not a new vulnerability class; it is the identity question posed by AI agents operating autonomously inside enterprise environments. When a developer runs ten AI coding agents overnight, each agent is not just generating code. It is operating with that developer's credentials, accessing that developer's permitted systems, and taking actions that the organization's existing IAM, PAM, and audit frameworks were never designed to capture or govern.

Igor Andriushchenko frames this with unusual clarity: the mental model shift required is from thinking of AI agents as tools to recognizing them as delegated principals, agents that inherit the permissions, credentials, and organizational trust of the human who invoked them. That reframing has immediate, practical implications for how cloud security architects design access controls, audit trails, and escalation paths in an agentic world.

This week, we go deep on what that actually looks like in practice: from PAM-enforced human-in-the-loop escalation for privileged actions, to deny-list prompting strategies, to the skills and MCP server ecosystem that makes AI genuinely useful to security teams rather than just a mandate to fulfill.

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • Agentic AI / AI Agents: AI systems that do not just respond to prompts but autonomously take multi-step actions (writing code, calling APIs, browsing the web, running commands) on behalf of a user. The key security distinction: agents act, not just advise. When agents run with real credentials and real system access, the blast radius of a compromise or error scales accordingly.

  • MCP Server (Model Context Protocol): An emerging open standard that allows AI agents to connect to external tools, data sources, and APIs in a structured way. MCP servers define what capabilities an agent can invoke. Analogous to API integrations in traditional software, but with the critical difference that AI agents can discover and chain MCP capabilities dynamically. Supply chain attacks targeting MCP marketplaces are already documented.

  • PAM (Privileged Access Management): A security discipline and tooling category focused on controlling, monitoring, and auditing access to privileged accounts and credentials. In the context of AI agents, PAM provides the mechanism for requiring human escalation before an agent can access production secrets, high-privilege API keys, or sensitive system operations, creating the "healthy friction" Igor describes.

  • SCA (Software Composition Analysis): Automated tooling that identifies open-source and third-party components in a codebase, mapping them against known vulnerabilities (CVEs), license issues, and, increasingly, supply chain risk signals. In the AI coding era, SCA is under new pressure: LLMs can recommend abandoned, fictional, or malicious packages, and the volume of dependencies introduced per day has multiplied dramatically.

  • SAST / DAST: Static Application Security Testing (SAST) analyzes source code for security issues without executing it. Dynamic Application Security Testing (DAST) tests a running application by simulating attacks. Both categories are being disrupted by AI-native alternatives that build semantic models of application business logic rather than relying on pattern matching, in order to find high-signal, low-noise findings like broken access control and business logic flaws.

  • Dependency Confusion Attack: An attack technique in which a threat actor publishes a malicious package to a public registry (npm, PyPI, etc.) with a name that matches or closely resembles a private internal package name. AI coding assistants that recommend package names without verifying their provenance or freshness can be manipulated into recommending the malicious public package over the intended private one.

  • OAuth Token Harvesting: The theft of OAuth access tokens (short-lived credentials used by SaaS applications to authorize access) in order to impersonate legitimate users or service accounts. Per M-Trends 2026, this is a primary technique in cloud initial-access campaigns, particularly when tokens are embedded in SaaS vendor environments with broad downstream permissions.

  • Voice Phishing (Vishing): Social engineering attacks conducted via telephone or voice-over-IP, often combined with AI voice synthesis to impersonate executives, IT help desks, or vendors. M-Trends 2026 identifies vishing as responsible for 23% of cloud intrusions, frequently used to convince targets to approve MFA push notifications or surrender credentials verbally.

  • Prompt Injection: An attack against AI systems in which malicious instructions are embedded in content that the AI processes (documents, web pages, API responses), causing the AI agent to take unintended actions. In the context of AI coding agents with cloud access, successful prompt injection can result in credential exfiltration, data destruction, or unauthorized API calls.

  • ASBOM (AI Software Bill of Materials): An emerging practice of documenting the AI models, datasets, and AI-dependent components within a software product, analogous to a traditional SBOM for open-source dependencies. Igor describes using ASBOM in enterprise sales contexts, where banks are beginning to require vulnerability disclosure and reachability analysis for AI-dependent components before procurement approval.
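To make the Dependency Confusion definition above concrete, here is a small sketch of a name-collision check that a pre-install hook could run. The internal package names and the 0.85 similarity threshold are invented for illustration.

```python
import difflib

# Hypothetical internal package names (assumption: your private registry).
INTERNAL_PACKAGES = {"acme-auth-client", "acme-billing-sdk", "acme-logging"}

def confusion_risk(candidate: str, threshold: float = 0.85) -> list[str]:
    """Flag internal packages whose names a public 'candidate' package
    matches or closely resembles, i.e. the dependency-confusion pattern."""
    if candidate in INTERNAL_PACKAGES:
        return [candidate]  # exact collision with an internal name
    return [
        internal for internal in INTERNAL_PACKAGES
        if difflib.SequenceMatcher(None, candidate, internal).ratio() >= threshold
    ]

print(confusion_risk("acme-auth-client"))   # exact internal-name collision
print(confusion_risk("acme-auth-cliente"))  # near-match typosquat
print(confusion_risk("left-pad"))           # unrelated, no flags
```

A production version would pull the internal package list from your private registry and run on every dependency an AI assistant proposes, before `pip install` or `npm install` ever executes.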

This week's issue is sponsored by Varonis

AI Security Requires More Than Visibility. It Requires Control. 

Security leaders are under pressure to enable AI innovation while managing a rapidly expanding attack surface across cloud, identity, and data layers. AI agents and copilots can introduce new access paths, automated high-impact actions, and accelerate threat timelines. 

Varonis Atlas helps organizations secure AI end-to-end - from understanding usage and enforcing guardrails to detecting suspicious activity and reducing risk dynamically. Join our upcoming webinar to learn how Varonis Atlas can help security teams operationalize AI security at scale. 

💡 Our Insights from this Practitioner 🔍

The CI/CD Pipeline Is Being Load-Tested to Failure

Igor's opening framing is the most important thing any security leader can hear right now: the infrastructure that enterprise security programs were built on (the code review gates, the SAST scan cadences, the human-in-the-loop approval workflows) is failing not because it was badly designed, but because it is being subjected to a load it was never engineered to handle. "These old rails that CI/CDs run on, they are just getting load tested and reaching its limits in every organization." Igor, Lovable

A developer running ten AI coding agents in parallel overnight can produce more PRs in eight hours than a traditional team produced in a month. Each of those PRs represents state change in the system. Each state change is a potential security regression. And the tooling that security teams depend on to catch those regressions (SAST scanners, human reviewers, manual threat models) cannot keep pace with the throughput.

The practical implication for enterprise security architects is urgent: the right question is not "how do we improve our existing SAST pipeline?" It is "what does a security program look like that was designed for 1,000 commits per day?" That requires AI-native tooling with semantic code understanding, not pattern matching; autonomous triage agents that can investigate findings without waiting for analyst bandwidth; and a risk acceptance framework that explicitly acknowledges the speed-security tradeoff rather than pretending it does not exist.

🔑 Action: Conduct a throughput stress-test of your existing security tooling: how many PRs per day can your current SAST, SCA, and code review process handle before coverage degrades? That number is your current security ceiling, and AI adoption is almost certainly pushing your organization toward it.
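The stress-test can start as back-of-envelope arithmetic before you instrument anything. A minimal sketch; every input number below is an assumption you would replace with your own measurements.

```python
def security_ceiling(prs_per_day: int,
                     sast_minutes_per_pr: float,
                     review_minutes_per_pr: float,
                     reviewer_hours_per_day: float,
                     parallel_scan_slots: int) -> dict:
    """Estimate the PR volume at which coverage degrades: the ceiling is
    whichever binds first, scanner throughput or human review capacity."""
    scan_capacity = parallel_scan_slots * (24 * 60) / sast_minutes_per_pr
    review_capacity = (reviewer_hours_per_day * 60) / review_minutes_per_pr
    ceiling = min(scan_capacity, review_capacity)
    return {
        "scan_capacity_prs": int(scan_capacity),
        "review_capacity_prs": int(review_capacity),
        "ceiling_prs_per_day": int(ceiling),
        "over_capacity": prs_per_day > ceiling,
    }

# Illustrative scenario: agents producing 200 PRs/day vs a human-scaled
# pipeline with 4 scan runners and two reviewers (16 reviewer-hours/day).
print(security_ceiling(prs_per_day=200, sast_minutes_per_pr=8,
                       review_minutes_per_pr=20,
                       reviewer_hours_per_day=16,
                       parallel_scan_slots=4))
```

Even this crude model usually shows that human review, not scanner compute, is the binding constraint, which is why the AI-assisted triage recommendations elsewhere in this issue matter.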

AI Agents Are Federated Developers: Govern Them Accordingly

The most architecturally important insight from the conversation is Igor's framing of AI agents as delegated principals. When a developer invokes a coding agent, they are not just using a smart autocomplete tool; they are federating their credentials, their access scope, and their organizational trust to an autonomous system that will act on their behalf without human review of each individual action. "Developer federates its access, its credentials, its knowledge to the AI. So you should almost see it as like another developer, essentially." Igor, Lovable

This reframing has immediate, practical consequences. If your IAM architecture allows a developer to access production secrets on-demand, then an AI agent running as that developer can also access production secrets on-demand, and can do so thousands of times per day, silently, without any of the human judgment that (sometimes) catches misuse. The solution Igor implements at Lovable is elegant: PAM-enforced escalation for privileged actions. Developers can freely use AI agents for development-scoped work, but anything requiring production credential access requires a human to explicitly perform an escalation step that the agent cannot complete autonomously.

This is not a new concept; it is the principle of least privilege applied to a new class of principal. But its implementation requires deliberate architectural decisions: which actions should require human escalation? Where does the PAM boundary sit? How do you enforce it when AI agents are being invoked from developer machines, CI/CD pipelines, and SaaS platforms simultaneously?

๐Ÿ”‘ Action: Map your current developer access entitlements and identify which of those entitlements an AI agent inheriting that access could abuse autonomously. For each high-risk entitlement, evaluate whether PAM-enforced human escalation is technically feasible and operationally acceptable given your speed-security tradeoff.
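One way to start the entitlement-mapping exercise is to scan IAM policy documents for actions an inheriting agent could abuse silently. A minimal sketch; the hand-picked high-risk action list and the sample policy are assumptions you would tune for your organization.

```python
import json

# Actions an autonomous agent could abuse without human review
# (illustrative starter list; tune per organization).
HIGH_RISK_ACTIONS = {"secretsmanager:GetSecretValue", "iam:CreateAccessKey",
                     "s3:DeleteBucket", "kms:Decrypt"}

def risky_entitlements(policy_json: str) -> list[str]:
    """Return allowed actions in an IAM policy document that an AI agent
    inheriting the developer's access could invoke autonomously."""
    findings = []
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # IAM allows a bare string here
            actions = [actions]
        for action in actions:
            if action == "*" or action in HIGH_RISK_ACTIONS:
                findings.append(action)
    return findings

# Illustrative developer policy: read-only S3 plus on-demand secret access.
dev_policy = json.dumps({"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"},
    {"Effect": "Allow", "Action": "secretsmanager:GetSecretValue",
     "Resource": "*"},
]})
print(risky_entitlements(dev_policy))
```

Each flagged action is a candidate for the PAM-enforced escalation boundary Igor describes: the entitlement stays with the human, and the agent must trigger a human-completed step to use it.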

The Deny-List Paradox: Why Standard Security Prompting Logic Fails for Agents

One of the most counterintuitive insights from the conversation addresses a subtle but consequential difference between how security professionals think about access control and how AI agents process instructions. Security practitioners are trained to prefer allow-lists over deny-lists: define what is permitted and block everything else. Igor explains why this logic inverts for AI agents. "For the agents, it's reversed. Because if you do an allow list, it'll always focus against that. The real world has so many more paths, but it will always choose the paths that you say you are allowed to do this." Igor, Lovable

When you give an AI agent a list of permitted actions, the agent optimizes toward executing those actions, treating the allow-list as a set of objectives rather than a boundary. The agent will find paths to those permitted actions that you did not anticipate and did not sanction. The more effective approach, Igor argues, is to explicitly instruct agents on what not to do, providing a security context document (embedded in the agent's system prompt or knowledge file) that defines the threat model, the sensitive components, and the prohibited actions. This is analogous to a new-hire security briefing: rather than listing every permitted workflow, you explain the business, what needs protecting, and what constitutes a security incident.

🔑 Action: For every AI coding agent or security agent your team deploys, create a security context document (a structured summary of your threat model, sensitive components, and prohibited actions) and embed it in the agent's knowledge file or system prompt. Treat it as a living document that security updates as the codebase and threat landscape evolve.
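A minimal sketch of what assembling such a security context into a system prompt might look like. The threat model, component names, and deny-list rules below are illustrative inventions, not Lovable's actual document.

```python
# Illustrative security context document (all contents are assumptions).
SECURITY_CONTEXT = {
    "threat_model": "Customer PII in the billing service; secrets in Vault.",
    "sensitive_components": ["billing-service", "auth-gateway", "vault"],
    "prohibited_actions": [
        "Never read or print environment variables containing credentials.",
        "Never call production APIs; use the staging endpoints only.",
        "Never add a new external dependency without flagging it for review.",
    ],
}

def build_system_prompt(base_prompt: str, ctx: dict) -> str:
    """Append a deny-list security briefing to the agent's system prompt."""
    lines = [base_prompt, "", "## Security context (do not violate)",
             f"Threat model: {ctx['threat_model']}",
             "Sensitive components: " + ", ".join(ctx["sensitive_components"]),
             "Prohibited actions:"]
    lines += [f"- {rule}" for rule in ctx["prohibited_actions"]]
    return "\n".join(lines)

prompt = build_system_prompt("You are a coding agent for the ACME repo.",
                             SECURITY_CONTEXT)
print(prompt)
```

Keeping the context in a structured file (rather than free text pasted into each prompt) is what lets security own and version it as the living document the action item describes.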

AI-Native SCA: The Overlooked Surface That LLMs Are Actively Expanding

Igor raises a specific, concrete risk that deserves standalone attention: AI coding assistants are actively introducing supply chain vulnerabilities that existing SCA tooling is not equipped to catch. The mechanism is hallucination: LLMs confidently recommending libraries that do not exist at the specified version, have not been maintained in years, or share names with malicious packages published by attackers. "I asked AI about what is a good library for PII sanitization in Go, and it gave me a library and said it's actively maintained and the last commit was eight years ago." Igor, Lovable

This is not an edge case. Research presented at Black Hat has demonstrated methods for generating package names that, in some percentage of LLM invocations, cause the model to recommend a malicious attacker-controlled package over the intended legitimate one. As AI-generated code becomes the majority of enterprise commits, the aggregate risk from hallucinated or manipulated package recommendations becomes a meaningful supply chain vector, one that bypasses traditional SCA scanners that only check for known-bad CVEs against packages that are correctly installed.

The defensive architecture requires layering: AI-native SCA tools that understand the semantic context of dependency introduction (not just CVE signature matching), registry-level controls that restrict installation to approved package sources, and dependency pinning policies that prevent silent version drift. Critically, the AI agent's code generation pipeline itself must be instrumented not just the final PR.

🔑 Action: Audit your current SCA tooling against the AI-hallucination threat model: does it flag packages that are minimally maintained, have suspicious commit histories, or share names with known internal packages? If not, evaluate AI-native SCA vendors that surface these signals.
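A sketch of the staleness/hallucination check described above, run against stubbed registry metadata. The package names and release dates are fabricated for illustration; a real implementation would query the registry's API rather than a local dict.

```python
from datetime import datetime, timedelta

# Stub for registry metadata lookups (fabricated examples, not real data).
METADATA = {
    "go-pii-clean": {"last_release": "2018-03-01"},   # long abandoned
    "requests": {"last_release": "2025-06-01"},        # recently maintained
    "totally-new-lib": None,                           # no registry record
}

def flag_recommendation(name: str, now: datetime,
                        max_age_days: int = 365) -> str:
    """Classify an LLM-recommended package as ok / stale / unknown."""
    meta = METADATA.get(name)
    if meta is None:
        return "unknown"  # no registry record: possible hallucination
    age = now - datetime.strptime(meta["last_release"], "%Y-%m-%d")
    return "stale" if age > timedelta(days=max_age_days) else "ok"

now = datetime(2026, 3, 20)
print(flag_recommendation("go-pii-clean", now))
print(flag_recommendation("requests", now))
print(flag_recommendation("totally-new-lib", now))
```

Wiring a check like this into the PR pipeline turns Igor's "last commit was eight years ago" discovery from a lucky manual catch into an automated gate, and "unknown" results become the signal for phantom packages.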

Building the Skills Ecosystem: The Unsexy Work That Determines AI Security ROI

A recurring theme in the conversation is the gap between announcing an "AI adoption strategy" and actually making AI useful to security teams in their day-to-day work. Igor's experience at Lovable (building an incident responder skill that allows analysts to trigger autonomous investigation of suspicious alerts with a single command) illustrates what genuine AI value looks like versus performative tooling deployment. "The overlooked part is that non-sexy work building skills, making connections, building MCP servers. Organizations usually just put Copilot into everyone's environment. But then what? What is it connected to? What can it do?" Igor, Lovable

The insight here is structural. The value of an AI agent in a security context is almost entirely determined by the quality of the data connections and skills it has access to: the MCP servers that connect it to your SIEM, your cloud audit trails, your vulnerability management platform, your ticketing system. An AI agent without those connections is a sophisticated chat interface. An AI agent with them is a force multiplier.

Ashish reinforces this from a practitioner standpoint: the path to genuine AI security capability is iterative: start with a single, high-value connection (e.g., "query AWS CloudTrail for this suspicious IP"), validate the output, add the next connection, and gradually build toward multi-step autonomous workflows that would previously have required hours of analyst time.

🔑 Action: Identify the three highest-friction, highest-repetition tasks your security team performs (e.g., alert triage, vulnerability verification, IOC lookups across multiple platforms). For each, map the data connections required and evaluate whether an MCP server or skills-based agent could automate the retrieval and initial analysis steps.
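A skills-based agent can start as small as a registry of named, validated functions the agent is allowed to dispatch. A minimal sketch; the skill name, the stubbed event data (reusing the IOC IP from the Trivy story above), and the dispatcher API are all illustrative, standing in for a real MCP server connection.

```python
# Registry of named functions ("skills") the agent may invoke.
SKILLS = {}

def skill(name):
    """Decorator registering a callable as an agent skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("cloudtrail_ip_lookup")
def cloudtrail_ip_lookup(ip: str) -> list[dict]:
    # Stand-in for a real CloudTrail query made via an MCP server.
    fake_events = [{"ip": "45.148.10.212", "event": "GetSecretValue"},
                   {"ip": "10.0.0.5", "event": "ListBuckets"}]
    return [e for e in fake_events if e["ip"] == ip]

def run_skill(name: str, **kwargs):
    """Dispatch a single skill call; unregistered skills are refused."""
    if name not in SKILLS:
        raise ValueError(f"skill not registered: {name}")
    return SKILLS[name](**kwargs)

print(run_skill("cloudtrail_ip_lookup", ip="45.148.10.212"))
```

The explicit registry is the governance hook: new skills land only through code review, so the connections the agent can exercise stay enumerable and auditable as the ecosystem grows.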

The Crawl-Walk-Run Model for AI Security Adoption

Both Igor and Ashish push back firmly against the organizational tendency to jump from zero AI usage to fully autonomous agentic workflows, a pattern that consistently produces failed implementations and organizational backlash. Igor's "air pocket" metaphor is instructive: within any organization, there are specific, bounded use cases where AI can be deployed safely and generate clear value without requiring a complete overhaul of security architecture.

For a large bank with significant regulatory exposure and risk aversion, the air pocket might be internal tooling prototyping with no production data involved. For a fast-moving scale-up, it might be PR-creation by non-developers with mandatory human review before merge. For a security team overwhelmed by SAST findings, it might be an AI agent that fetches the relevant code context and performs initial triage reducing analyst time-to-decision from thirty minutes to three.

The principle generalizes: find the smallest, highest-value problem that AI can solve today, solve it well, generate internal credibility, and use that credibility to expand the scope of AI deployment incrementally. Organizations that attempt to solve the entire class of problems simultaneously (deploying fully autonomous security operations without the foundational skills, data connections, and governance frameworks) are the ones that end up publicly reversing their AI strategy.

Threat Intelligence & Research

Cloud-Native Guidance & Tools

Podcast Episode

Question for you (reply to this email):

🤔 Has your organization mapped which developer entitlements an AI agent could inherit and defined the PAM boundary that requires a human in the loop?

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]

We would love to hear from you 📢 with a feature or topic request, or if you would like to sponsor an edition of the Cloud Security Newsletter.

Thank you for continuing to subscribe, and welcome to the new members of this newsletter community 💙

Peace!

Was this forwarded to you? You can sign up here to join our growing readership.

Want to sponsor the next newsletter edition? Let's make it happen.

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

Check out our sister podcast, AI Security Podcast.