- Cloud Security Newsletter
- Posts
- AI Native Security: Securing the Future as Applications Evolve with AI | Google Cloud Functions Vulnerability
AI Native Security: Securing the Future as Applications Evolve with AI | Google Cloud Functions Vulnerability
This week's newsletter explores how AI is reshaping enterprise security architecture, with expert insights from Ankur Shah of Straiker. From unstructured data challenges to the rise of AI agents, cloud security leaders must understand why traditional security approaches are no longer sufficient for protecting AI-enabled applications.
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter Topic is - AI Native Security: Securing the Future as Applications Evolve (continue reading)

This image was generated by AI. It's still experimental, so it might not be a perfect match!
Incase, this is your 1st Cloud Security Newsletter! You are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to Cloud Security Podcast & AI CyberSecurity Podcast every week.
Welcome to this week's edition of the Cloud Security Newsletter!
As AI adoption accelerates across enterprises, security teams face unprecedented challenges in protecting these new architectures. Traditional security tools weren't designed for AI's unstructured request/response patterns, making it increasingly difficult to secure these evolving environments with conventional approaches.
In this issue, we're diving deep into insights from Ankur Shah, founder of Straiker, who shared his perspectives on AI security challenges and why securing AI-enabled applications requires fundamentally different approaches from traditional cloud security. We'll also look at major security incidents affecting organizations like Coinbase, Marks & Spencer, Microsoft, and more.
📰 THIS WEEK'S SECURITY NEWS
☠️ Google Cloud Functions Vulnerability Raises Broader Security Concerns
Security researchers have identified a privilege escalation vulnerability affecting Google Cloud Platform's Cloud Functions and Cloud Build services. Initially discovered by Tenable Research, the flaw allowed attackers to exploit the deployment process to gain elevated permissions. Google has since patched the issue to mitigate excessive privileges previously granted to default Cloud Build service accounts.
Cisco Talos expanded on the findings by testing similar techniques across multiple cloud platforms. Even with Google's patch in place, researchers demonstrated that similar approaches could be used for environment enumeration and reconnaissance across cloud services, including AWS Lambda and Azure Functions. More Information
Why this matters: This vulnerability serves as an important reminder about the risks of overly permissive service account configurations in cloud environments. Cloud security teams should regularly audit permissions using the principle of least privilege, monitor for unexpected cloud function modifications, and inspect outgoing traffic for potential data exfiltration.
🚨🫣 Coinbase Security Breach Could Cost Up to $400M Following Support Staff Bribery
Cryptocurrency exchange Coinbase disclosed a security breach where attackers bribed "weak links" in its international customer support team to access personal information of approximately 1% of customers. While no passwords, private keys, or funds were compromised, the leaked personal information (names, addresses, phone numbers, email addresses, masked financial information, and transaction histories) enables social engineering attacks against customers.
Coinbase has refused to pay the $20 million ransom demand, instead offering that amount as a reward for information leading to the hackers' arrest. The company expects the breach to cost up to $400 million in remediation costs and customer reimbursements. More Info.
Why this matters: This incident highlights the critical importance of insider threat protection and human security elements in any cloud security program. Cloud security teams should evaluate access controls for customer support platforms and implement better security monitoring for privileged users - even those with supposedly "limited" access to sensitive information.
🚨🫣 Marks & Spencer Faces £300 Million ($400M) in Losses from Cyber Attack
UK retailer Marks & Spencer disclosed that its ongoing cyber incident, which began in April 2025, will cost approximately £300 million ($400 million) in lost sales and disruption costs. The suspected ransomware attack forced the company to suspend online ordering, with digital sales not expected to resume until July. The attack has significantly impacted the company's fashion, home, and beauty sales while creating additional costs from waste, logistics, and manual processes.
CEO Stuart Machin described the incident as a "bump in the road" and announced plans to accelerate infrastructure upgrades, improve network connectivity, and enhance operational resilience. The attack also resulted in the theft of customer personal information, putting individuals at risk of follow-on attacks. More Info Link 1 Link 2
Why this matters: For cloud security leaders, this incident demonstrates how operational disruption from security breaches can create massive financial impact far beyond immediate remediation costs. It underscores the importance of building resilient architectures that can maintain critical business functions even during active security incidents.
🚨 Microsoft Leads International Takedown of Lumma Stealer Malware
Microsoft's Digital Crimes Unit announced a major operation to disrupt Lumma Stealer, an information-stealing malware variant popular among criminal groups. Over a two-month period, Microsoft identified more than 394,000 Windows computers infected with Lumma, which steals passwords, credit card information, bank account details, and cryptocurrency wallets.
After securing a court order, Microsoft seized 2,300 domains forming Lumma's infrastructure, while the U.S. Department of Justice seized its central command structure and disrupted its distribution network. The malware, developed by a Russia-based actor known as "Shamel," has been linked to major attacks on Booking.com and other platforms. More Info
Why this matters: Infostealers represent a significant threat to cloud environments, as they can harvest cached credentials, API tokens, and other authentication material stored on compromised endpoints. Cloud security teams should implement advanced MFA and continuous authentication protocols that can detect suspicious access patterns, even when valid credentials are used.
☠️ Cybercriminals Exploit Kling AI Popularity to Distribute Infostealer Malware
A new malware campaign masquerading as the popular AI media platform Kling AI has been uncovered by Check Point Research. The operation uses fake Facebook ads and counterfeit websites to distribute infostealer malware embedded in seemingly innocent AI-generated media files. The attackers exploited Kling AI's popularity (6 million users since June 2024) to lure victims into downloading ZIP files containing disguised executables.
Once opened, the malware deploys a sophisticated loader that evades security tools and injects a second-stage payload, identified as PureHVNC RAT. This remote access trojan specifically targets cryptocurrency wallets and browser-stored credentials, scanning for over 50 browser extensions linked to digital wallets and monitoring standalone applications like Telegram and cryptocurrency management software. More Information
Why this matters: This attack highlights how threat actors are targeting the growing AI ecosystem with increasingly sophisticated social engineering techniques. Cloud security teams should ensure their security awareness training includes emerging threats related to AI tools and establish clear policies about which AI platforms employees are permitted to use.
CLOUD SECURITY TOPIC OF THE WEEK
The AI Security Revolution: Why Traditional Security Tools Fall Short for AI-Enabled Applications
As AI becomes embedded throughout enterprise technology stacks, security teams face a fundamental challenge: traditional security tools were never designed to handle the unique characteristics of AI-enabled applications. This week, we examine why AI-native security approaches are becoming essential as organizations transition from "AI immigrants" to "AI natives."
Featured Experts This Week 🎤
Ankur Shah - Founder and CEO of Straiker - ex -Palo Alto Networks, Redlock, and Prisma Cloud
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
AI Native Applications: Apps built from the ground up around AI capabilities, with conversational interfaces and unstructured request/response patterns as their fundamental design elements.
RAG (Retrieval-Augmented Generation): A technique that enhances AI outputs by first retrieving relevant information from a knowledge base, then using that information to generate more accurate responses. This helps reduce hallucinations and makes AI more reliable.
Inference: In the AI context, inference refers to when an AI model uses its training to generate a response or perform a task. Inference is the operational phase of an AI model, after it has been trained.
Agentic AI: Systems that can autonomously take actions on behalf of users, rather than just providing information or recommendations. Agentic AI can interact with tools, APIs, and other systems to accomplish tasks.
This week's Issue is sponsored by AI CyberSecurity Podcast
A Weekly Show with hosts - Ashish Rajan & Caleb Sima on the practioners version of AI for Security & Security for AI for CISOs & Security Leaders with guests like CISO of Anthrophic, DeepMind, Google Cloud to name a few.
💡Our Insights from this Practitioner 🔍
The Fundamental Shift in Application Architecture
Ankur Shah highlights a critical insight many security teams have missed: AI isn't just another feature being added to applications – it represents a fundamental change in how applications are structured and how they process data.
"AI adds one more level of abstraction, unlike anything we have seen before. Basically you can use English or natural language to build and consume software. That paradigm has never happened. So what we're gonna see is an explosion of applications. Anybody can build anything."
This structural change has profound security implications. Traditional security tools were designed for structured request/response patterns – API calls with well-defined parameters, user inputs in specific formats, form submissions with validation rules. But AI-enabled applications operate on entirely different principles:
"Most importantly, and fundamentally and this has never happened in the application, is that now you have an unstructured request response pattern in the application. If you're building an e-commerce application, you can say a list product, add to cart, checkout. Very structured Request. Response paradigm. Now you actually have to give AI a task or a goal. So it's a completely unstructured request, a response paradigm."
This means that rule-based security controls, such as WAFs designed to catch SQL injection or parameter tampering, aren't equipped to handle the fluid nature of AI-based interactions. When a user can simply ask an AI to "find all confidential documents about Project X," traditional input validation becomes impossible.
Understanding the Evolving AI Application Stack
To properly secure AI-enabled applications, security teams need to understand how the application stack itself is changing. Shah outlines several critical layers:
Infrastructure layer: Still largely unchanged, typically running on hyperscaler cloud infrastructure
Data layer: Completely transformed with vector databases and RAG systems
Model inference layer: New components for processing AI requests
Prompting layer: Interface for unstructured interactions
Agentic layer: Emerging capabilities that allow AI to autonomously take actions
Each of these layers introduces new security considerations. For example, vector databases store embeddings (mathematical representations of concepts) rather than simple data fields, making traditional data security approaches insufficient. The inference layer introduces concerns about model evasion and jailbreaking that don't exist in traditional applications.
Why Only “AI” Can Secure “AI"?
Perhaps Ankur’s most provocative insight is that traditional rule-based security approaches simply cannot keep pace with the flexibility of AI-based attacks:
"But I'll tell you the most fundamental difference that security teams have to think about, which is the engine. It seems like a buzzword. But it is true, which is you can only secure AI with AI. There is no other way to do it. It's an intractable problem. You give me any scenario, gimme any pattern. If you have a rule-based system that's looking for leakages or safety problems, it's not gonna work."
The reason is simple: if attackers can use natural language to craft their attacks (through techniques like prompt injection), they have virtually unlimited flexibility to evade static rules:
"Most of the enterprises can't even keep track of the existing human, non-human identity part of it. How are they gonna reign in the agentic layer, tool calling layer, all of that stuff? So again, you have to fundamentally build an engine with AI in mind and specifically fine tune LLM models. Not big models, ideally small models that can do it accurately and in real time."
For cloud security teams, this means investing in AI-powered security solutions that can detect anomalous patterns and potentially harmful intents in natural language interactions with AI systems.
The Three Categories of AI-Enabled Organizations
Ankur provides a useful framework for understanding where organizations currently stand in their AI journey:
AI Natives: Organizations building applications with AI at their core, using LLMs for business logic with conversational interfaces as their primary interaction model.
AI Immigrants: Traditional software companies trying to add AI capabilities to existing applications, often by "bolting on a chat bot" to their existing interfaces.
AI Explorers: Companies in early experimental phases, trying to appear AI-first but still figuring out their strategy.
"What you are seeing is now coding copilot does now have a proven ROI. So a lot of enterprises are embracing it, like our, ourselves at Straiker, like we're seeing anywhere from 50 to a hundred percent productivity improvement. Yeah. So that's real. Call center, customer support chat bots, that's like really up there. Now the thing that hasn't happened yet, which is the next level, which is right now, AI is acting as an assistant. That's right. But the whole agentic stuff is still in the early days. It's not taking action. Because we're dealing with non-deterministic systems."
Understanding where your organization falls in this framework is crucial for developing an appropriate security strategy. AI Natives need comprehensive AI-specific security from day one, while AI Immigrants may need to focus on securing the interfaces between traditional and AI components.
The Changing Threat Landscape
Ankur predicts that AI-related threats will eventually exceed all other security categories combined:
"What I predict is the threat landscape with AI is gonna be bigger than all the other security categories combined. Let's start with application security. I think if all apps are gonna have AI as a major component. I don't think the next Gen AppSec is gonna look like a WAF or a current AI security products. I don't think if bulk of the employee traffic is going to AI in the years to come. I don't think SASE and zero trust is the right way to go about it also."
He identifies several key threat vectors that cloud security teams should prepare for:
Prompt injection: Bypassing controls to make AI systems generate harmful content, leak data, or perform unauthorized actions
Data poisoning: Manipulating training or RAG data to influence AI outputs
Identity challenges: Managing and securing agentic identities with appropriate permissions
New endpoint risks: AI agents running on endpoint devices present new attack vectors
Practical Steps for Cloud Security Teams
Based on Ankur's insights, here are practical steps cloud security teams should take:
Gain visibility into AI usage: Implement tools to understand which AI systems are being used across your organization. This is a critical first step before attempting to govern or secure these systems.
Establish governance frameworks: Create clear policies about which AI systems can be used, how they should be configured, and what data they can access.
Implement AI-native guardrails: Deploy security tools specifically designed to detect and prevent AI-specific attacks, such as prompt injection and data exfiltration.
Upskill security talent: Invest in training your security team on AI fundamentals, as Ankur emphasizes: "Yeah. Look, I think the … most important thing is to upskill the talent that you have. Look, this happened in cloud as well. Where you had to hire or retrain people on cloud security."
Evaluate your cloud provider's AI security features: Major cloud providers are beginning to offer AI-specific security tools, but Ankur cautions against vendor lock-in: "Our promise is and the way the customer should think about this is that Hey, do you want a infrastructure model, data layer agnostic security or not. If you are, if you're all in on a ecosystem, AWS Anthropic ecosystem. I think it may make sense for you to do it, but the question I ask is that, like with the inference cost dropping 10 x every year. Like, why would lock in with a single vendor?"
AI Incident Database - Repository of public incidents involving AI systems
OWASP Top 10 for Large Language Model Applications - Security risks specific to LLM applications
Google's AI Security Best Practices - Guidelines for securing generative AI applications
NIST AI Risk Management Framework - Standards for managing AI risks
Question for you? (Reply to this email)
Do you agree the Application Architecture are different after AI?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙
Peace!
Was this forwarded to you? You can Sign up here, to join our growing readership.
Want to sponsor the next newsletter edition! Lets make it happen
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
checkout our sister podcast AI Cybersecurity Podcast