🚨 Critical AI Tool RCE Exposes Developer Machines: Lessons from Block's Escape-Proof Cloud Environments
This week’s cloud security highlights expose a sharp rise in AI development tool vulnerabilities, starting with a critical RCE in Anthropic’s MCP Inspector and a prompt injection flaw in GitLab Duo. But at the heart of it all is a bigger question: how do you keep sensitive data from leaking out of your environment? Our featured expert, Ramesh Ramani (Staff Security Engineer, Block), walks us through how Block built a scalable egress access control system that actually works across multi-cloud environments, developer tools, and real-world incident response.
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter topic: Inside Block’s Escape-Proof Cloud - What AI Tool Breaches Reveal About Egress Security! (continue reading)
In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more. Like you, they want to learn what’s new in cloud security each week from their industry peers, and many of them also listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week's edition of the Cloud Security Newsletter!
This week brought sobering reminders about the expanding attack surface in cloud security, from critical vulnerabilities in AI development tools to sophisticated social engineering campaigns targeting critical infrastructure. These incidents underscore a fundamental challenge many organizations face: controlling what leaves their environments is often harder than controlling what enters.
Our featured expert this week, Ramesh Ramani, Staff Security Engineer at Block, shares insights from building enterprise-scale egress access control systems that bridge the gap between security governance and operational efficiency. His approach to centralizing egress control while maintaining developer productivity offers valuable lessons for organizations struggling with the complexity of multi-cloud environments.
📰 THIS WEEK'S SECURITY NEWS
⚠️ Critical AI Development Tool Vulnerability Exposes Enterprise Networks
A critical Remote Code Execution (RCE) vulnerability in Anthropic's Model Context Protocol (MCP) Inspector project (CVE-2025-49596) carries a CVSS score of 9.4. The flaw allows attackers to execute arbitrary code on developer machines simply by visiting a malicious website while the Inspector tool is running.
Why This Matters: This vulnerability represents one of the first critical security flaws in the emerging AI agent ecosystem, exposing browser-based attacks against AI developer tools. With organizations rapidly adopting AI development frameworks, this demonstrates how localhost-exposed development tools can become enterprise attack vectors. As Ramesh notes in our discussion, "egress has multiple points where you can go out... the number of ways that you can egress through cloud is unimaginable." AI development tools expand this attack surface further.
Sources: Oligo Security | The Hacker News
⚠️ GitLab Duo AI Assistant Vulnerable to Prompt Injection Attacks
Researchers discovered a prompt injection vulnerability in GitLab Duo, an AI coding assistant powered by Claude. Attackers could hide malicious instructions in code or documents, leading the AI to leak source code or suggest harmful links. The vulnerability demonstrates how AI assistants integrated into development workflows can become vectors for data exfiltration and supply chain attacks.
Why This Matters: GitLab Duo represents the growing integration of AI assistants into development workflows. This vulnerability highlights how these tools can become vectors for code exfiltration and supply chain attacks. As organizations adopt AI-powered development tools, they need robust egress controls to prevent sensitive code and data from flowing to unauthorized destinations. This directly relates to Ramesh's discussion of data governance—organizations must control not just what data leaves their environment, but how AI tools process and potentially expose that information through seemingly legitimate interactions.
Sources: Adversa AI | Check Point Research
CLOUD SECURITY TOPIC OF THE WEEK
Lessons from Block's Escape-Proof Cloud Environments
Traditional network security focused heavily on perimeter defense: building strong walls to keep threats out. But as Ramesh Ramani explains, "A lot of focus is given to ingress and rightly so... But what if that's been breached? You want to make sure that no one can get out with your corporate secrets."
This week's focus examines how organizations can implement centralized egress access control that scales across multi-cloud environments while maintaining operational efficiency and strong data governance.
Featured Experts This Week 🎤
Ramesh Ramani - Staff Security Engineer at Block
Ashish Rajan - CISO | Host, Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
Egress Access Control: Security measures that control and monitor outbound network traffic from an organization's infrastructure to external destinations, including the internet and partner networks.
SPIFFE ID: Secure Production Identity Framework for Everyone (SPIFFE) provides a secure identity framework for workloads in heterogeneous environments through cryptographically verifiable identities.
Data Security Levels (DSL): Classification system that categorizes data based on sensitivity and regulatory requirements, enabling granular access controls and compliance enforcement.
Scattered Spider: A cybercrime group known for sophisticated social engineering attacks, particularly targeting help desks and customer service representatives to gain initial access to corporate networks.
Model Context Protocol (MCP): An open protocol developed by Anthropic that standardizes how AI applications connect with external tools and data sources.
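For a concrete sense of what a SPIFFE ID looks like, here is a minimal Python sketch. The `parse_spiffe_id` helper is invented for illustration and is not part of any SPIFFE library:

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> dict:
    """Split a SPIFFE ID (spiffe://<trust-domain>/<workload-path>) into parts."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID: {spiffe_id}")
    return {"trust_domain": parsed.netloc, "workload_path": parsed.path}

# A SPIFFE ID names a specific workload within a trust domain, e.g.:
parse_spiffe_id("spiffe://example.org/payments/api")
# -> {"trust_domain": "example.org", "workload_path": "/payments/api"}
```

Because the identity is attached to the workload itself rather than to an IP address, egress policy can follow an application wherever it runs.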
This week's issue is sponsored by Varonis.
Redefining Data Security Strategies for a Gen AI World
AI is transforming how we work — but is your data security keeping up?
Learn from our data security experts to better understand the AI risk landscape, how to protect your data without slowing down company progress, and better yet - how to use AI to your advantage for even better data protection.
Sign up today for our free session and get access to a free Generative AI risk assessment when you attend.
💡 Our Insights from this Practitioner 🔍
1 - The Problem with Traditional Egress Control
Ramesh provides a compelling analogy for why egress control matters: "I was talking to my mother a few days ago... I was like, look, if a hacker managed to enter your system, or if a thief managed to enter a house, if they can't get out, you've solved half the problem there."
The challenge with traditional approaches becomes apparent in cloud environments. As Ramesh explains, "With ingress you perhaps could have singular points. Egress is not the same. Egress has multiple points where you can go out... the number of ways that you can egress through cloud is unimaginable."
This complexity is compounded in multi-cloud environments where different business units may have acquired separate companies, each with their own infrastructure and security baselines. The typical response, managing egress through GitOps and exception-based approvals, doesn't scale at enterprise levels.
2 - The Source of Truth Foundation
Block's approach centers on having comprehensive sources of truth for both applications and approved partners. As Ramesh notes, "Block is in a unique place because we have sources of truth for everything... We have a source of truth for all our applications, we have a source of truth for all our partners. It's just about bridging this gap."
This foundation enables automated decision-making without human intervention in the approval process. The system can automatically validate whether:
An application is legitimate and properly registered
A destination partner is approved for specific data types
The requested data classification level matches partner approvals
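The three checks above can be sketched as one small policy function. This is a minimal illustration, not Block's actual implementation; the `APPLICATIONS` and `PARTNER_APPROVALS` tables and the `authorize_egress` helper are all invented for the example:

```python
# Stand-ins for the two sources of truth; contents are invented for illustration.
APPLICATIONS = {"payments-service": {"registered": True}}
PARTNER_APPROVALS = {"api.slack.com": {"max_dsl": 2}}  # highest approved data security level

def authorize_egress(app: str, destination: str, declared_dsl: int) -> bool:
    """Allow egress only when all three checks pass, with no human in the loop."""
    record = APPLICATIONS.get(app)
    if record is None or not record["registered"]:
        return False  # application is unknown or not properly registered
    partner = PARTNER_APPROVALS.get(destination)
    if partner is None:
        return False  # destination is not an approved partner
    return declared_dsl <= partner["max_dsl"]  # classification must be within approval
```

With tables like these, a request declaring data security level 4 toward a partner approved only up to level 2 is denied automatically, which is the kind of decision the system makes without a human approver.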
3 - Centralized Governance with Distributed Enforcement
Rather than forcing infrastructure teams to adopt new systems, Block's solution replaces existing data sources with centralized ones. "Whatever that current source of truth is where they're pulling API for their application to domain access... we are replacing it with our source of truth," Ramesh explains.
This approach provides several benefits:
Reduced friction: Infrastructure teams continue using familiar tools
Consistent policies: All environments pull from the same centralized source
Simplified governance: Security teams manage one system instead of many
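The "replace the data source, not the tooling" pattern can be sketched as follows. Names like `CENTRAL_EGRESS_SOURCE` and `approved_destinations` are invented here, assuming a simple application-to-destination mapping:

```python
# Hypothetical centralized source of truth mapping applications to approved
# egress destinations; application names are invented for illustration.
CENTRAL_EGRESS_SOURCE = {
    "payments-service": ["api.stripe.com", "api.slack.com"],
    "ml-pipeline": ["storage.googleapis.com"],
}

def approved_destinations(app: str) -> list:
    """Every enforcement point (proxy allowlist generator, firewall rule builder,
    service mesh policy) pulls from this one function instead of keeping its own list."""
    return CENTRAL_EGRESS_SOURCE.get(app, [])

# e.g. a proxy config generator and a cloud firewall module both call
# approved_destinations("payments-service") and stay consistent by construction.
```

The key design choice is that infrastructure teams keep their existing enforcement mechanisms; only the data feeding those mechanisms changes.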
4 - Data-Aware Access Control
Beyond traditional network-level controls, Block's system enforces data governance at the egress point. Users must declare what type of data they plan to share and at what classification level. The system then validates these declarations against partner approvals.
"We have different data security levels, but if the partner, let's say ChatGPT.com is only approved for data security level two, but if the user says... I want access to ChatGPT.com for my application, but I'm gonna send data security level four, it'll deny that," Ramesh explains.
This approach addresses both security and compliance requirements simultaneously, ensuring that sensitive data doesn't inadvertently flow to unauthorized destinations.
5 - Implementing at Scale: Lessons Learned
For organizations considering similar implementations, Ramesh emphasizes starting with source of truth establishment: "You need to have two sources of truth, a catalog of all the applications that you have, and a catalog of all the partners that you have approved."
The implementation should follow a staged approach:
Start small: Begin with a limited set of applications to validate the approach
Ensure infrastructure buy-in: Make the solution easy for infrastructure teams to adopt
Replace, don't rebuild: Integrate with existing systems rather than forcing wholesale changes
Automate governance: Remove manual approval processes wherever possible
6 - Incident Response Benefits
The centralized approach provides significant advantages during security incidents. "Let's say Slack comes up and says, 'Hey, we have been compromised'... our system knows that Slack is a product and Salesforce is the vendor. So now we are able to immediately... give us full visibility into all the applications that have access to api.slack.com," Ramesh explains.
This visibility enables rapid response and risk assessment, allowing security teams to quickly identify affected applications and disable access as needed.
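The reverse lookup that enables this is straightforward once all grants live in one place. A hedged sketch with invented application names:

```python
def applications_with_access(domain: str, egress_grants: dict) -> list:
    """Reverse lookup: which applications are approved to reach `domain`?"""
    return [app for app, dests in egress_grants.items() if domain in dests]

# Illustrative grants; in the scenario above, "api.slack.com" is the
# compromised destination we need to trace back to applications.
grants = {
    "payments-service": ["api.slack.com", "api.stripe.com"],
    "ml-pipeline": ["storage.googleapis.com"],
    "chat-bot": ["api.slack.com"],
}
print(applications_with_access("api.slack.com", grants))
# -> ['payments-service', 'chat-bot']
```

From that list, the security team can scope the blast radius and revoke access per application instead of hunting through per-team allowlists.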
🔗 RESOURCES
Cloud Security Architecture:
NIST Zero Trust Architecture - Comprehensive guide to zero trust principles
Cloud Security Alliance Zero Trust Maturity Model - Framework for implementing zero trust in cloud environments
Data Governance and Classification:
Microsoft Information Protection - Data classification and protection strategies
AWS Data Classification Guide - Best practices for data classification in AWS
Network Security Tools:
SPIFFE/SPIRE Documentation - Identity framework for cloud-native workloads
Open Policy Agent - Policy engine for cloud-native environments
A question for you (reply to this email):
What would you start with first when building an Escape-Proof Cloud Environment?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe, and welcome to the new members in this newsletter community💙
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast