- Cloud Security Newsletter
- Posts
- 🚨 Zero-Day Exploited in Hours + AI Agent Risk Lessons from CISO of Sendbird
🚨 Zero-Day Exploited in Hours + AI Agent Risk Lessons from CISO of Sendbird
This week's newsletter covers critical React Server Components vulnerability (CVE-2025-55182) under active exploitation by Chinese APT groups, record-breaking DDoS attacks, and exclusive insights from Sendbird CSO Yash Kosaraju on securing AI agents through multi-layered trust frameworks, enterprise LLM safeguards, and cultural transformation in the age of autonomous systems
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter Topic we cover - How to secure your AI Agents: A CISOs Journey (continue reading)
Incase, this is your 1st Cloud Security Newsletter! You are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week’s Cloud Security Newsletter
As enterprises race to integrate AI agents into production environments, we're witnessing a collision of traditional application security challenges with entirely new threat vectors. This week brings critical lessons from the frontlines: a React vulnerability exploited within hours of disclosure, infrastructure providers scrambling to protect customers, and practical guidance from organizations that have successfully navigated the transition from API-driven platforms to AI-first architectures.
I'm joined this week by Yash Kosaraju, Chief Security Officer at Sendbird, who brings a unique perspective on securing AI agents at scale. Sendbird transformed from a mature chat API platform into an AI agent company practically overnight a microcosm of the transformation many enterprises are experiencing right now. Yash shares candid insights on building security programs that balance innovation velocity with enterprise-grade protection, from embedding security engineers in AI development teams to redefining what constitutes a "security incident" when AI agents make suboptimal decisions. [Listen to the episode]
📰 TL;DR for Busy Readers
React2Shell (CVE-2025-55182): Critical 10.0 CVSS RCE in React Server Components exploited by Chinese APT groups within hours so patch immediately
Zero Trust ≠ Multi-Layered Trust: Yash explains why “controls fail” must be your foundational assumption.
AI Incidents ≠ Breaches: Wrong answers, poor decisions, and unexpected agent actions need new IR playbooks.
LLM Contracts Matter: Scrutinize training exclusions, data deletion clauses, and derivative model usage.
Culture > Tools: Security embedded in AI teams reduces Shadow AI and accelerates safe experimentation.
📰 THIS WEEK'S TOP 3 SECURITY HEADLINES
1. React2Shell: Critical Unauthenticated RCE Demands Immediate Action
A perfect 10.0 CVSS RCE in React Server Components is now under active exploitation by Earth Lamia and Jackpot Panda. Added to CISA KEV within 48 hours.
Action: Patch immediately. Inventory where React Server Components appear across your cloud estate.
Why it matters: Time-to-exploit is compressing. Automated detection + patch orchestration must be in place before disclosure hits.
Source: NVD, AWS Security Blog
2. Cloudflare Outage Tied to React2Shell Emergency Response
A 25-minute Cloudflare outage occurred during emergency firewall rule deployment.
Why it matters: Even security fixes can cause availability impact.
Action: Ask vendors for emergency mitigation SLAs and rollback procedures.
Source: The Cloudflare Blog, Reuters
3. Saviynt Raises $700M, Signaling Identity Consolidation Trend
Identity is the new perimeter, and boards are funding accordingly.
Action: Reevaluate identity governance maturity and JIT access controls.
Source: Wall Street Journal
🎯 Cloud Security Topic of the Week:
Building Multi-Layered Trust for AI Agents (with Sendbird
CISO - Yash Kosaraju)
The transition from traditional applications to AI-powered systems isn't just a technology shift it's a fundamental reimagining of security architecture, incident response, and organizational culture. This week's conversation with Yash Kosaraju reveals how Sendbird navigated this transformation while maintaining enterprise-grade security for customers deploying AI agents that take autonomous actions in production environments.
Featured Experts This Week 🎤
Yashvier Kosaraju, Chief Security Officer at Sendbird
Ashish Rajan - CISO | Co-Host AI Security Podcast , Host of Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
AI Agent: An autonomous system that uses large language models (LLMs) to understand context, make decisions, and execute actions without constant human oversight. Unlike traditional chatbots that respond to queries, AI agents can modify backend systems, process transactions, and orchestrate complex workflows.
Multi-Layered Security/Trust: A defense-in-depth approach assuming that individual security controls will eventually fail. Rather than relying on a single "zero trust" perimeter, organizations implement overlapping controls across device trust, identity verification, network access, application authorization, and data protection.
RAG (Retrieval-Augmented Generation): A technique where AI models query external knowledge bases before generating responses, improving accuracy and reducing hallucinations by grounding outputs in verified information.
Context Injection/Prompt Injection: Attack techniques where adversaries manipulate AI system inputs to override security constraints, extract sensitive data, or cause unintended actions. Similar to SQL injection but exploiting LLM processing logic rather than database queries.
This week's issue is sponsored by Drata
Security teams shouldn’t be buried in manual evidence collection.
Drata automates compliance end-to-end while providing unified visibility across cloud workloads, identities, and configurations.
Teams use Drata to cut audit prep from weeks to hours, accelerate security reviews, and reinforce DevSecOps pipelines with real-time controls monitoring.
If you're scaling cloud infrastructure and need a smarter path to continuous compliance, Drata is built for you.
💡Our Insights from this Practitioner 🔍
How to secure your AI Agents: A CISOs Journey (Full Episode here)
The Reality Check: AI Security Isn't Just Application Security 2.0
When Sendbird pivoted from a mature chat API platform to an AI agent company, the security team faced a challenge that extends far beyond securing APIs and databases. "You go from fine, we are building an application, go test for OWASP Top 10, go test for XSS, SQL injection," Yash explains. "That changes to a lot of different things. The attack paths are different. The types of security issues are different. The data security models are different."
This transformation mirrors what many enterprises are experiencing as they integrate AI capabilities. The comfortable world of known vulnerabilities and established testing frameworks gives way to questions about LLM hallucinations, context injection attacks, and data leakage through training corpora. Traditional AppSec engineers suddenly find themselves securing systems they don't fully understand a humbling experience that Yash addresses directly with his team.
The Hidden Stress on Security Teams: One of the most candid moments in our conversation came when Yash shared how a senior engineer approached him: "We are building this new AI stuff. We have to review and secure it. I don't know if I can do a good enough job with this." This vulnerability reflects a broader industry challenge that isn't discussed enough the expectation that experienced security professionals should instantly become AI security experts simply because AI is built on software.
Yash's response reveals his leadership philosophy: "Let's acknowledge it's new technology. We will make mistakes. It's okay to do that. We'll eventually catch up to it. A staff AppSec or principal engineer is not automatically a staff AI security engineer." This psychological safety creates space for teams to learn, experiment, and build expertise without the paralysis that comes from fear of failure.
Multi-Layered Trust: Beyond Zero Trust Marketing
When I asked Yash about his approach to zero trust, his response cut through the industry hype: "A true multi-layered approach doesn't use flashy marketing terms and call it zero trust. It's plain multi-layers of security."
The distinction matters. Zero trust has become such an overloaded term that it risks meaning nothing a checkbox on a compliance form rather than a meaningful security architecture. Yash's multi-layered trust framework at Sendbird is more pragmatic: assume every control will eventually fail, and design your architecture accordingly.
The Login Flow That Actually Works: Yash walks through Sendbird's employee authentication flow as a concrete example. When an engineer attempts to access GitHub, the system verifies:
The request originates from a company-issued device
The Chrome browser is enrolled in Google Enterprise
Okta performs device health checks via CrowdStrike integration
Multi-factor authentication using FIDO2/WebAuthn (YubiKey, fingerprint, or Okta Verify)
Password verification for users accessing sensitive systems in certain jurisdictions
"There are four or five layers that happen in the background that you as a user wouldn't even realize," Yash notes. This seamless security creates minimal friction while maintaining defense in depth if one control fails, four others remain.
For cloud security architects, this approach offers a template for designing authentication flows that don't rely on a single trust decision. The key insight is that AWS IAM, GitHub Enterprise, and Azure Active Directory aren't "secure by default" they're powerful platforms with security features that must be deliberately configured and layered.
The Data Question That Changes Everything
The most significant blind spot Yash identified in early AI adoption wasn't technical, it was contractual. "The major blind spot was the notion of 'yes, we're using GitHub Copilot. It's an enterprise tool backed by Microsoft, so it's secure. It's okay to use,'" he explains. "But as you look at it, the different terms of service for a beta model that Copilot has released versus a GA model it's important that you look at those, look at the nuances."
This observation should alarm enterprise security leaders. Many organizations are deploying "enterprise AI" tools without understanding fundamental questions:
Is the vendor using customer data to train their base LLM?
What contracts exist between your AI vendor and their LLM provider?
How is customer data handled during the training lifecycle?
What happens to your data when you terminate the contract?
The 360 Lifecycle of AI Data: Yash emphasizes that data governance in AI systems requires thinking through the complete lifecycle: "If and when I decide to terminate my contract, how do you delete my data? If it has been used in some LLM training context, even if that's just for my account or app that you use for training how does that work?"
These questions extend beyond traditional data processing agreements. LLM training creates derivative works that may contain fragments of your data in model weights. Simple deletion of source data may not remove its influence from the model. Enterprises need contractual guarantees about data isolation, training exclusions, and verifiable deletion requirements that many "enterprise" AI offerings don't yet provide.
Redefining Incidents When AI Makes Mistakes
Perhaps the most thought-provoking insight from our conversation concerns incident response. "The definition of what is an incident is also changing," Yash observes. "When an AI agent gives a wrong answer or a suboptimal answer, how do you classify that? It is an incident of some sort. It's not a breach. This is new."
Consider the implications: An AI customer service agent grants an unauthorized refund. A coding copilot introduces a subtle vulnerability that passes code review. An AI-powered analytics tool misinterprets data and influences a business decision. None of these are traditional security incidents, yet all represent failures with real impact.
The IR Team's New Challenge: Yash describes the complexity facing incident responders: "You add this other element where a call is going into a customer's environment, and then the origination of the incident either could be in our environment or somewhere in a customer's environment with a dotted line between the two. Figuring out how that works when you're in the middle of an incident, these unknowns, even though they may be small, cause big roadblocks."
Traditional incident response playbooks assume clear boundaries: our network, our applications, our data. AI agents that take actions across organizational boundaries blur these lines. When a Sendbird AI agent makes an API call to a customer's backend system and something goes wrong, who owns the investigation? Who has access to the necessary logs? What SLAs apply?
Sendbird's approach involves close collaboration between their incident response team and product teams to build new detection capabilities and response procedures. They're defining metrics for "AI incidents" that sit somewhere between operational errors and security breaches a new category that the industry hasn't yet standardized.
The Enterprise AI Toolkit Strategy
Rather than trying to prevent AI adoption through restrictions, Sendbird provides employees with enterprise-grade AI tools: Google Gemini, ChatGPT Teams (not Enterprise. Yash is deliberate about cost-benefit tradeoffs), Claude, Cursor, and GitHub Copilot. This strategy acknowledges that employees will use AI regardless of policy the question is whether they'll use sanctioned, contractually protected tools or shadow IT solutions.
"If you don't enable them by providing AI tools, they are going to find tools either pay out of pocket or use free versions to get the job done," Yash notes. The alternative to approved tools isn't no AI usage; it's unmonitored AI usage with unknown data handling practices.
The Communication Layer: Providing tools isn't enough. Sendbird accompanies their AI toolkit with clear communication about what's approved, what's blocked, and why. "We send out communications like 'here's why we are blocking it. Here's a reminder, these are the approved AI tools. If you want to try something out, be careful. Do not put real enterprise data.'"
This approach balances enablement with responsibility. Employees understand they can experiment with new AI tools, but they're expected to make risk-conscious decisions about data handling. When organizations treat employees as partners in security rather than potential threats, they typically get better outcomes than pure lockdown approaches.
Trust OS: Making AI Agent Actions Observable
Sendbird's Trust OS represents their answer to a fundamental challenge: how do you give customers confidence in AI agents that take autonomous actions? The platform provides observability into agent decisions, human oversight capabilities, and automated testing frameworks that let customers verify agent behavior before granting expanded permissions.
"It's assuring our customers of 'yes, there's an unknown AI that's performing actions, but here are all the ways you have oversight'" Yash explains. Customers can see conversations, review actions taken, configure permitted operations, and build test cases that alert when agent behavior changes.
This architecture acknowledges that trust in AI systems comes from transparency and control, not from claims about model accuracy. Enterprises deploying AI agents need to know: What is this agent doing? Can I review its decisions? Can I limit its capabilities? What happens when it makes a mistake?
The Testing Paradigm: One of Trust OS's key features is enabling customers to build automated test cases for agent behavior. If an agent should never discuss certain topics, never access particular data types, or always escalate specific request categories, customers can codify these rules and receive alerts when they're violated. This shifts AI governance from reactive incident response to proactive behavior monitoring.
The Skills Gap Nobody Talks About
Throughout our conversation, Yash returned to a theme that deserves more attention: the stress on security professionals expected to secure AI systems without adequate preparation. "A staff engineer, staff AppSec or principal engineer is not automatically a staff AI security engineer," he emphasizes. "It does put a decent amount of stress on AppSec engineers where companies are moving really fast on AI and they're expected to help secure them."
This gap extends beyond technical skills. Understanding LLM behavior, prompt injection attacks, and RAG implementations requires different mental models than traditional application security. The attack surface isn't just code anymore it's the interaction between code, models, prompts, context windows, and external data sources.
Yash's Advice to Overwhelmed Security Teams: "Let's acknowledge it's new technology. We will make mistakes. It's okay to do that. We'll eventually catch up to it. When SQL injections and XSS came out in the very early days, those were big deals and nobody knew exactly how they worked. Today we are much ahead of that game. We will eventually get to something similar on the AI front."
This perspective offers psychological relief to security teams feeling overwhelmed. The goal isn't perfect security from day one it's building capabilities iteratively while maintaining awareness of risks. Organizations that give security teams space to learn, experiment, and occasionally fail will develop more robust AI security programs than those that demand immediate expertise.
Learning AI with AI
One of the most practical insights Yash shared was almost throwaway: "You can use AI to learn AI." When new concepts emerge RAG, context windows, fine-tuning, mixture of experts security professionals don't need to wait for training courses or documentation. They can ask Claude, ChatGPT, or Gemini to explain concepts, provide examples, and answer follow-up questions.
Yash takes this further: "There's this Gemini app where you could have conversations with it. So I go on walks and I'm like, 'explain this to me.' It's a conversation with AI on anything and everything, and that could be how AI works."
For busy security leaders trying to stay current, this approach is remarkably efficient. Rather than blocking out hours to read documentation, you can integrate learning into your daily routine through conversational AI interfaces. The key is approaching these tools as learning aids rather than definitive sources verify important details, but use AI to accelerate your baseline understanding.
AI Security & LLM Guidance
OWASP Top 10 for LLM Applications - Comprehensive framework for AI application security
NIST AI Risk Management Framework - Federal guidance on managing AI risks
Anthropic: Claude's Constitutional AI - Understanding AI safety through design
Multi-Layered Security Architecture
Google BeyondCorp: A New Approach to Enterprise Security - Foundational zero trust architecture principles
AWS Security Best Practices for IAM - Multi-layered access control implementation
Incident Response for Modern Environments:
SANS Cloud Incident Response Framework - Adapting IR for cloud and AI systems
Atlassian Incident Management Handbook - Modern incident response processes
Cloud Security Podcast
For deeper discussion on failed data lakes, AI in detection engineering, and where SIEM still fits.
Question for you? (Reply to this email)
🤖 Is there an AI Incident that is not a breach?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]”
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙
Peace!
Was this forwarded to you? You can Sign up here, to join our growing readership.
Want to sponsor the next newsletter edition! Lets make it happen
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
checkout our sister podcast AI Security Podcast


