• Cloud Security Newsletter
  • Posts
  • 🚨 ShadowV2 Botnet Weaponizes AWS Docker & An Architect & Developer Share Lessons from Building AI in Cloud

🚨 ShadowV2 Botnet Weaponizes AWS Docker & An Architect & Developer Share Lessons from Building AI in Cloud

Bold new threats emerge as enterprises race to deploy AI services on AWS and Azure. Security leaders at healthcare giant Veradigm share hard-won lessons on securing AI workloads, managing cloud defaults, and building platform controls that protect against sophisticated attacks targeting cloud-native infrastructure.

Hello from the Cloud-verse!

This week’s Cloud Security Newsletter Topic we cover - Bridging the AI Security Gap: When Cloud Defaults Fail Enterprise Requirements (continue reading) 

This image was generated by AI. It's still experimental, so it might not be a perfect match!

Incase, this is your 1st Cloud Security Newsletter! You are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to Cloud Security Podcast & AI Security Podcast every week.

Welcome to this week’s Cloud Security Newsletter

This week brought sobering reminders that cloud-native infrastructure and AI services create expansive new attack surfaces. The discovery of the ShadowV2 botnet transforming misconfigured Docker containers into professional DDoS services coincided with critical vulnerabilities in Microsoft Entra ID and the emergence of prompt injection attacks targeting AI-powered security tools.

To understand how enterprises are navigating these challenges, we spoke with Kyler Middleton and Sai Gunaranjan from healthcare technology company Veradigm, who shared their experiences securing AI applications across AWS and Azure environments. Their insights reveal the security gaps between cloud provider defaults and enterprise requirements, while offering practical guidance for organizations deploying AI services at scale.

📰 TL;DR for Busy Readers

  • ShadowV2 botnet exploits misconfigured Docker containers on AWS, demonstrating how cloud misconfigurations become professional cybercrime infrastructure

  • Microsoft Entra ID vulnerability (CVE-2025-55241) could have enabled cross-tenant admin impersonation, highlighting identity security as the cloud control plane

  • Prompt injection attacks now target AI-powered security scanners, forcing organizations to rethink AI security strategies

  • Healthcare enterprise Veradigm shares lessons on securing AI workloads across AWS and Azure, revealing critical gaps in cloud provider defaults

  • Platform engineering approach essential for bridging security gaps between cloud defaults and enterprise requirements

📰 THIS WEEK'S SECURITY HEADLINES

1 - ShadowV2 Botnet Transforms DDoS Into Cloud-Native Subscription Service

Security researchers at Darktrace discovered the ShadowV2 botnet targeting misconfigured Docker containers on AWS cloud servers to deploy Go-based malware for DDoS attacks. The operation features a Python-based command-and-control framework hosted on GitHub Codespaces with a fully developed login panel and operator interface that resembles a "DDoS-as-a-service" platform, employing HTTP/2 Rapid Reset and Cloudflare UAM bypass techniques.

Why this matters: ShadowV2 represents the industrialization of cybercrime, where attackers provide customers with self-service DDoS capabilities through professional APIs and dashboards. This directly validates concerns raised by Veradigm's platform engineers about Docker security defaults. Cloud security teams must implement container security scanning, secure Docker API access, and monitor for unusual container lifecycle events.

2 - Critical Microsoft Entra ID Vulnerability Enabled Cross-Tenant Admin Impersonation

Microsoft patched a critical vulnerability (CVE-2025-55241) in Entra ID discovered in July 2025 that could have allowed attackers to gain complete administrative control over any tenant using legacy "Actor tokens" and API validation errors. The vulnerability enabled stealthy cross-tenant admin impersonation across Microsoft's global cloud infrastructure.

Why this matters: This represents one of the most severe cloud identity vulnerabilities ever disclosed, reinforcing that identity has become the cloud control plane. As Veradigm's team emphasizes, any bypass that forges tokens outside normal logging undermines zero-trust assumptions and conditional access controls. Organizations should audit Entra ID configurations and implement provided detection queries immediately.

Prompt Injection Attacks Target AI-Powered Email Security Scanners

StrongestLayer researchers detected a malicious email campaign using prompt injection attacks to bypass AI-driven security scanners. The "Chameleon's Trap" attack included hidden HTML instructions directing LLMs to classify malicious emails as "benign" while exploiting the Follina vulnerability in attached files.

Why this matters: This attack demonstrates how cybercriminals adapt to AI-powered security defenses by weaponizing prompt injection techniques. As organizations deploy AI-driven security tools, they become vulnerable to novel manipulation tactics that traditional security approaches cannot address.

Source: SC Media

European Airport Chaos Linked to Collins Aerospace Ransomware

ENISA confirmed ransomware behind disruptions at major EU airports including Heathrow, Brussels, and Berlin, caused by an incident impacting Collins Aerospace's MUSE check-in/boarding software. The third-party vendor compromise cascaded into critical infrastructure failures.

Why this matters: This exemplifies classic SaaS dependency failures cascading into critical infrastructure. Even well-secured organizations face operational risk from upstream vendor compromises. Organizations must inventory mission-critical SaaS dependencies and require incident communication SLAs.

🎯 Cloud Security Topic of the Week:

Bridging the AI Security Gap: When Cloud Defaults Fail Enterprise Requirements

The rapid deployment of AI services has exposed a fundamental tension between cloud provider velocity and enterprise security requirements. As demonstrated by recent attacks and revealed through conversations with platform engineers at healthcare technology company Veradigm, the default configurations for AI services often prioritize ease of use over security, creating dangerous gaps that organizations must address through platform engineering approaches.

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • Actor Tokens: Undocumented, internal-use tokens that Microsoft services use to communicate with each other on behalf of users, which became the attack vector in CVE-2025-55241

  • Agentic AI: AI systems that act with autonomy, using tools and pursuing broader goals over extended periods rather than simple question-answer interactions

  • DDoS-as-a-Service: Professional cybercrime platforms that provide distributed denial-of-service capabilities through subscription models with APIs and user interfaces

  • Knowledge Bases: Industry-standard systems for staging data as vectors in databases, enabling AI models to access and reason about private organizational information

  • Prompt Injection: Manipulation of AI model behavior through crafted inputs designed to override system instructions and security controls

  • RAG (Retrieval Augmented Generation): Technique allowing AI models to access external data sources by converting documents into searchable vector embeddings

💡Our Insights from this Practitioner 🔍

The Reality of AI Security in Regulated Industries

Healthcare technology company Veradigm's approach to AI security reveals the complex challenges facing enterprises deploying AI services at scale. Unlike theoretical security frameworks, the experience shared by Kyler & Sai demonstrates the practical trade-offs between developer velocity and security controls.

"The cloud providers aren't looking out for you as much as they should, and the wizards aren't looking out for you," Kyler Middleton explains. This observation captures a critical reality: cloud providers are shipping AI services with defaults optimized for adoption rather than security, creating gaps that platform teams must bridge.

AWS vs Azure: A Security-First Comparison

The conversation reveals meaningful differences between cloud providers' approaches to AI security. Sai Gunaranjan notes a particular challenge with Azure's default behavior: "Microsoft, at least Azure defaults, to send your data globally to any available compute for processing purposes. We actually have to block those as well."

This highlights how cost optimization features can conflict with data governance requirements. While Azure provides warnings about global data processing, the default configuration prioritizes cost efficiency over geographic data controls, requiring explicit policy interventions.

Kyler offers perspective on AWS Bedrock's logging challenges: "The serverless models on AWS bedrock are great. You can get started right away, but because there's no deployment to target, the logs just all get combined into one bucket and there's no rate limiting." This architectural choice complicates incident response and audit requirements, forcing organizations to implement application-specific IAM roles for proper attribution.

Platform Engineering as Security Strategy

Both practitioners emphasize platform engineering approaches to address AI security gaps. Rather than restricting developer access, Veradigm enables self-service capabilities while implementing security guardrails through templates and policies.

"We let them do whatever they want to do and then that's where the defaults become a bit more scary and you have to actually have policies or other things that actually block them," Sai explains. This approach maintains developer velocity while enforcing security boundaries through automation rather than process friction.

The AI Trust Problem

A particularly concerning insight emerges around AI trustworthiness in operational contexts. Kyler shares a revealing example: "One of the funny classic examples is you have an alert that pops and says the database is overloaded, and the AI is like, oh my goodness, the front end is doing this. I delete the front end and the database is saved."

This highlights a fundamental challenge with agentic AI systems: they solve problems literally rather than contextually. The technical solution (deleting the front end stops database load) conflicts with business requirements (users need the application).

Model Selection and Bias Considerations

The choice of AI models carries both technical and geopolitical implications. Kyler notes: "If you choose a model that came out of a region of the world that likes censorship, then it might steer you towards censored ideas or it might refuse to talk about some things you need to talk about."

In healthcare contexts, this extends beyond political considerations to clinical accuracy. Models trained on biased datasets can perpetuate harmful stereotypes or provide inappropriate medical guidance, creating liability and patient safety concerns.

Practical Implementation Patterns

Veradigm's implementation reveals common architectural patterns for enterprise AI deployment:

  • Transactional AI: Serverless functions handling question-answer interactions with knowledge bases, optimized for cost and simplicity.

  • Agentic AI: More complex systems given tools and broader goals, requiring careful permission management and human oversight.

  • Embedded AI: Product integrations where AI enhances existing workflows rather than creating new user interfaces.

For regulated industries, the embedded approach proves particularly valuable. As Sai explains: "Most of them are kind of product integrations, so where we are being extremely cautious when we roll something out and kind of have a lot of checkpoints."

The Skills Gap Bridge

The conversation addresses whether cloud security professionals can transition to AI security roles. Kyler identifies both familiar and novel challenges: "There are just the sort of vanilla platform engineer cloud challenges you are working with. IAM. You're working with cloud resources. You need to do logging and traceability... But there's these totally novel problems that you haven't had to face before."

The novel challenges include bias detection, model validation, and understanding when AI systems fail in subtle ways. Traditional security skills provide the foundation, but AI-specific expertise requires additional learning and experimentation.

Question for you? (Reply to this email)

It is possible to find balance between security and velocity when it comes to AI? 

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]”

We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.

Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙

Peace!

Was this forwarded to you? You can Sign up here, to join our growing readership.

Want to sponsor the next newsletter edition! Lets make it happen

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

checkout our sister podcast AI Security Podcast