• Cloud Security Newsletter
  • Posts
  • ⚙️ From Asahi’s Ransomware Recovery to Google’s AI Bug Bounty -The SOC’s Big 2025 Reboot

⚙️ From Asahi’s Ransomware Recovery to Google’s AI Bug Bounty -The SOC’s Big 2025 Reboot

Asahi’s ransomware recovery and Google’s AI Vulnerability Reward Program highlight how threat and defense are evolving together. Forrester’s Allie Mellen and Cloud Security Podcast host Ashish Rajan share what a modern SOC looks like in 2025 automated, AI-assisted, and built on detection engineering, not ticket queues. How detection engineering and AI agents are transforming security operations in 2025.

Hello from the Cloud-verse!

This week’s Cloud Security Newsletter Topic we cover - The Truth About AI in the SOC: From Alert Fatigue to Detection Engineering (continue reading) 

This image was generated by AI. It's still experimental, so it might not be a perfect match!

Incase, this is your 1st Cloud Security Newsletter! You are in good company!
You are reading this issue along with your friends and colleagues from companies like Netflix, Citi, JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to Cloud Security Podcast & AI Security Podcast every week.

Welcome to this week’s Cloud Security Newsletter

This week in cloud and cyber, a ransomware recovery, a massive consulting leak, and a fresh wave of AI-driven initiatives redefine the operational risk landscape.

We examine how Asahi Group’s ransomware recovery, Red Hat’s GitLab breach, and Google’s AI Bug Bounty reveal the dual challenge of defending hybrid infrastructure while embracing generative AI.

Guiding this week’s theme are insights from Allie Mellen (Principal Analyst, Forrester) and Ashish Rajan (Cloud Security Podcast), who explore what they call the “SOC reset.
This is a moment of reset… the next five years are gonna be wild,” Allie shared, reflecting on how the role of detection engineering and AI are reshaping security operations.

📰 TL;DR for Busy Readers

  • Asahi’s ransomware recovery underscores the cost of weak OT–cloud segmentation.

  • Red Hat Consulting’s GitLab breach is a wake-up call for secret sprawl in vendor repositories.

  • Google’s New AI bug bounty sets the stage for responsible AI security testing.

  • SOC modernization means replacing alert queues with detection engineering + task-specific AI agents.

  • Information sharing gaps caused by the U.S. government shutdown could hinder threat visibility; strengthen private channels.

📰 THIS WEEK'S SECURITY HEADLINES

1) Red Hat Consulting GitLab breach exposes 28k repos and 800+ client projects

What happened: A threat group calling itself “Crimson Collective” claims to have exfiltrated 570GB of data from Red Hat’s internal GitLab system, including customer source code, cloud configurations, and API keys. Red Hat has confirmed the incident and initiated rotations.

Why it matters: Consulting repositories often store privileged cloud access data, CI/CD secrets, and customer network maps making them high-value breach amplifiers.

Action: Rotate all credentials shared with Red Hat since 2020, revoke aged tokens, and ensure that contracts mandate time-based key expiry and minimal retention of customer data.

Sources: Dark Reading, BleepingComputer, SecurityWeek, Red Hat Official Statement

2) U.S. Cybersecurity Information Sharing Act expires amid government shutdown

What happened: The U.S. government’s temporary shutdown caused a lapse in the law granting liability protection for private-sector threat-intelligence sharing. Legal advisors warn this could reduce participation by up to 80% until renewal.

Why it matters: Enterprises depending on ISAC/ISAO threat feeds may see slower sharing and weaker correlation visibility.

Action: Use bilateral intelligence-sharing NDAs, reinforce internal intel routing, and diversify with commercial feeds and automated peer exchanges.

Sources: Washington Post, World Economic Forum

3) Qilin ransomware halts Asahi Group beer production; factories restart

What happened: The Qilin ransomware gang crippled Asahi’s production operations in Japan and Europe last week. The company has restored operations after isolating OT networks and rebuilding key systems.

Why it matters: Asahi’s experience reinforces that OT–IT convergence can magnify cloud risk. Cloud backups, API tokens, and telemetry data are frequent pivots for ransomware groups.

Action: Segment OT environments, apply immutable storage controls (S3 Object Lock / Azure Immutable Blob), and rotate long-lived service credentials.

Sources: Reuters, The Record

4) DraftKings reports new wave of credential-stuffing attacks

What happened: Sports betting platform DraftKings detected mass credential-stuffing campaigns using reused passwords and automated bots. The company enforced password resets and additional MFA challenges.

Why it matters: Consumer cloud apps remain low-hanging fruit for account takeover; credential reuse continues to undermine MFA adoption.

Action: Implement adaptive MFA, WebAuthn for high-risk actions, and layered rate-limiting on login APIs.

Sources: Reuters, BleepingComputer

5) Discord breach linked to compromised Zendesk third-party provider

What happened: Attackers compromised a third-party Zendesk instance used by Discord’s support team, accessing user emails and ticket data. Attribution points to the Scattered Lapsus$ Hunters group.

Why it matters: SaaS support ecosystems can become indirect threat paths; sub-processor security is an often-overlooked part of vendor governance.

Action: Require sub-processors to use SSO + FIDO2, apply retention limits, and add DLP controls to ticketing systems.

Sources: Check Point Research

6) Google launches AI Vulnerability Reward Program

What happened: Google introduced a dedicated AI bug bounty, offering up to $30,000 for findings such as prompt-injection exploits or unsafe agent actions in Gemini, Search, Workspace, and Google Home.

Why it matters: It formalizes a security-testing channel for generative AI acknowledging that AI-driven features are now a core enterprise attack surface.

Action: Include prompt-injection testing in application security programs, adopt policy-as-code to restrict model tool access, and define AI QA gates before production release.

Sources: The Verge, Google Security Blog

🎯 Cloud Security Topic of the Week:

SOC 2025: From Alert Queues to Detection Engineering and Task-Specific AI Agents

Traditional L1/L2/L3 SOC tiers are collapsing under alert fatigue and data deluge. Modern teams are shifting to detection engineering, data pipeline optimization, and AI-driven assistance.

No one knows how to secure AI… this is a moment of reset,” said Allie Mellen, noting that GenAI will rewrite how security teams manage data and automation.

Ashish Rajan echoed the same sentiment: “AI should reduce L1 toil so people graduate to L2 work context building, incident narrative, and detection creation.

Key transformation patterns:

  • Flattened SOC structure: Move from queue-based triage to cross-functional detection pods that own rules, telemetry, and automation.

  • Data-driven focus: Align log ingestion to active detections and cost efficiency feed only what fuels high-confidence analytics.

  • AI as augmentation: Build specialized agents for triage and enrichment, not chatbots; each must have human oversight and explainability.

  • Governance for AI tools: Enforce “least agency” for AI restrict agent tool access, monitor decisions, and track data provenance.

30-minute actions:

  1. Map detections to telemetry sources; drop unused feeds to a cheaper lake tier.

  2. Define least-agency policies for internal or vendor AI integrations.

  3. Convert repetitive alert playbooks (phishing or brute-force triage) into task agents with QA oversight.

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • Detection Engineering: Analysts who both create and maintain detections; a hybrid of developer and responder.

  • Security Data Lake: Structured, cost-efficient telemetry storage aligned with OCSF and long-term analytics.

  • Agentic AI: Task-specific AI built for defined security workflows with explainability and guardrails.

  • Least Agency: Limiting AI or automation access to only necessary tools and scopes.

  • AI Observability: Capturing prompt inputs, tool calls, and decisions for monitoring and auditing model behavior.

💡Our Insights from this Practitioner 🔍

The Great SOC Reset: From Human Bottlenecks to Human-Guided Automation

Security operations in 2025 are at a breaking point. The sheer velocity of alerts, hybrid data sources, and AI-generated telemetry has outpaced the human SOC’s linear model. As Allie Mellen puts it, “No one knows how to secure AI… this is a moment of reset.”

This reset isn’t about replacing people, it's about re-architecting how they operate. The modern SOC must behave like an engineering team: building, testing, and shipping detections and automations the same way developers ship code.

1️⃣ Why Traditional SOC Models Are Failing

Forrester’s latest field data shows the L1/L2/L3 hierarchy breaks under cloud-scale telemetry. Analysts spend 60–70% of their time on enrichment, correlation, and false-positive filtering tasks that machines can now automate.

Allie notes that “the structure evolving to detection engineer should stay consistent regardless of AI.” This means moving from a tiered hierarchy to cross-functional detection pods that own their use cases end-to-end: data sourcing, rule logic, validation, and metrics.

When incidents like Asahi’s ransomware outbreak occur, success depends not on alert volume but on how fast those pods can pivot detections from IT to OT telemetry proving architecture beats manpower.

2️⃣ The Rise of the Detection Engineer

Detection engineers blend developer discipline with responder intuition. They treat rules like code, apply CI/CD pipelines for detection deployment, and test every change against a known dataset before it hits production.

Ashish Rajan explains that this shift “reduces L1 toil so people graduate to L2 work context building, incident narrative, and detection creation.” In practice, it means every alert pipeline has an owner, every rule has version control, and metrics focus on mean time to learning (MTTL) rather than mean time to respond.

3️⃣ Building Credibility for AI in the SOC

As enterprises experiment with AI triage agents, credibility is the currency. If an AI falsely dismisses a true incident twice, analysts lose trust and automation adoption stalls.
Allie warns that “efficacy is incredibly important… and we don’t really have the automated testing infrastructure” for AI security agents yet.

High-maturity teams are tackling this by building “golden datasets” recorded investigations that serve as QA baselines for AI output. Every week, they replay those cases, compare AI vs. human decisions, and publish variance metrics in the SOC dashboard. The result: agents that actually earn analyst trust through repeatable performance.

4️⃣ The Data Engineering Imperative

AI doesn’t fix bad data pipelines. Both experts emphasized that SIEM performance, false-positive rates, and automation fidelity depend on clean, contextualized telemetry.
Detection engineers are applying data engineering practices: routing, normalization, tokenization, and redaction at ingestion time to ensure that what flows into the SOC is fit for purpose.

This model popularized in cloud-native organizations like Netflix and Block makes data lineage traceable across tools and enables AI triage with reliable context.

5️⃣ From Alert Fatigue to AI-Assisted Focus

When applied correctly, AI agents aren’t replacing analysts they’re removing the noise that prevents analysts from thinking.
A large financial institution Allie recently studied used a task-specific agent for phishing triage: it handled header analysis, URL detonation, and enrichment autonomously, forwarding only 8% of cases to humans. That reclaimed hundreds of analyst hours per week and shifted human focus to incident narrative and adversary tracking.

As Ashish framed it, “AI should help you tell better stories, not just faster ones.”

6️⃣ Governance: The “Least Agency” Principle

AI’s power lies in action; its risk lies in overreach. The best teams are adopting what Allie calls “least agency” a principle limiting agents to the smallest possible tool and data scope needed for their role.
Every AI integration is reviewed like a service account: time-boxed credentials, explicit tool allow-lists, and auditable prompt changes. This policy mindset transforms AI from a security risk to a security multiplier.

🧭 What Security Leaders Should Do Now

Week 1: Identify your top three manual SOC workflows (e.g., phishing triage, credential abuse, log correlation). Document inputs/outputs.
Week 2: Build a sandbox AI agent to automate 50% of steps no production access yet.
Week 3: Implement “golden dataset” QA tests and add AI observability (log prompts, actions, decisions).
Week 4: Formalize a least-agency governance policy for any AI automation or vendor integration.

The leaders who operationalize these four steps aren’t just adopting AI they’re redesigning their SOC to be human-directed, AI-accelerated, and data-driven.

As Allie concluded, “This is a reset that redefines what we even mean by security operations.”

Question for you? (Reply to this email)

⚙️ If you could automate just one SOC playbook with AI today, which would it be and why?

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]”

We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.

Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙

Peace!

Was this forwarded to you? You can Sign up here, to join our growing readership.

Want to sponsor the next newsletter edition! Lets make it happen

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

checkout our sister podcast AI Security Podcast