🚨 Why Palo Alto's $3.35B Observability Bet Signals the End of Vulnerability Management
This week's newsletter explores the strategic shift from siloed vulnerability management to unified exposure management, featuring insights from Brad Hibbert (COO & Chief Strategy Officer at Brinqa) on how enterprises can reduce risk at scale, plus analysis of major security acquisitions that signal the future of platform consolidation and AI-driven security operations.
Hello from the Cloud-verse!
This week’s Cloud Security Newsletter topic: From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game (continue reading)
In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, Capital One, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who subscribe to learn what's new in cloud security each week from their industry peers, and many of whom also listen to Cloud Security Podcast & AI Security Podcast every week.
Welcome to this week’s Cloud Security Newsletter
The security industry is quietly abandoning vulnerability management — and Palo Alto’s $3.35B observability acquisition makes that impossible to ignore.
In cloud-native environments, patch lists and CVSS scores no longer reflect real business risk. Exposure now spans infrastructure, identity, APIs, SaaS, containers, and increasingly, autonomous AI systems. Recent acquisitions by Palo Alto and Varonis signal a structural shift: from counting vulnerabilities to orchestrating risk reduction across business services.
This week, we're examining this evolution through two lenses: major vendor consolidation moves that signal where the market is heading (Palo Alto's $3.35B Chronosphere acquisition and Varonis' $125M AllTrue.ai purchase), and practical guidance from Brad Hibbert, a 20-year security veteran who has watched vulnerability management mature from simple patch lists to enterprise-wide exposure orchestration.
Brad brings a unique perspective, having worked across vulnerability management, privileged access management, and third-party risk before landing at exposure management. His insights reveal why the industry is shifting from reporting vulnerabilities to orchestrating remediation, and how organizations can make this transition without losing the domain knowledge they've built over years. [Listen to the episode]
📰 TL;DR for Busy Readers
Palo Alto's $3.35B bet on observability: Unifying security + observability signals shift to AI-driven, autonomous remediation at scale
Varonis acquires AI TRiSM: Data security vendors now own AI agent governance; expect policy enforcement over prompts, connectors, and sensitive data access
Exposure management is the new standard: Moving beyond risk-based vulnerability management to business service-aligned risk reduction
Trust before automation: AI-powered remediation only works when built on normalized data and explainable models
Start small, prove impact: Pick critical services, demonstrate risk reduction, then scale; don't boil the ocean
“Exposure management isn’t about more data. It’s about deciding what actually matters and mobilizing teams to fix it.” — Brad Hibbert, COO & CSO, Brinqa
📰 THIS WEEK'S TOP 5 SECURITY HEADLINES
Each story includes why it matters and what to do next — no vendor fluff.
1. 📈 Palo Alto Networks Completes $3.35B Chronosphere Acquisition
What Happened: Palo Alto Networks finalized its acquisition of Chronosphere, a cloud-native observability platform and 2025 Gartner Magic Quadrant leader, in a $3.35 billion deal originally announced in November 2025. Chronosphere's telemetry pipeline reduces data volumes by approximately 30% and requires 20x less infrastructure than legacy observability tools. The company plans to integrate Chronosphere with its Cortex AgentiX platform, enabling AI agents to automatically detect and remediate security and IT issues across applications, infrastructure, and AI systems.
Why This Matters: This acquisition represents Palo Alto's strategic response to the "data tax" problem plaguing modern security operations. For cloud security teams managing massive telemetry volumes across containers, microservices, and distributed systems, the integration of observability and security signals a fundamental shift from reactive detection to proactive, AI-driven remediation.
Consider the implications for your security architecture:
Platform consolidation pressures: Organizations will face strategic decisions about continuing standalone observability tools (Datadog, New Relic, Dynatrace) versus adopting Palo Alto's unified platform. This creates both opportunity (simplified vendor management) and risk (increased vendor lock-in).
Cost optimization at scale: The 30% data reduction capability directly addresses escalating SIEM/SOAR data ingestion costs, a pain point for every enterprise dealing with petabytes of telemetry. Security teams should evaluate how this affects their data pipeline strategies and storage costs.
Autonomous incident response: The Cortex AgentiX integration promises to reduce MTTR through autonomous remediation. However, as Brad Hibbert notes in our feature interview, "You want it to speed the right things up. So you want to make sure that your AI is based on a sound data foundation."
Vendor concentration dynamics: Combined with Palo Alto's pending $25B CyberArk acquisition, this positions them as a dominant force in platform-based security. CISOs should assess dependency risks and negotiate accordingly.
2. 🤖 Varonis Acquires AllTrue.ai for $125M: AI TRiSM Meets Data Security
What Happened: On February 3, 2026, Varonis Systems announced its acquisition of AllTrue.ai, an AI Trust, Risk, and Security Management (AI TRiSM) platform, for $125 million in an all-cash deal. AllTrue.ai provides real-time visibility and governance for AI systems across enterprises, enabling organizations to inventory AI systems, understand their intent and connections, control AI behavior in real-time, and prove accountability for governance and compliance. The acquisition addresses the challenge of autonomous AI systems (models, copilots, and agents) operating at machine speed without clear visibility or guardrails.
Why This Matters: This acquisition positions Varonis at the forefront of an emerging security category: securing AI agents and autonomous systems that act on enterprise data. As organizations deploy GenAI tools, chatbots, and AI agents at scale, these systems move beyond passive data analysis to making autonomous decisions and taking actions, creating a fundamentally new risk profile.
For cloud security practitioners, this signals several critical trends:
AI security evolution: The focus is shifting beyond prompt injection and model attacks to include governance, behavior monitoring, and accountability for AI actions. This aligns with Brad Hibbert's observation that "teams are drowning in data" and need better ways to "make connections that as a human is difficult to make."
Data security vendors as AI gatekeepers: Companies with visibility into data access patterns are positioning themselves as the logical owners of AI security. Expect AI governance controls (model risk monitoring, exposure controls, usage policies) to converge with data security posture (SaaS + cloud data stores).
Least-privilege extends to AI: Traditional identity and access management principles must now apply to AI systems, not just users. Organizations should start mapping where LLM/agent workflows touch cloud data (S3, Azure Blob, Google Drive, SaaS apps) and decide whether AI controls belong in DSPM/CASB, IAM, or a dedicated AI security layer.
Compliance and auditability: As AI systems make autonomous decisions affecting business operations, proving compliance becomes critical. Organizations need audit trails for AI decisions, especially where AI touches regulated data.
Action for defenders: Prioritize establishing inventories of AI systems and their data access, implementing real-time monitoring for AI agent behavior, defining policies for acceptable AI actions, and ensuring audit trails for AI decisions.
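One way to make that first action concrete is a lightweight AI-system inventory that flags agents touching data stores with no documented approval. This is a minimal sketch, not any vendor's API; the agent names, store URIs, and fields are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical inventory records for AI agents; all names are illustrative.
@dataclass
class AIAgent:
    name: str
    owner: str
    data_stores: list = field(default_factory=list)   # e.g. ["s3://tickets"]

def flag_unreviewed_access(agents, approved_stores):
    """Return (agent, store) pairs touching stores with no documented approval."""
    return [(a.name, s) for a in agents for s in a.data_stores
            if s not in approved_stores]

agents = [
    AIAgent("support-copilot", "cx-team", ["s3://tickets", "s3://customer-exports"]),
    AIAgent("code-reviewer", "platform", ["github://org/repo"]),
]
approved = {"s3://tickets", "github://org/repo"}
print(flag_unreviewed_access(agents, approved))
# flags support-copilot's undocumented access to s3://customer-exports
```

Even a spreadsheet-grade inventory like this answers the first governance question: which agents can reach which data, and who signed off on it.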
3. 💰 Thoma Bravo Explores Sale of Imprivata (Up to ~$7B Valuation)
What Happened: Reuters reports that private equity firm Thoma Bravo is exploring a sale of healthcare identity vendor Imprivata, potentially valuing it at up to $7 billion. Imprivata sits at the intersection of healthcare identity, privileged workflows, and regulated access, often integrated into hybrid cloud and SaaS environments.
Why This Matters: A transaction of this scale signals continued consolidation in the IAM space, particularly around specialized identity solutions for regulated industries. For enterprises relying on Imprivata or similar "workflow IAM" solutions, this potential ownership change could impact product roadmaps, pricing models, and integration strategies with major identity platforms like Microsoft Entra ID, Okta, and PAM solutions.
This relates directly to the exposure management theme: as identity becomes increasingly central to security architecture, organizations need to understand their dependencies on identity vendors and ensure they have contingency plans. As Brad Hibbert emphasizes, "It's not just about the server, it's about the services...those services could be spread across multiple processing units and storage units."
Action for defenders: If you rely on Imprivata or similar solutions, confirm exportability of identity and audit data, validate integration roadmaps with major identity providers, and build vendor-change risk into your 2026 identity program planning.
Source: Reuters
4. ☁️ Cloud Provider Security Updates Worth Acting On
What Happened: Several notable security feature updates were announced across major cloud providers this week:
AWS CloudFront: Added mTLS to origins, enabling true end-to-end client authentication patterns beyond viewer-to-edge
AWS EC2/VPC: Introduced "Related resources" view for security groups to reduce misconfiguration risk during rule changes
Microsoft Defender for Cloud: Highlighted Microsoft Security Private Link (public preview) to keep Defender traffic on private connectivity
Google Cloud (Apigee): Shipped Advanced API Security updates supporting richer condition logic in security actions
Why This Matters: These are "quiet" changes that materially improve control-plane security (private telemetry paths), edge-to-origin trust, and safe change management for network controls. In the context of exposure management, these updates represent the kind of incremental security improvements that reduce attack surface without requiring major architectural changes.
Brad Hibbert's perspective is relevant here: exposure management is about "not just about the exposures, but exposures that could impact your business. Are those exposures in your environment? Are they reachable? Are they exploitable? If they are exploited, what's the blast radius?" These cloud provider updates help reduce both reachability and exploitability.
Actions for defenders:
If you operate CloudFront with sensitive origins, evaluate mTLS-to-origin as a hardening lever for internal services and partner integrations
In Azure, assess Private Link options for security tooling connectivity patterns to reduce exposure and simplify egress controls
Review Google Cloud Apigee updates if you're managing API security at scale
5. 🤖 Agentic AI Security: Exposed Control Panels + Prompt Injection = One-Click RCE
What Happened: Axios reported security concerns around an open-source autonomous agent (Moltbot), including exposed/misconfigured control panels and susceptibility to prompt injection. Separately, a high-severity OpenClaw flaw was disclosed enabling one-click RCE via a malicious link (now patched). Security researcher Bruce Schneier also highlighted "indirect prompt injection" as a growing attack class.
Why This Matters: Agents frequently run with broad API keys, SaaS tokens, and cloud credentials. Prompt injection turns "content" (emails, tickets, docs, web pages) into a command channel that can exfiltrate secrets or trigger destructive actions. This is the next wave of supply chain attacks, but at the workflow layer: attackers target what the agent reads and what tools it can use.
This connects directly to our main theme: as organizations adopt AI-powered automation in their security operations (including exposure management and remediation), they must understand the security implications. Brad Hibbert cautions: "You have to trust the opinions that these AI models are producing...automation works well for known patterns...but if it's risky change, if it's something where it could have an impact on a critical service...you may not automate it 100%. You still might want a human in the loop."
Actions for defenders:
Enforce tool-scoped permissions with least-privilege API keys for all AI agents
Isolate agent execution environments from production systems
Require human approval gates for high-risk actions (data export, IAM changes, code deployments)
Add monitoring for agent-initiated abnormal access patterns (bulk reads, new destinations, unusual OAuth scopes)
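The approval-gate item above can be sketched as a simple dispatcher that auto-runs low-risk tool calls and queues high-risk ones for a human. This is an illustrative pattern under assumed action names, not a specific agent framework's API.

```python
# Hypothetical risk tiers for agent tool calls; tune to your environment.
HIGH_RISK_ACTIONS = {"data_export", "iam_change", "code_deploy"}

def dispatch(action, params, approved_by=None):
    """Route an agent-requested action: auto-run low-risk, queue high-risk."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        return {"status": "pending_approval", "action": action}
    return {"status": "executed", "action": action, "params": params}

print(dispatch("read_ticket", {"id": 42}))                  # executed immediately
print(dispatch("iam_change", {"role": "admin"}))            # held for approval
print(dispatch("iam_change", {"role": "admin"}, "alice"))   # executed once approved
```

The key design choice is that the gate lives outside the agent: even a fully prompt-injected model cannot skip it, because the check runs in the dispatcher, not in the prompt.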
Sources: Axios, The Hacker News, Schneier on Security, Tenable
🎯 Cloud Security Topic of the Week:
From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game
The security industry has a dirty secret: most organizations are drowning in vulnerability data but starving for actionable insights. According to Brad Hibbert, who has spent 20 years watching vulnerability management evolve from quarterly server scans to AI-powered exposure orchestration, the fundamental problem isn't finding vulnerabilities; it's deciding what to do about them.
"It really wasn't about the tools," Hibbert explains. "It was really about decision clarity and kind of moving beyond the tools."
This week, we're exploring why leading enterprises are moving from risk-based vulnerability management to exposure management, and what this means for cloud security teams trying to protect increasingly complex environments spanning infrastructure, applications, containers, identities, and AI systems.
Featured Experts This Week 🎤
Brad Hibbert - COO & Chief Strategy Officer, Brinqa
Ashish Rajan - CISO | Co-Host of AI Security Podcast, Host of Cloud Security Podcast
Definitions and Core Concepts 📚
Before diving into our insights, let's clarify some key terms:
Risk-Based Vulnerability Management (RBVM): An approach that prioritizes vulnerabilities based on context like asset criticality, threat intelligence, and CVSS scores. Typically operates within individual tools and teams, providing prioritized lists of exposures but remaining siloed by domain (infrastructure, applications, cloud, etc.).
Exposure Management: A holistic, enterprise-wide approach that sits above individual security tools to normalize, correlate, and prioritize risks across all domains. Focuses on business outcomes (risk reduction to critical services) rather than metrics (number of vulnerabilities closed). Emphasizes decision orchestration and remediation coordination across teams.
Business Services vs. Assets: In exposure management, the unit of analysis shifts from individual assets (servers, containers, VMs) to business services (customer-facing applications, internal workflows, revenue-generating systems). A single business service may span multiple assets, and a single asset may support multiple services.
Remediation Owner vs. Risk Owner: Risk owners (typically service owners or business unit leaders) decide what risk to accept or remediate based on business impact. Remediation owners (infrastructure, cloud, AppSec teams) perform the actual fixes. This separation is critical for effective exposure management.
Decision Orchestration vs. Data Orchestration: Data orchestration normalizes and aggregates vulnerability data from multiple sources. Decision orchestration provides opinionated recommendations on what to fix based on business context, reachability, exploitability, and blast radius, enabling faster, more confident action.
Explainable AI in Security: AI models that provide clear reasoning for their recommendations, allowing security teams to understand why a particular vulnerability or exposure is prioritized. Critical for building trust in AI-driven security decisions.
This week's issue is sponsored by Prowler
The world’s most widely adopted open cloud security platform
Trusted by modern cloud security teams, Prowler detects vulnerabilities and misconfigurations, prioritizes risk, accelerates remediation, and delivers audit-ready compliance reports.
With 44M+ downloads, 12K+ GitHub stars, and 300+ contributors, Prowler is the open standard for cloud security.
Ask Lighthouse AI security questions just like a trusted colleague. Get actionable insights and remediation plans instantly. Secure your cloud programmatically with the Prowler MCP server.
💡Our Insights from this Practitioner 🔍
From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game (Full Episode here)
The Evolution: From Patch Lists to Business Impact
Twenty years ago, vulnerability management was straightforward. Organizations scanned servers quarterly, generated reports sorted by CVSS score, and worked down the critical/high list until the next audit cycle. "We were dealing with large customers...it was all about the servers and patching servers", Hibbert recalls. "They might have done a scan once a quarter to meet their compliance requirement. Then they moved it to monthly".
But the cloud changed everything.
"Today, assets are a lot more dynamic and interconnected than they were in the past", Hibbert explains. "It's not just about a server, it's about the business services that server is supporting. A server could be supporting multiple services. Those services could be spread across multiple processing units and storage units and those sorts of things".
This interconnectedness creates a fundamental challenge: traditional vulnerability management approaches, even risk-based ones, remain siloed within individual teams and tools. Infrastructure teams manage server vulnerabilities. AppSec teams manage code vulnerabilities. Cloud teams manage container and configuration issues. Each has their own priorities, their own dashboards, their own understanding of "critical".
"The team's really drowning in data", Hibbert observes. "It's not, you don't have a telemetry, it's how do you kind of work through that data? I come to you as a remediation owner, so you crack open a spreadsheet and start arguing that your data's different from my data. Right? That happens a lot".
The Exposure Management Difference: Context Over Volume
The shift to exposure management represents a fundamental change in how organizations think about security risk. Instead of optimizing for vulnerability closure rates or compliance metrics, exposure management focuses on a more strategic question: What exposures could actually impact the business, and how do we coordinate their removal across teams?
Hibbert breaks this down into several key questions that exposure management must answer:
Do I have this exposure in my environment? (Discovery is still important but no longer sufficient)
Is it reachable in my environment? (Network context, segmentation, access controls)
Is it exploitable? (Threat intelligence, exploit availability, attack complexity)
What's the blast radius if exploited? (Connected services, data access, lateral movement paths)
Which business services does this affect? (Service mapping, criticality assessment)
Who owns the risk and who fixes it? (Service owners vs. remediation teams)
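These questions can be read as a scoring rubric. The toy model below combines them into a single priority score; the weights are purely illustrative assumptions, not Brinqa's or anyone's actual model.

```python
# Toy priority model over Hibbert's exposure questions; weights are invented
# for illustration and would be tuned per organization.

def exposure_priority(present, reachable, exploitable, blast_radius, service_criticality):
    """Combine exposure context into a 0-100 priority score."""
    if not present:
        return 0.0                        # not in the environment: nothing to do
    score = 20.0                          # baseline: the exposure exists
    score += 20.0 if reachable else 0.0   # network context, segmentation
    score += 25.0 if exploitable else 0.0 # exploit availability, complexity
    score += 20.0 * blast_radius          # 0.0-1.0: connected services, data reach
    score += 15.0 * service_criticality   # 0.0-1.0: business importance
    return round(score, 1)

# An exploitable, reachable flaw on a critical customer-facing service:
print(exposure_priority(True, True, True, blast_radius=0.8, service_criticality=1.0))   # 96.0
# The same CVE on an unreachable, low-value internal box:
print(exposure_priority(True, False, False, blast_radius=0.1, service_criticality=0.2)) # 25.0
```

The point of the sketch is the shape, not the numbers: two instances of the same CVE land 70 points apart once business context is applied, which is exactly what a flat CVSS sort cannot express.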
"It's a shift from reporting to remediation", Hibbert emphasizes. "Exposure management is all about getting that prioritization done at a finer level, but then mobilizing and orchestrating the remediation that needs to happen to remove that risk from the environment across the different teams".
This distinction is critical. Risk-based vulnerability management gives you better prioritized lists. Exposure management gives you coordinated action.
The Ownership Problem: Service Owners vs. Remediation Teams
One of the most persistent challenges in vulnerability management has always been ownership. Who's responsible for fixing what? In traditional environments, the answer was simple: if it's on your server, it's your problem. But modern architectures don't work that way.
"The server team's not gonna know which one of these services is most important to the business", Hibbert points out. "It would be the service owners that understand, am I willing to accept this risk?"
He provides a concrete example: "If a server has multiple applications or is supporting multiple services, one might be a service that's providing a service to your internal employees. It might also be providing support for a service that's providing a service to your customers. The server team's not gonna know which one of these services is most important to the business".
This leads to a crucial distinction in exposure management: risk owners versus remediation owners.
Risk owners (typically service owners or business unit leaders) understand business impact and make decisions about risk acceptance or mitigation priority. They can answer: "Is this service revenue-generating? Is it customer-facing? What's the SLA? What's the business impact if it goes down?"
Remediation owners (infrastructure, cloud, AppSec teams) have the technical expertise and access to implement fixes. They can answer: "How do we patch this? What's the change control process? What's the testing requirement? What's the rollback plan?"
"I own the risk as the service owner", Hibbert explains. "But when exposures happen that could impact my service, I have to mobilize those remediation teams to go fix it. Those teams could be, if it's in the code, could be the code team. If it's in the cloud, it could be the cloud team. If it's on the server, it could be a server team".
This separation enables more effective decision-making because business context and technical execution are properly aligned.
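That separation can be encoded as two lookup tables: services map to the business unit that owns the risk, and technology layers map to the team that performs the fix. A minimal sketch, with all service and team names hypothetical:

```python
# Hypothetical ownership mappings; in practice these come from a CMDB
# or service catalog rather than hard-coded dictionaries.
SERVICE_OWNERS = {"checkout-api": "payments-bu", "intranet": "it-ops"}
REMEDIATION_TEAMS = {"code": "appsec", "cloud": "cloud-eng", "server": "infra"}

def route_exposure(service, layer):
    """Pair the business risk owner with the team that performs the fix."""
    return {
        "risk_owner": SERVICE_OWNERS.get(service, "unassigned"),
        "remediation_owner": REMEDIATION_TEAMS.get(layer, "triage"),
    }

print(route_exposure("checkout-api", "cloud"))
# {'risk_owner': 'payments-bu', 'remediation_owner': 'cloud-eng'}
```

The "unassigned" fallback matters: exposures that route to no risk owner are themselves a finding, because nobody can accept or reject that risk.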
The AI Opportunity (and Its Limits)
Given the massive volumes of security data organizations must process (vulnerabilities, configurations, threat intelligence, asset context, network topology, identity permissions), AI seems like an obvious solution. And Hibbert agrees, with important caveats.
"AI is certainly a great way to help you kind of do that", he says. "It can help you make the connections that as a human are difficult to make. So if you start to look at attacker behavior, attack techniques, the way that these exploits are being leveraged, what mitigating controls you have in place...there's just a number of different data elements that could be tied together. And the use of AI certainly helps you make those connections very quickly".
But here's the critical insight: AI is only as good as the data foundation it's built on.
"You have to trust the opinions that these AI models are producing", Hibbert warns. "And so, for me, you know, I always tell people AI's great. Automation's always great. It can speed things up. You want it to speed the right things up. So you want to make sure that your AI is based on a sound data foundation".
This means:
Normalize data from all sources: Multiple scanners, multiple CMDBs, multiple threat feeds must speak a common language
Establish shared risk models: Teams must agree on how risk is calculated and prioritized before AI starts making recommendations
Ensure explainability: AI must show its work. Why is this vulnerability prioritized? What factors contributed to the score?
Build trust gradually: "If you don't believe in the data, if you're suspect over the opinions coming out of the AI, then you're not gonna take action based on those outputs"
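The first item, normalizing data from all sources, is the foundation the rest depends on. A minimal sketch: two hypothetical scanners with different field names get mapped onto one shared schema, so the "your data's different from my data" argument disappears. The scanner names and raw fields are invented for illustration.

```python
# Sketch: map scanner-specific fields onto a shared finding schema so teams
# argue about risk, not about whose spreadsheet is right. Both scanner
# formats here are hypothetical.

def normalize(source, raw):
    """Translate a raw scanner finding into the shared schema."""
    if source == "scanner_a":
        return {"cve": raw["cve_id"], "asset": raw["host"], "severity": raw["sev"].lower()}
    if source == "scanner_b":
        return {"cve": raw["vulnId"], "asset": raw["target"], "severity": raw["riskLevel"].lower()}
    raise ValueError(f"unknown source: {source}")

a = normalize("scanner_a", {"cve_id": "CVE-2026-0001", "host": "web-01", "sev": "HIGH"})
b = normalize("scanner_b", {"vulnId": "CVE-2026-0001", "target": "web-01", "riskLevel": "High"})
assert a == b  # both tools now describe the same exposure identically
```

Only after findings from every source land in one schema can a shared risk model, and any AI sitting on top of it, produce opinions both teams will trust.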
Regarding the holy grail of automated remediation, Hibbert is pragmatic: "Automation works well for known patterns. Like if it's a simple fix or configuration change or something you've done in the past where the AI can look back and see previous behavior and say, 'Hey, I have a 90% confidence that this is the right thing to do.'"
But for high-impact changes? "If it's a risky change, if it's something where it could have an impact on a critical service...or it could have significant damage in some way, then you may not automate it 100%. You still might want a human in the loop where you automate everything, but a human has to click the button".
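Hibbert's two-part rule, automate high-confidence known patterns but keep a human in the loop for anything touching a critical service, reduces to a small decision function. The 0.9 threshold echoes his "90% confidence" example; treat both the threshold and the function shape as illustrative assumptions.

```python
# Sketch of confidence-gated remediation: automate routine fixes, require a
# human click for risky or critical-service changes. Threshold is illustrative.

def remediation_mode(confidence, critical_service):
    """Decide whether a proposed fix runs automatically or waits for a human."""
    if critical_service:
        return "human_approval"    # automate the prep; a human clicks the button
    if confidence >= 0.9:
        return "auto_remediate"    # known pattern seen in previous behavior
    return "human_approval"

print(remediation_mode(0.95, critical_service=False))  # auto_remediate
print(remediation_mode(0.95, critical_service=True))   # human_approval
```

Note the asymmetry: high model confidence is necessary but never sufficient, since service criticality overrides confidence entirely.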
This aligns perfectly with the emerging threats we're seeing around AI agents and prompt injection attacks. As AI systems gain more autonomy in security operations, the blast radius of a compromised or misdirected AI agent increases dramatically. The guardrails matter.
Compliance vs. Impact: A Critical Distinction
One of the most persistent challenges in vulnerability management is the compliance-driven approach many organizations are forced to adopt. PCI DSS requires critical vulnerabilities to be remediated within 30 days. Other frameworks have similar requirements. But Hibbert argues these metrics miss the point.
"Compliance in many cases just proves activity", he observes. "Hey, I closed my critical vulnerability in 21 days, but it's not tied to the actual risk".
The problem is that compliance frameworks tend to lag behind the reality of modern threat landscapes. They focus on metrics that are easy to measure (number of vulnerabilities, time to patch, coverage percentages) rather than outcomes that actually matter (risk reduction to business-critical services, prevention of real-world attacks).
"Compliance proves activity", Hibbert says. "Exposure management proves impact: risk reduction in the environment. So you tie it back to, again, I removed this much risk from this particular business service as opposed to like, 'Hey, I cleared off my critical vulnerabilities off my servers.'"
This doesn't mean ignoring compliance requirements. It means understanding that compliance is a floor, not a ceiling. Organizations that only optimize for compliance metrics are playing a dangerous game: they're measuring activity while adversaries are measuring opportunity.
The Maturity Path: How to Start Without Boiling the Ocean
For organizations looking to evolve from risk-based vulnerability management to exposure management, Hibbert's advice is consistent: start small, prove value, then scale.
"Don't try to boil the ocean", he cautions. "Pick an area of the business that you want to evolve beyond risk-based vulnerability management to an area where risks are visible, where they're painful in the organization today. So that could be on a certain set of services. Maybe it's one service or two services that you want to focus on getting that next level of prescription. Maybe it's your external attack surface...whatever the area is within your organization, focus on that first".
The key is demonstrating actual risk reduction to business services, not just improved metrics: "Show that you have success. We have many customers of ours who have dozens of different applications that they've kind of brought into the program, but they started with two. And then kind of once they showed some value and showed how they're making better decisions that better aligned to the business itself, then they expanded".
He recommends several prerequisites before beginning an exposure management initiative:
1. Executive commitment: "There has to be executive commitment at the top layer that exposure management is a discipline that an organization wants to embrace".
2. Shared understanding of risk: "You have to have that shared understanding of risk across the different teams...in many cases, I would say there needs to be a shared incentive program. So they're all working towards the same goal across the different teams".
3. Dedicated resources: "Most of the programs that we've seen...just to drive the accountability and the movement across the different teams and coordinate the activities across the teams, it does require some resourcing and some investment".
4. Start with friendlies: Begin with teams that understand the limitations of current approaches and are motivated to try something new. Success with early adopters builds momentum.
5. Accommodate existing workflows: "You don't wanna force them to some sort of corporate way of doing things. We've got customers who use 20 different flavors of Jira, and we can accommodate that. We're not asking them all to change the way they develop code, but we're showing them where in their code that they should fix exposures by using a broader, more business-aligned prioritization model".
The Data Consistency Challenge
One recurring theme in Hibbert's experience is the "competing spreadsheets" problem. Different teams use different tools, which produce different data, which leads to endless debates about whose numbers are correct.
"What you don't wanna do in an exposure management program is I come up with this great list of things I need to get done, but then I come to you as the remediation owner and I say, 'Hey, can you go fix this?' And you crack open a spreadsheet and start arguing that your data's different from my data. Right? That happens a lot".
The solution isn't to force everyone onto the same tooling; that's neither practical nor desirable, as different teams need different capabilities. Instead, exposure management platforms sit above the tools, normalizing and correlating data to create a single source of truth for prioritization decisions.
"Having that shared understanding of risk and how you're gonna calculate risk is critical", Hibbert explains. "And then having explainability on why...you feel that your data has given you the best decisioning capability that you can, is pretty important".
This normalization layer also helps address another common challenge: legacy systems and technical debt. "You're never gonna start with perfect data, perfect systems", Hibbert acknowledges. "But pick an area of the business that you want to focus on...and then focus on the decisions. Don't focus on the scope of the program".
📚 Resources & Further Reading
AI TRiSM Overview by Gartner - Understanding the AI Trust, Risk, and Security Management category
AWS Security Best Practices - Official AWS security guidance
Microsoft Cloud Security Benchmark - Azure security baseline recommendations
Google Cloud Security Foundations Guide - GCP security architecture patterns
Brinqa Exposure Management Platform - Enterprise-grade exposure management and decision orchestration
Cloud Security Podcast
Question for you? (Reply to this email)
🤔 Is your team still fighting spreadsheet wars over vulnerability priorities? Which service owner will you pilot exposure management with first?
Next week, we'll explore another critical aspect of cloud security. Stay tuned!
📬 Want weekly expert takes on AI & Cloud Security? [Subscribe here]
We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.
Thank you for continuing to subscribe, and welcome to the new members of this newsletter community 💙
Peace!
Was this forwarded to you? You can sign up here to join our growing readership.
Want to sponsor the next newsletter edition? Let's make it happen.
Have you joined our FREE Monthly Cloud Security Bootcamp yet?
Check out our sister podcast, AI Security Podcast.


