Netskope $5 Billion Potential IPO & AI-Powered Threats Meet Traditional Security Gaps: When Copilots Become Attack Vectors

This week's newsletter examines the explosive growth of AI security risks in enterprise environments, featuring expert insights on how Microsoft Copilot and agentic AI are fundamentally changing the threat landscape. We also cover critical zero-day exploitations, nation-state campaigns targeting cloud infrastructure, and the largest healthcare data breaches of 2025.

Hello from the Cloud-verse!

This week’s Cloud Security Newsletter topic is AI-Powered Threats Meet Traditional Security Gaps: When Copilots Become Attack Vectors (continue reading).

In case this is your first Cloud Security Newsletter: you are in good company!
You are reading this issue alongside friends and colleagues from companies like Netflix, Citi, JP Morgan, LinkedIn, Reddit, GitHub, GitLab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more, who, like you, want to learn what's new in cloud security each week from their industry peers, and many of whom also listen to the Cloud Security Podcast & AI Security Podcast every week.

Welcome to this week's edition of the Cloud Security Newsletter!

As artificial intelligence becomes deeply embedded in enterprise workflows, we're witnessing a fundamental shift in cybersecurity threats that goes far beyond traditional attack vectors. This week, we explore how AI tools like Microsoft Copilot are creating unprecedented security challenges, transforming insider threats, and forcing organizations to rethink their approach to data protection.

Our featured expert, Matthew Radolec from Varonis, brings eight years of experience running systems engineering, incident response, and managed data detection and response teams. His insights reveal how the convergence of AI capabilities and traditional security weaknesses is creating a perfect storm for data exposure.

📰 THIS WEEK'S SECURITY NEWS

🏢 Netskope IPO Preparation Signals Market Confidence

Cybersecurity firm Netskope has hired Morgan Stanley to lead preparations for a U.S. initial public offering that could raise more than $500 million, with the IPO potentially valuing the company at more than $5 billion. The cloud security provider is targeting Q3 2025 for the public offering.

Why It Matters: This represents the largest cybersecurity IPO preparation since the market downturn, signaling renewed investor appetite for cloud security solutions. For cloud security professionals, this validates the strategic importance of SASE and cloud-native security platforms. The IPO timing could catalyze increased M&A activity across the sector as competitors seek to strengthen their positions.

Source: Reuters

⚠️ Active Zero-Day Exploitations Demand Immediate Action

Google released emergency patches for Chrome, including CVE-2025-5419, an actively exploited zero-day vulnerability in the V8 JavaScript engine. Simultaneously, Microsoft's latest Patch Tuesday addressed five actively exploited zero-days, including CVE-2025-29813 (CVSS 10.0) affecting Azure DevOps Server.

Why It Matters: The Azure DevOps Server vulnerability represents a critical risk to DevSecOps pipelines, potentially allowing attackers to compromise entire software development lifecycles. Browser-based attacks remain a primary enterprise threat vector, particularly for cloud-first organizations. These active exploitations emphasize the need for automated update management and enhanced endpoint detection capabilities.
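For teams that have not yet rolled out fleet-wide patch automation, even a scripted version check can serve as a stopgap. Below is a minimal Python sketch of that idea; the minimum build constant is a placeholder you would fill in from Google's advisory for your platform, and the `google-chrome` binary name assumes Linux:

```python
import re
import subprocess

# Placeholder: set this to the minimum patched build for CVE-2025-5419
# from Google's advisory for your platform.
MIN_CHROME_VERSION = (137, 0, 7151, 68)

def installed_chrome_version() -> tuple:
    """Return the locally installed Chrome version as a tuple of ints."""
    out = subprocess.run(
        ["google-chrome", "--version"],  # Linux binary name; adjust per OS
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"(\d+)\.(\d+)\.(\d+)\.(\d+)", out)
    if not match:
        raise RuntimeError(f"Could not parse a version from {out!r}")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    current = installed_chrome_version()
    status = "OK" if current >= MIN_CHROME_VERSION else "VULNERABLE - update now"
    print(f"Chrome {'.'.join(map(str, current))}: {status}")
```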

🌐 China-Linked APTs Target Critical Infrastructure Through Business Applications

EclecticIQ analysts report that China-nexus nation-state APTs launched exploitation campaigns against critical infrastructure by targeting SAP NetWeaver Visual Composer, affecting 581 compromised systems globally. Separately, APT41 has been abusing Google Calendar for command and control operations in cyber-espionage campaigns targeting government entities.

Why It Matters: This represents a significant escalation in targeting enterprise business systems beyond traditional IT infrastructure. SAP systems often contain the most sensitive business data and are deeply integrated into cloud environments. The abuse of legitimate cloud services like Google Calendar for C2 operations underscores the need for comprehensive cloud service monitoring and anomaly detection.
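As an illustration of the kind of anomaly detection this calls for, the sketch below scans an exported list of calendar events for long, high-entropy descriptions, since encoded C2 payloads look statistically different from natural-language meeting notes. The export format, field names, and entropy threshold are assumptions for the sketch, not details of the actual APT41 tradecraft:

```python
import json
import math
from collections import Counter

ENTROPY_THRESHOLD = 4.5  # assumed cutoff; English prose is usually ~3.5-4.2 bits/char

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the given string."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def flag_suspicious_events(export_path: str) -> list:
    """Flag exported calendar events whose descriptions look like encoded blobs."""
    with open(export_path) as f:
        events = json.load(f)  # assumed format: a JSON list of event objects
    return [
        e for e in events
        if len(e.get("description", "")) > 200
        and shannon_entropy(e["description"]) > ENTROPY_THRESHOLD
    ]

if __name__ == "__main__":
    for event in flag_suspicious_events("calendar_export.json"):
        print(f"Review: {event.get('summary', '<no title>')} ({event.get('id')})")
```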

🏥 Healthcare Sector Faces Unprecedented Data Exposure

Yale New Haven Health System suffered the largest healthcare data breach of 2025, affecting more than 5.5 million individuals, shortly after Blue Shield of California announced its 4.7 million-record breach. These incidents highlight the ongoing crisis in healthcare data security.

Why It Matters: The healthcare sector continues facing unprecedented data exposure risks, with cloud misconfigurations and third-party integrations being primary attack vectors. Cloud security professionals should prioritize healthcare environments with enhanced data loss prevention and cloud access security broker (CASB) implementations.

🤖 Social Engineering Attacks Target Cloud Environments

Google's Threat Intelligence Group exposed a sophisticated campaign (UNC6040) where attackers used voice phishing to trick employees into installing a malicious version of Salesforce's Data Loader. Approximately 20 organizations across Europe and the Americas were affected, with successful data theft confirmed at multiple targets.

Why It Matters: This campaign demonstrates how attackers are combining traditional social engineering with cloud-native tools to bypass technical security controls. The targeting of legitimate cloud management tools highlights the need for strict application verification processes and enhanced employee training on cloud security practices.

Source: Reuters

CLOUD SECURITY TOPIC OF THE WEEK

The AI Security Revolution Continues: The AI-Powered Insider Threat

The rise of AI assistants and copilots in enterprise environments is fundamentally changing the nature of insider threats, creating what experts are calling the "blast radius problem" – where a single compromised account can now access and exfiltrate vast amounts of data with unprecedented ease.

Definitions and Core Concepts 📚

Before diving into our insights, let's clarify some key terms:

  • Blast Radius - The scope of data and systems that one compromised user account can access, significantly amplified by AI tools that remove technical barriers to data discovery and extraction (see the sketch after this list).

  • Agentic AI - Autonomous AI systems that can perform tasks and make decisions without direct human intervention, increasingly used in business processes and customer service.

  • Conversational Bait - A sophisticated social engineering technique where attackers use AI to conduct extended conversations with targets before launching actual phishing attempts.

  • Model Poisoning - The practice of introducing malicious or corrupted data into AI training datasets to compromise the model's integrity and decision-making capabilities.
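Blast radius, as defined above, is concrete enough to measure. Here is a minimal Python sketch, assuming illustrative in-memory permission maps (in practice you would build these from your identity provider and file-share or SaaS permission exports), that computes everything a single account can reach:

```python
# Minimal sketch of measuring an account's "blast radius": the set of
# resources reachable through its direct and group-inherited permissions.
USER_GROUPS = {
    "alice": {"engineering", "all-staff"},
}
GROUP_RESOURCES = {
    "engineering": {"repo:payments", "share:design-docs"},
    "all-staff": {"share:hr-policies", "share:finance-folder"},  # over-shared!
}
USER_RESOURCES = {
    "alice": {"mailbox:alice"},
}

def blast_radius(user: str) -> set:
    """Union of everything the user can touch, directly or via groups."""
    reachable = set(USER_RESOURCES.get(user, set()))
    for group in USER_GROUPS.get(user, set()):
        reachable |= GROUP_RESOURCES.get(group, set())
    return reachable

if __name__ == "__main__":
    radius = blast_radius("alice")
    print(f"alice can reach {len(radius)} resources: {sorted(radius)}")
```

Tightening the over-shared "all-staff" grant in this toy example immediately shrinks the account's blast radius, which is exactly the lever defenders still control before an AI assistant makes discovery trivial.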

This week's issue is sponsored by Varonis.

Redefining Data Security Strategies for a Gen AI World

AI is transforming how we work — but is your data security keeping up?

Learn from our data security experts to better understand the AI risk landscape, how to protect your data without slowing down company progress, and, better yet, how to use AI to your advantage for even stronger data protection.

Sign up today for our free session and get access to a free Generative AI risk assessment when you attend. 

💡Our Insights from this Practitioner 🔍

The Fundamental Shift in Insider Threats

Matthew Radolec presents a sobering reality about modern cybersecurity: "Hackers aren't just dropping malware and establishing persistence anymore. A lot of times they're reusing credentials – they're not breaking in, they're logging in. 86% of attacks are coming from some type of credential misuse or credential theft."

This statistic becomes even more alarming when combined with AI capabilities. As Matthew explains, "Now you put a Copilot in their hand or an agentic AI in their hands and you've removed the need to be technical to get data out. So now your insider threats, like an insider equipped with a Copilot, is just as good as a nation state actor in terms of accessing and exfiltrating data."

The implications are staggering. Traditional security awareness training focused on preventing external threats, but AI tools have democratized data access in ways that most organizations haven't anticipated. A curious employee can now ask simple questions in plain English to discover salary information, merger details, or other sensitive data that was previously protected by technical barriers.

Real-World AI Security Incidents

The conversation reveals a pattern that's becoming disturbingly common in enterprise environments. Matthew describes typical scenarios: "They do a pilot. They maybe buy a dozen licenses for Copilot, or 30 licenses for Copilot or a hundred licenses for Copilot. And they turn it on. They realize they can get to salary information, they can get passwords, they can get merger and acquisition details, all with a couple of keystrokes."

This creates a business tension that many organizations are struggling to navigate. "There's tension that builds between security and the business. And they want to move forward... they don't want security to be the reason that they don't roll out Copilot."

The most common "aha moments" organizations experience during AI pilots involve discovering that employees can easily access highly sensitive information. "Almost every time it's salary information. It's like the review cycle or the bonuses. Or it's an upcoming merger and acquisition. Or even like layoffs and things like that."

The Integrity Crisis in AI-Driven Decision Making

Beyond data confidentiality and availability, Matthew highlights a critical concern that many organizations overlook: data integrity. "AI draws the need for us to talk about integrity more now, more than ever. You need to make sure that the data you feed into the LLMs you're building is good clean data."

The consequences of compromised data integrity in AI systems can be far more severe than traditional data breaches. "If that data got altered and then fed into an AI, the outcomes of that and the consequences of that are far worse than that data getting publicly leaked because now decisions are gonna be made based on faulty data."

This insight points to an emerging threat vector: "I predict the trend where we start to see the ransomware actors start to try to poison the data that goes into the models as opposed to just capture the data that comes out of the model."
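One practical defense against this kind of poisoning is tamper evidence on the training corpus itself. The following is a minimal sketch, assuming simple text records and a JSON manifest file (both illustrative choices): hash every record when the dataset is curated, then verify the hashes again immediately before training:

```python
import hashlib
import json

def build_manifest(records: list, manifest_path: str) -> None:
    """Record a SHA-256 digest for each training record at curation time."""
    manifest = [hashlib.sha256(r.encode()).hexdigest() for r in records]
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)

def verify_manifest(records: list, manifest_path: str) -> list:
    """Return indexes of records whose content no longer matches the manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [
        i for i, r in enumerate(records)
        if i >= len(manifest) or hashlib.sha256(r.encode()).hexdigest() != manifest[i]
    ]

if __name__ == "__main__":
    data = ["Q1 revenue was $10M", "Churn was 2%"]
    build_manifest(data, "manifest.json")
    data[1] = "Churn was 40%"  # simulated poisoning between curation and training
    print("Tampered record indexes:", verify_manifest(data, "manifest.json"))
```

Any record altered between curation and training shows up as a mismatch, turning silent poisoning into a detectable integrity failure.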

Practical Security Approaches for AI Adoption

Rather than blocking AI adoption – which Matthew considers "a career limiting maneuver" – organizations need to implement practical security measures. The key principle he advocates is maintaining monitoring capabilities: "We sometimes in security need to enable the business. Then we shouldn't give up the monitoring. We should demand that we do the monitoring."

This approach involves prompt monitoring and behavioral analysis: "You can still look at what people are using the prompts for, look at the responses that they're getting and you can still say, 'Hey, why did you look up salary information for the CEO? You didn't need to do that for your job.'"
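The monitoring Matthew describes can start very simply. The sketch below is a minimal illustration, assuming a hypothetical prompt-log format with `user` and `prompt` fields; it flags prompts that touch sensitive topics so an analyst can ask exactly that follow-up question:

```python
import re

# Illustrative topic patterns; tune these to your organization's data.
SENSITIVE_PATTERNS = {
    "compensation": re.compile(r"\b(salar(y|ies)|bonus(es)?|compensation)\b", re.I),
    "m&a": re.compile(r"\b(merger|acquisition|due diligence)\b", re.I),
    "credentials": re.compile(r"\b(password|api key|secret)\b", re.I),
    "workforce": re.compile(r"\b(layoffs?|termination list)\b", re.I),
}

def review_queue(prompt_log: list) -> list:
    """Return prompts that touch sensitive topics, tagged for analyst review."""
    flagged = []
    for entry in prompt_log:  # assumed shape: {"user": ..., "prompt": ...}
        topics = [name for name, rx in SENSITIVE_PATTERNS.items()
                  if rx.search(entry["prompt"])]
        if topics:
            flagged.append({**entry, "topics": topics})
    return flagged

if __name__ == "__main__":
    log = [
        {"user": "bob", "prompt": "Summarize my meeting notes from Tuesday"},
        {"user": "eve", "prompt": "What is the CEO's salary this year?"},
    ]
    for hit in review_queue(log):
        print(f"{hit['user']}: {hit['prompt']!r} -> {hit['topics']}")
```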

The foundation remains traditional access management principles: "They've come down to an access management problem, which is the same we've been facing in security for as long as I've been in the industry... if the access management is tight, they probably don't have a data breach by AI or by their own employees or by an external threat actor."
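On the access-management side, one concrete tightening exercise is a staleness review: any grant a user has not exercised within a review window is a candidate for revocation. A minimal sketch, with the grant records and the 90-day window as assumptions:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # assumed review window

def stale_grants(grants: list, now: datetime) -> list:
    """Return grants whose last recorded use is older than the review window."""
    return [
        g for g in grants
        if g["last_used"] is None or now - g["last_used"] > STALE_AFTER
    ]

if __name__ == "__main__":
    now = datetime(2025, 6, 15)
    grants = [
        {"user": "alice", "resource": "share:finance-folder",
         "last_used": datetime(2024, 11, 2)},
        {"user": "alice", "resource": "repo:payments",
         "last_used": datetime(2025, 6, 10)},
    ]
    for g in stale_grants(grants, now):
        print(f"Revocation candidate: {g['user']} -> {g['resource']}")
```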

Security Teams Must Adopt AI to Stay Relevant

Perhaps most importantly, Matthew emphasizes that security teams themselves must embrace AI tools to remain effective: "How are you using AI? As a security professional, how are you arming your teams with AI powered tool sets? Because if you're talking about enabling your business with AI, are you enabling your security team with AI?"

He describes implementing AI in his own incident response team: "We could feed all those tickets into a large language model to write playbooks on how to handle certain types of incidents. Tremendously successful at that." The AI provides risk scores and recommendations that human analysts then combine with their own judgment to make faster, more accurate decisions.
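The episode doesn't specify which tooling Varonis used, but the ticket-to-playbook idea is straightforward to prototype against any LLM API. Here is a minimal sketch using the OpenAI Python SDK, with the model name and ticket format as placeholder assumptions:

```python
from openai import OpenAI  # pip install openai; any LLM API would work similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_playbook(tickets: list, incident_type: str) -> str:
    """Ask the model to distill past resolved tickets into a response playbook."""
    joined = "\n---\n".join(tickets)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are an incident response lead writing playbooks."},
            {"role": "user",
             "content": f"From these resolved '{incident_type}' tickets, write a "
                        f"step-by-step response playbook:\n{joined}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    tickets = [
        "Phishing email reported; sender blocked; mailbox rules audited.",
        "Credential phish; password reset; OAuth tokens revoked.",
    ]
    print(draft_playbook(tickets, "phishing"))
```

The draft still goes to a human analyst, matching the risk-score-plus-judgment workflow described above.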

The message is clear: "It's my job as a leader to give my team the tools they need to be the best possible at doing their job. I have to give them an AI powered tool set. Or I'm lagging behind."

Question for you (reply to this email)

Do you agree that the use of AI is breaking traditional security barriers?

Next week, we'll explore another critical aspect of cloud security. Stay tuned!

We would love to hear from you📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter.

Thank you for continuing to subscribe, and welcome to the new members of this newsletter community💙

Peace!

Was this forwarded to you? You can sign up here to join our growing readership.

Want to sponsor the next newsletter edition? Let's make it happen.

Have you joined our FREE Monthly Cloud Security Bootcamp yet?

Check out our sister podcast, the AI Security Podcast.