AI Secrets Surge 140% as Shadow AI Opens New Front in Cyber Risk
NEW YORK – A new report reveals a dramatic 140% year-over-year surge in exposed AI-specific credentials, driven by the rapid, often ungoverned integration of artificial intelligence into enterprise systems. The findings, from SentinelOne’s AI and Cloud Verified Exploit Paths and Secrets Scanning Report, underscore a fundamental shift in the risk landscape: AI adoption is now the primary driver of cloud security threats.
Nearly 88% of organizations now use AI in at least one business function, according to the report, which analyzed telemetry from more than 11,000 anonymized customer environments. This level of integration has outpaced traditional security controls, creating a complex attack surface where exposed API keys and shadow AI deployments are the new battleground.
“The explosion of AI-specific credentials is not just a numbers story—it’s a risk story,” said Dr. Elena Voss, a cloud security researcher at SentinelOne. “Every new AI integration introduces a potential entry point that standard secrets management often misses.”
The Rise of ‘Shadow AI’ and Credential Sprawl
The report identifies a sharp increase in secrets tied to AI services, including OpenAI API keys, Azure OpenAI API keys, and other LLM credentials. This growth correlates directly with the embedding of AI into customer support systems, internal tooling, financial platforms, and product experiences.

Ubiquitous deployment has spawned a pattern known as “shadow AI”—the unsanctioned use of AI tools without formal IT approval or security oversight. Developers and internal teams frequently use unmanaged or personal LLM keys to process corporate data outside sanctioned channels. These same keys are often duplicated across code repositories, SaaS configurations, and scripts without proper access controls or rotation schedules.
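Detecting this kind of sprawl typically starts with scanning repositories and config trees for credential-shaped strings. The sketch below is a minimal, illustrative scanner: the regex patterns are rough approximations of common AI key formats (real tools such as gitleaks or trufflehog use far more precise rules plus entropy checks), and the provider names are assumptions for the example.

```python
import re
from pathlib import Path

# Illustrative patterns only -- approximate shapes of common AI API keys.
# Production scanners use vetted rules and entropy analysis, and these
# patterns can overlap (an Anthropic-style key also begins with "sk-").
AI_KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}\b"),
}

def scan_for_ai_keys(root: str) -> list[tuple[str, str]]:
    """Walk a directory tree and return (file path, provider) for each hit."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")  # skip unreadable/binary noise
        except OSError:
            continue
        for provider, pattern in AI_KEY_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), provider))
    return hits
```

Run against a checkout or SaaS config export, the hit list becomes the raw input to an inventory—each finding is a key that exists outside whatever vault the organization officially uses.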
“The sprawl renders these credentials nearly invisible to standard secrets management protocols,” noted Maria Chen, a senior threat analyst. “Centralized governance over how AI keys are issued and used is no longer optional—it’s critical.”
Distinct Risk Vectors from Unmanaged AI Credentials
Unlike traditional cloud credentials that primarily enable resource manipulation, compromised AI keys introduce unique attack paths. AI services operate at the intersection of CRM platforms, ticketing systems, and analytics tools, giving a single stolen key broad visibility into diverse datasets.
The report categorizes risks from exposed AI keys into two primary areas:

- Data exposure and leakage: Unauthorized access can expose sensitive datasets, proprietary business logic, internal prompts, and outputs, enabling attackers to harvest corporate conversations at scale.
- Prompt injection and data poisoning: Unmanaged AI keys allow threat actors to manipulate model behavior, potentially corrupting outputs or extracting hidden information.
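A first-line (and deliberately naive) defense against the injection risk above is screening untrusted input for instruction-override phrases before it reaches a model. The sketch below is illustrative only—the marker phrases are assumptions, and real deployments layer model-side guardrails, output filtering, and privilege separation on top of anything this simple.

```python
import re

# Naive heuristic markers of prompt-injection attempts; illustrative,
# not exhaustive, and trivially evadable by a determined attacker.
INJECTION_MARKERS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the system prompt",
    r"reveal (your|the) (system )?prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_MARKERS)
```

The point of such a filter is triage, not prevention: flagged inputs get logged and routed for review, which also surfaces which keys and integrations attackers are probing.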
Background
Traditional security guardrails were designed for a world where cloud risks centered on infrastructure misconfigurations. The rapid adoption of AI has shifted that paradigm, as 88% of organizations now embed large language models into daily operations—often without commensurate security controls. SentinelOne’s report draws on anonymized telemetry from over 11,000 environments to provide the first comprehensive view of how threat actors exploit modern cloud and AI infrastructures.
The 140% increase in AI-specific secrets is not an anomaly; it mirrors the explosive growth in AI tooling. With each new integration, the attack surface expands, and the line between sanctioned and shadow use blurs.
What This Means
For security teams, the message is clear: the era of treating AI keys like any other password is over. Credential sprawl and shadow AI demand new governance frameworks that track, rotate, and centrally manage every AI API key—whether issued officially or not. Without this, organizations risk data leaks, model manipulation, and widespread exposure.
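The track-and-rotate discipline described above can be made concrete with a key inventory and an age check. The record fields and the 90-day rotation window below are assumptions for illustration, not a prescribed policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical inventory record; field names are illustrative.
@dataclass
class AIKeyRecord:
    key_id: str
    provider: str
    owner: str
    issued_at: datetime  # timezone-aware
    sanctioned: bool     # issued through an approved channel?

def keys_needing_action(inventory, max_age_days=90):
    """Flag keys past the rotation window or issued outside sanctioned channels."""
    now = datetime.now(timezone.utc)
    flagged = []
    for rec in inventory:
        overdue = now - rec.issued_at > timedelta(days=max_age_days)
        if overdue:
            flagged.append((rec.key_id, "rotate"))
        elif not rec.sanctioned:
            flagged.append((rec.key_id, "review"))
    return flagged
```

Feeding this from automated discovery (rather than self-reporting) is what closes the shadow-AI gap: keys found in repositories or SaaS configs enter the inventory as unsanctioned until an owner claims them.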
“This is not a problem that can be solved with a single tool or policy,” said Dr. Voss. “It requires a fundamental shift in how we think about access, monitoring, and incident response in an AI-driven enterprise.” As 88% of organizations continue to deepen their AI reliance, the convergence of secrets and risk will only accelerate.
Related Articles
- Linux Kernel 7.0.6 Patches Dirty Frag Vulnerability: What You Need to Know
- How Russian State Hackers Exploit Old Routers to Hijack Office Authentication Tokens
- CopyFail: The Critical Linux Vulnerability That Sparks Urgent Security Alarms
- Brazilian DDoS Protection Firm's Infrastructure Hijacked to Attack Local ISPs
- How to Fortify Your German Business Against the 2025 Surge in Cyber Extortion
- Q1 2026 Cybersecurity: Vulnerability Trends and Exploitation Analysis
- Vault Secrets Operator Becomes New Standard for Kubernetes Secret Management as HashiCorp and Red Hat Deepen Partnership
- Debian's Upcoming Release Strengthens Binary Integrity Checks to Thwart Supply Chain Attacks