A developer gets a LinkedIn message from a recruiter. The role looks legitimate. The coding assessment requires installing a package. That package exfiltrates the cloud credentials on the developer's machine — GitHub personal access tokens, AWS API keys, Azure service principals and more — and the adversary is inside the cloud environment within minutes.
Your email security never saw it. Your dependency scanner might have flagged the package. Nobody was watching what happened next.
The attack chain is quickly becoming known as the identity and access management (IAM) pivot, and it represents a fundamental gap in how enterprises monitor identity-based attacks. CrowdStrike Intelligence research published on January 29 documents how adversary groups operationalized this attack chain at an industrial scale. Threat actors are cloaking the delivery of trojanized Python and npm packages through recruitment fraud, then pivoting from stolen developer credentials to full cloud IAM compromise.
In one late-2024 case, attackers delivered malicious Python packages to a European FinTech company through recruitment-themed lures, pivoted to cloud IAM configurations and diverted cryptocurrency to adversary-controlled wallets.
From entry to exit, the attack never touched the corporate email gateway, leaving defenders with little of the evidence their tooling is built to collect.
On a recent episode of CrowdStrike’s Adversary Universe podcast, Adam Meyers, the company's SVP of intelligence and head of counter adversary operations, described the scale: More than $2 billion associated with cryptocurrency operations run by one adversary unit. Decentralized currency, Meyers explained, is ideal because it allows attackers to avoid sanctions and detection simultaneously. CrowdStrike's field CTO of the Americas, Cristian Rodriguez, explained that revenue success has driven organizational specialization. What was once a single threat group has split into three distinct units targeting cryptocurrency, fintech and espionage objectives.
That case wasn’t isolated. The Cybersecurity and Infrastructure Security Agency (CISA) and security company JFrog have tracked overlapping campaigns across the npm ecosystem, with JFrog identifying 796 compromised packages in a self-replicating worm that spread through infected dependencies. The research further documents WhatsApp messaging as a primary initial compromise vector, with adversaries delivering malicious ZIP files containing trojanized applications through the platform. Corporate email security never intercepts this channel.
Most security stacks are optimized for an entry point that these attackers abandoned entirely.
When dependency scanning isn’t enough
Adversaries are shifting entry vectors in real time. Trojanized packages aren't arriving through typosquatting as in the past — they're hand-delivered via personal messaging channels and social platforms that corporate email gateways don't touch. CrowdStrike documented adversaries tailoring employment-themed lures to specific industries and roles, and observed deployments of specialized malware at FinTech firms as recently as June 2025.
CISA documented this at scale in September, issuing an advisory on a widespread npm supply chain compromise targeting GitHub personal access tokens and AWS, GCP and Azure API keys. The malicious code scanned for credentials during package installation and exfiltrated them to external domains.
Dependency scanning catches the package. That’s the first control, and most organizations have it. Almost none have the second, which is runtime behavioral monitoring that detects credential exfiltration during the install process itself.
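What that second control looks for is concrete: an installer process touching credential stores it has no business reading. The sketch below is a minimal illustration of the idea, not any vendor's detection logic — the event schema, process names and paths are all assumptions for the example.

```python
# Minimal sketch: flag credential-store reads during a package install.
# The event format, process names and sensitive paths are illustrative
# assumptions, not a real endpoint agent's schema.
SENSITIVE_PATHS = (
    ".aws/credentials", ".azure/", ".config/gcloud/", ".npmrc", ".git-credentials",
)
INSTALLERS = {"pip", "pip3", "npm", "node", "python"}

def flag_install_exfil(events):
    """events: dicts with 'parent' (command line) and 'path' (file read).
    Returns events where an install-time process touched a credential store."""
    hits = []
    for e in events:
        parent_tokens = e["parent"].split()
        if any(tok in INSTALLERS for tok in parent_tokens) and \
           any(p in e["path"] for p in SENSITIVE_PATHS):
            hits.append(e)
    return hits

events = [
    {"parent": "pip install some-assessment-pkg", "path": "/home/dev/.aws/credentials"},
    {"parent": "code", "path": "/home/dev/project/app.py"},
]
print(flag_install_exfil(events))  # flags only the install-time credential read
```

In practice this correlation runs in an EDR or runtime security agent, but the principle is the same: the package install is expected, the credential read during it is not.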
“When you strip this attack down to its essentials, what stands out isn’t a breakthrough technique,” Shane Barney, CISO at Keeper Security, said in an analysis of a recent cloud attack chain. “It’s how little resistance the environment offered once the attacker obtained legitimate access.”
Adversaries are getting better at creating lethal, unmonitored pivots
Google Cloud’s Threat Horizons Report found that weak or absent credentials accounted for 47.1% of cloud incidents in the first half of 2025, with misconfigurations adding another 29.4%. Those numbers have held steady across consecutive reporting periods. This is a chronic condition, not an emerging threat. Attackers with valid credentials don’t need to exploit anything. They log in.
Research published earlier this month demonstrated exactly how fast this pivot executes. Sysdig documented an attack chain where compromised credentials reached cloud administrator privileges in eight minutes, traversing 19 IAM roles before enumerating Amazon Bedrock AI models and disabling model invocation logging.
Eight minutes. No malware. No exploit. Just a valid credential and the absence of IAM behavioral baselines.
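Detecting that kind of traversal means stitching individual role assumptions into a chain and alerting on its depth and speed. The sketch below follows AssumeRole hops through CloudTrail-style records; the field names and ARNs are fabricated for illustration, and a real pipeline would also weigh timing and privilege escalation along the chain.

```python
# Sketch: reconstruct a role-assumption chain from AssumeRole-style events
# to surface deep IAM traversal. Records and field names are fabricated
# for illustration; real CloudTrail events carry far more detail.
def role_chain(events, start_arn):
    """Follow AssumeRole hops from a starting principal; return ordered role ARNs."""
    by_source = {e["source"]: e["assumed_role"] for e in events}
    hops, current = [], start_arn
    while current in by_source:
        nxt = by_source.pop(current)  # pop prevents infinite loops on cycles
        hops.append(nxt)
        current = nxt
    return hops

events = [
    {"source": "arn:aws:iam::111:user/dev", "assumed_role": "arn:aws:iam::111:role/ci"},
    {"source": "arn:aws:iam::111:role/ci", "assumed_role": "arn:aws:iam::111:role/deploy"},
    {"source": "arn:aws:iam::111:role/deploy", "assumed_role": "arn:aws:iam::111:role/admin"},
]
chain = role_chain(events, "arn:aws:iam::111:user/dev")
if len(chain) >= 3:
    print("ALERT: deep role traversal:", " -> ".join(chain))
```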
Ram Varadarajan, CEO at Acalvio, put it bluntly: Breach speed has shifted from days to minutes, and defending against this class of attack demands technology that can reason and respond at the same speed as automated attackers.
Identity threat detection and response (ITDR) addresses this gap by monitoring how identities behave inside cloud environments, not just whether they authenticate successfully. KuppingerCole’s 2025 Leadership Compass on ITDR found that the majority of identity breaches now originate from compromised non-human identities, yet enterprise ITDR adoption remains uneven.
Morgan Adamski, PwC's deputy leader for cyber, data and tech risk, put the stakes in operational terms. Getting identity right, including AI agents, means controlling who can do what at machine speed. Firefighting alerts from everywhere won’t keep up with multicloud sprawl and identity-centric attacks.
Why AI gateways don’t stop this
AI gateways excel at validating authentication. They check whether the identity requesting access to a model endpoint or training pipeline holds the right token and has privileges for the timeframe defined by administrators and governance policies. They don't check whether that identity is behaving consistently with its historical pattern or probing randomly across infrastructure.
Consider a developer who normally queries a code-completion model twice a day, suddenly enumerating every Bedrock model in the account, disabling logging first. An AI gateway sees a valid token. ITDR sees an anomaly.
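The difference comes down to a behavioral baseline. A minimal sketch of the idea, assuming daily invocation counts per identity and a simple z-score threshold — real ITDR products model far richer signals, but the shape of the check is the same:

```python
# Sketch: compare today's model-API activity against an identity's
# historical baseline. The threshold and event shape are assumptions
# for illustration, not any product's detection logic.
from statistics import mean, pstdev

def is_anomalous(history, today, z_threshold=3.0):
    """history: daily invocation counts; today: today's count so far."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return today > mu * 3  # flat baseline: flag a 3x jump
    return (today - mu) / sigma > z_threshold

# A developer who queries a code model ~2x/day suddenly enumerates dozens
# of models in one morning.
history = [2, 1, 2, 3, 2, 2, 1]
print(is_anomalous(history, 40))  # → True
print(is_anomalous(history, 2))   # → False
```

The gateway's token check and this baseline check are complementary, not redundant: the first proves who the caller claims to be, the second tests whether the caller is acting like that identity ever has.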
A blog post from CrowdStrike underscores why this matters now. The adversary groups it tracks have evolved from opportunistic credential theft into cloud-conscious intrusion operators. They are pivoting from compromised developer workstations directly into cloud IAM configurations, the same configurations that govern AI infrastructure access. The shared tooling across distinct units and specialized malware for cloud environments indicate this isn’t experimental. It’s industrialized.
Google Cloud’s office of the CISO addressed this directly in their December 2025 cybersecurity forecast, noting that boards now ask about business resilience against machine-speed attacks. Managing both human and non-human identities is essential to mitigating risks from non-deterministic systems.
No air gap separates compute IAM from AI infrastructure. When a developer’s cloud identity is hijacked, the attacker can reach model weights, training data, inference endpoints and whatever tools those models connect to through protocols like model context protocol (MCP).
That MCP connection is no longer theoretical. OpenClaw, an open-source autonomous AI agent that crossed 180,000 GitHub stars in a single week, connects to email, messaging platforms, calendars and code execution environments through MCP and direct integrations. Developers are installing it on corporate machines without a security review.
Cisco’s AI security research team called the tool “groundbreaking” from a capability standpoint and “an absolute nightmare” from a security one, reflecting exactly the kind of agentic infrastructure a hijacked cloud identity could reach.
The IAM implications are direct. In an analysis published February 4, CrowdStrike CTO Elia Zaitsev warned that "a successful prompt injection against an AI agent isn't just a data leak vector. It's a potential foothold for automated lateral movement, where the compromised agent continues executing attacker objectives across infrastructure."
The agent's legitimate access to APIs, databases and business systems becomes the adversary's access. This attack chain doesn't end at the model endpoint. If an agentic tool sits behind it, the blast radius extends to everything the agent can reach.
Where the control gaps are
This attack chain maps to three stages, each with a distinct control gap and a specific action.
Entry: Trojanized packages delivered through WhatsApp, LinkedIn and other non-email channels bypass email security entirely. CrowdStrike documented employment-themed lures tailored to specific industries, with WhatsApp as a primary delivery mechanism. The gap: Dependency scanning catches the package, but not the runtime credential exfiltration. Suggested action: Deploy runtime behavioral monitoring on developer workstations that flags credential access patterns during package installation.
Pivot: Stolen credentials enable IAM role assumption invisible to perimeter-based security. In CrowdStrike's documented European FinTech case, attackers moved from a compromised developer environment directly to cloud IAM configurations and associated resources. The gap: No behavioral baselines exist for cloud identity usage. Suggested action: Deploy ITDR that monitors identity behavior across cloud environments, flagging lateral movement patterns like the 19-role traversal documented in the Sysdig research.
Objective: AI infrastructure trusts the authenticated identity without evaluating behavioral consistency. The gap: AI gateways validate tokens but not usage patterns. Suggested action: Implement AI-specific access controls that correlate model access requests with identity behavioral profiles, and enforce logging that the accessing identity cannot disable.
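One concrete way to enforce logging the accessing identity cannot disable is a guardrail policy at the organization level. The AWS service control policy below is a hedged sketch: the two `bedrock` actions are the real API actions that remove or overwrite model invocation logging configuration, while the break-glass role name is a placeholder you would replace with your own; verify the action names against current AWS documentation before deploying.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingBedrockInvocationLogging",
      "Effect": "Deny",
      "Action": [
        "bedrock:DeleteModelInvocationLoggingConfiguration",
        "bedrock:PutModelInvocationLoggingConfiguration"
      ],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/SecurityBreakGlass"
        }
      }
    }
  ]
}
```

With a policy like this in place, the logging-disable step in the Sysdig-documented chain fails even for a hijacked administrator credential, because the deny is evaluated above the compromised account's own IAM.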
Jason Soroko, senior fellow at Sectigo, identified the root cause: Look past the novelty of AI assistance, and the mundane error is what enabled it — valid credentials exposed in public S3 buckets, and a stubborn refusal to master security fundamentals.
What to validate in the next 30 days
Audit your IAM monitoring stack against this three-stage chain. If you have dependency scanning but no runtime behavioral monitoring, you can catch the malicious package but miss the credential theft. If you authenticate cloud identities but don't baseline their behavior, you won't see the lateral movement. If your AI gateway checks tokens but not usage patterns, a hijacked credential walks straight to your models.
The perimeter isn't where this fight happens anymore. Identity is.