AI Is Driving Down the Cost of Crypto Attacks, Ledger CTO Warns

Ledger CTO Charles Guillemet says AI is reducing the cost of finding and exploiting crypto vulnerabilities "to zero." With $1.4 billion in crypto losses from hacks over the past year, the industry faces a security model that may not survive the AI era.

AI Newspaper Today · 5 min read

The Cost of Hacking Is Collapsing

Charles Guillemet, Chief Technology Officer of hardware wallet maker Ledger, has issued one of the bluntest warnings yet about the intersection of artificial intelligence and cryptocurrency security. In comments reported by CoinDesk, Guillemet said that AI is fundamentally changing the economics of cyberattacks: "Finding vulnerabilities and exploiting them becomes really, really easy. The cost is going down to zero."

That assessment comes against a backdrop of $1.4 billion in crypto losses from hacks and exploits over the past year — a figure that includes the massive Bybit hack and a string of smaller but collectively devastating attacks on DeFi protocols, bridges, and exchanges.

Guillemet's argument is not that AI has introduced entirely new categories of attack. It is that AI has made existing attack vectors faster, cheaper, and more accessible to a wider range of bad actors. The tools that once required specialized expertise are being democratized in ways the crypto industry is not prepared for.

How AI Changes the Attack Surface

The threat operates on multiple levels. AI-powered code analysis tools can scan smart contracts and protocol codebases for vulnerabilities far faster than human auditors. What once required a skilled security researcher spending weeks reviewing code can now be accomplished in hours by an AI system trained on thousands of known vulnerability patterns.
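As a toy illustration of what automated vulnerability scanning looks like at its simplest, the sketch below flags a few well-known Solidity red flags with regular expressions. This is an assumption-laden stand-in, not how production AI scanners work — real tools combine trained models with program analysis — but it conveys how cheaply known vulnerability patterns can be surfaced at scale:

```python
import re

# Toy illustration only: real AI-assisted scanners use trained models and
# program analysis, not a handful of regexes. The patterns below are
# common Solidity red flags an automated tool might surface for review.
RISKY_PATTERNS = {
    "tx.origin auth": re.compile(r"\btx\.origin\b"),
    "delegatecall": re.compile(r"\.delegatecall\s*\("),
    "unchecked low-level call": re.compile(r"\.call\{?.*\}?\s*\("),
    "block timestamp logic": re.compile(r"\bblock\.timestamp\b"),
}

def scan_contract(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for each matched pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

# Hypothetical contract fragment for demonstration.
sample = """\
function withdraw() public {
    require(tx.origin == owner);
    (bool ok, ) = msg.sender.call{value: balance[msg.sender]}("");
    balance[msg.sender] = 0;
}"""

for lineno, label in scan_contract(sample):
    print(lineno, label)
```

Pattern matching this crude produces false positives; the point is that even the naive version runs in milliseconds per contract, and AI systems trained on real exploit corpora push both precision and coverage far beyond it.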

Social engineering — already the most common vector for crypto theft — becomes more potent with AI. Large language models can generate convincing phishing emails, impersonate trusted contacts, and maintain extended conversations designed to extract private keys or seed phrases. Deepfake audio and video add another layer, making it possible to simulate calls from colleagues, executives, or even family members.

Then there is the problem of AI-generated code itself. As more developers use AI coding assistants to write smart contracts and protocol logic, they risk introducing vulnerabilities that neither they nor their AI tools fully understand. The code may compile, pass basic tests, and still contain subtle flaws that an AI-powered attacker can find more easily than the AI-assisted developer who wrote it.
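The classic shape of such a flaw is the reentrancy bug: code that performs an external call before updating its own state. The hypothetical sketch below uses plain Python classes as a stand-in for contract logic — the vault compiles (runs), passes an honest-user test, and still loses funds to a recipient whose callback re-enters `withdraw()`:

```python
# Hypothetical sketch in Python standing in for smart-contract logic.
# The bug: the vault pays out *before* zeroing the balance, so a
# malicious recipient can re-enter withdraw() from its callback and
# drain more than it deposited -- the checks-effects-interactions
# ordering flaw behind many real reentrancy exploits.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        who.receive(self, amount)    # external call first (the bug)
        self.balances[who] = 0       # state update second

class HonestUser:
    def receive(self, vault, amount):
        pass

class Attacker:
    def __init__(self):
        self.stolen = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.stolen < 2 * amount:  # re-enter exactly once
            vault.withdraw(self)      # balance not yet zeroed

vault = Vault()
alice, mallory = HonestUser(), Attacker()
vault.deposit(alice, 100)
vault.deposit(mallory, 50)
vault.withdraw(mallory)
print(mallory.stolen)  # 100: twice the 50 deposited
```

Swapping the last two lines of `withdraw()` — update state, then call out — eliminates the exploit, which is exactly the kind of ordering detail an AI assistant can get wrong and an AI attacker can find.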

The Audit Model Is Breaking

Guillemet pointed to a fundamental weakness in the crypto industry's current security approach: manual auditing. The standard practice of hiring security firms to review code before deployment has real value, but it operates on human timescales and human attention spans. Auditors miss bugs. They miss them more often when codebases grow larger and more complex, as they have across DeFi.

The alternative Guillemet advocates is formal verification — using mathematical proofs to validate that code behaves exactly as intended under all possible conditions. Formal verification is not new; it has been used in aerospace, nuclear systems, and chip design for decades. But its adoption in crypto has been slow, partly because it is expensive and time-consuming, and partly because the "move fast and ship" culture of DeFi has not historically prioritized mathematical rigor.
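Real formal verification relies on SMT solvers and proof assistants that reason symbolically over unbounded inputs; the toy sketch below conveys the mindset with the simplest possible substitute, an exhaustive check over a small bounded state space. Rather than sampling a few test cases, it proves an invariant — transfers conserve total supply — for every input in the space (all names here are illustrative):

```python
from itertools import product

# Toy illustration of the formal-verification mindset: instead of
# spot-checking a few inputs, exhaustively establish an invariant over
# the whole (bounded) input space. Real tools do this symbolically
# for unbounded domains.

def transfer(balances, src, dst, amount):
    """Return post-transfer balances; leave them unchanged on failure."""
    if src == dst or balances[src] < amount:
        return balances
    out = list(balances)
    out[src] -= amount
    out[dst] += amount
    return out

def verify_conservation(max_balance=4, accounts=2):
    """Check that every possible transfer conserves total supply.

    Returns None if the invariant holds everywhere, otherwise the
    first counterexample found.
    """
    for balances in product(range(max_balance + 1), repeat=accounts):
        for src, dst in product(range(accounts), repeat=2):
            for amount in range(max_balance + 1):
                after = transfer(list(balances), src, dst, amount)
                if sum(after) != sum(balances):
                    return (balances, src, dst, amount)
    return None

print(verify_conservation())  # None: invariant holds everywhere
```

The expensive part of real formal verification is not running such checks but specifying the invariants precisely and scaling the reasoning to unbounded state — which is where the cost and the cultural resistance Guillemet describes come from.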

AI may be changing that calculus on the defensive side as well. If AI can find vulnerabilities faster, it can also verify code faster. The question is whether the defense adopts these tools quickly enough to keep pace with the offense.

The Agentic AI Problem

A less discussed but potentially more dangerous dimension is the rise of agentic AI — autonomous systems that can execute multi-step tasks without human oversight. In the crypto context, this means AI agents that can identify a vulnerability, craft an exploit, execute a transaction, and launder the proceeds through mixers or cross-chain bridges, all without a human attacker needing to intervene at any step.

This is not science fiction. The building blocks exist today. Exploit identification can be automated. Transaction construction can be automated. On-chain laundering patterns are well-documented. The gap between these individual capabilities and a fully autonomous attack chain is narrowing.

For an industry built on the premise that "code is law" and that trustless systems eliminate the need for human intermediaries, the prospect of AI agents exploiting those same trustless systems is a deep architectural challenge — not just a security patch away from resolution.

Cold Storage and Operational Security

For individual crypto holders, Guillemet's advice is straightforward and somewhat grim: "You can't trust most of the systems that you use." The implication is that users should minimize their exposure to internet-connected systems, keep significant holdings in cold storage, and practice operational security habits that assume every digital interaction could be compromised.

Hardware wallets — Ledger's core business — are one piece of this picture. By keeping private keys on a dedicated device that never connects to the internet directly, users create an air gap that AI-powered attacks cannot easily bridge. But hardware wallets only protect what is stored on them. Users who interact with DeFi protocols, approve smart contract transactions, or maintain hot wallets for active trading remain exposed.

The broader message is that the crypto industry's security assumptions were built for a world where attacks required significant human skill and effort. AI is removing both constraints. The $1.4 billion lost over the past year may look modest compared to what is coming if the industry does not adapt its defenses to match the new threat environment.

An Industry at an Inflection Point

Guillemet is not the first security expert to raise these concerns, but his position as CTO of the most prominent hardware wallet company gives the warning particular weight. Ledger secures billions of dollars in crypto assets and has its own history of security incidents — including a 2020 customer data breach that led to targeted phishing campaigns.

The crypto industry faces a choice: invest heavily in formal verification, hardware-based security, and AI-powered defensive tools now, or continue relying on audit-and-patch approaches that were designed for a slower-moving threat landscape. The economics of AI-powered attacks suggest that the window for making that transition is narrowing faster than most in the industry realize.

