The latest report from a16z crypto, the crypto arm of American venture capital firm Andreessen Horowitz, argues that AI agents are rapidly evolving from support tools (copilots) into "economic actors," but the infrastructure underpinning their operation is still seriously lacking, with structural gaps remaining in core areas such as identity, payments, and cross-platform coordination.
AI is making the “scale” of economic activity cheap, but it’s also making “trust” harder. The role of blockchain and crypto technology is to provide a verifiable, composable, and decentralized infrastructure for this new kind of agent-based economy.
The a16z report notes that a network economy in which AI agents participate directly is already taking shape. The key question is no longer whether it will happen, but whether this system will be built on a transparent, verifiable open architecture, or will continue to rely on legacy centralized systems designed for humans.
AI agents are exploding, but “identity” is the biggest bottleneck
a16z points out that the main constraint in the current development of AI agents is no longer model capability, but “identity.” In the financial services industry, non-human identities (such as trading systems and risk control models) have long surpassed human employees at a ratio of about 100:1. As agent frameworks (such as multi-agent collaboration and automated workflows) become widespread, this ratio will keep growing.
However, these agents remain in an "untrustworthy" state: they lack standardized ways to prove their identity, permissions, and responsibilities, and they cannot carry an identity across platforms. a16z compares this to the lack of an "SSL for agents" and proposes the concept of KYA (Know Your Agent), arguing that agents will need cryptographic credentials to prove who they represent, what they are permitted to do, and what their past behavior record is.
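To make the KYA idea concrete, here is a minimal sketch of what a machine-verifiable agent credential could look like: a signed attestation binding an agent ID to its principal and permitted scopes. The function names and scope strings are illustrative, and HMAC stands in for the public-key signatures a real deployment would use; this is not a scheme from the a16z report.

```python
import hashlib
import hmac
import json

def issue_credential(issuer_key: bytes, agent_id: str, principal: str, scopes: list[str]) -> dict:
    """Issue a KYA-style credential binding an agent to its principal and permissions."""
    claims = {"agent_id": agent_id, "principal": principal, "scopes": scopes}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(issuer_key: bytes, credential: dict) -> bool:
    """A counterparty holding the issuer key can check the credential was not forged."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

key = b"issuer-secret"
cred = issue_credential(key, "agent-7", "alice@example.com", ["read:data", "pay:usdc<=100"])
assert verify_credential(key, cred)

# Tampering with the claims invalidates the signature
cred["claims"]["scopes"].append("admin:*")
assert not verify_credential(key, cred)
```

The point of the sketch is the binding: any party can check that the identity, principal, and permission set were attested by a known issuer, which is what standardized agent identity would add over today's opaque API keys.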
AI starts to participate in governance, and control becomes a new problem
As AI agents begin participating in resource allocation and decision-making systems, governance issues also emerge. a16z notes that even if decision-making is formally decentralized, if the underlying AI models are still controlled by a single company, real power remains concentrated in the hands of the model provider.
The report argues that in the future, it will be necessary to verify the AI’s training source, execution process, and decision records through cryptographic mechanisms to ensure that agents truly represent users’ will—not the interests of model suppliers. Blockchain can provide the foundation for this kind of “verifiable governance” through on-chain records and tamper-proof execution logs.
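The "tamper-proof execution logs" the report describes can be approximated off-chain with a hash chain, the same structure blockchains use: each decision record commits to the hash of the previous one, so any retroactive edit breaks every later link. A minimal sketch, with illustrative record fields not taken from the report:

```python
import hashlib
import json

def append_decision(log: list[dict], decision: dict) -> None:
    """Append a decision record linked to the previous entry's hash (a tamper-evident chain)."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev_hash, "decision": decision}, sort_keys=True)
    log.append({"prev": prev_hash, "decision": decision,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_log(log: list[dict]) -> bool:
    """Recompute each hash; any edit to an earlier record invalidates the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = json.dumps({"prev": entry["prev"], "decision": entry["decision"]}, sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_decision(log, {"model": "sha256:abc123", "action": "allocate", "amount": 10})
append_decision(log, {"model": "sha256:abc123", "action": "allocate", "amount": 5})
assert verify_log(log)

log[0]["decision"]["amount"] = 1000   # a retroactive edit is immediately detectable
assert not verify_log(log)
```

Publishing the latest hash on-chain would anchor the whole history, which is the "verifiable governance" property the report points to: users can audit which model produced which decision, after the fact.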
AI agents begin to “consume on their own,” and a new form of payment system emerges
Another rapidly emerging trend is that AI agents start participating directly in transactions. The report says AI agents can already purchase data services, compute resources, and API tools, and they complete settlement through stablecoins, forming so-called “agent-to-agent commerce.”
a16z observes that a new type of "headless merchant" is emerging: no website and no UI, with services provided purely via APIs that agents call directly to complete payments. This model poses challenges for traditional payment systems and is driving the rapid development of stablecoins and crypto payments (such as payment protocols embedded in HTTP).
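The HTTP-embedded payment flow the report alludes to typically builds on the long-reserved 402 Payment Required status code: the merchant rejects an unpaid request with a price quote, and the agent retries with payment attached. Below is a toy in-process simulation of that handshake; the price, field names, and budget logic are illustrative, and no real network or stablecoin transfer is involved.

```python
# Toy simulation of an HTTP-402-style flow between an agent and a headless merchant.
PRICE_USDC = 0.05  # hypothetical per-call price

def merchant_api(request: dict) -> dict:
    """A headless merchant: no UI, just an API that demands payment before serving."""
    if request.get("payment", 0) < PRICE_USDC:
        return {"status": 402, "error": "Payment Required", "price_usdc": PRICE_USDC}
    return {"status": 200, "data": "premium data feed"}

def agent_call(budget_usdc: float) -> dict:
    """An agent retries with a stablecoin payment when it hits a 402, within its budget."""
    response = merchant_api({"payment": 0})
    if response["status"] == 402 and response["price_usdc"] <= budget_usdc:
        response = merchant_api({"payment": response["price_usdc"]})
    return response

assert agent_call(budget_usdc=1.0)["status"] == 200   # within budget: pay and proceed
assert agent_call(budget_usdc=0.01)["status"] == 402  # over budget: refuse to pay
```

The design choice worth noting is that the price is discovered in-band: the agent needs no prior account, checkout page, or human sign-up, which is what makes fully machine-to-machine commerce possible.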
AI drives execution costs close to zero, and “verification” becomes the scarcest resource
The report emphasizes that as AI rapidly reduces execution costs, the real bottleneck will shift to "verification capability." Humans cannot review large volumes of AI decisions at the speed they are produced, so "human in the loop" oversight gradually breaks down.
In this context, without verification mechanisms, AI systems may keep optimizing the wrong metrics, producing surface-level efficiency gains while real risk accumulates, a form of "AI debt." a16z believes that trust must be "written into the system itself" rather than depending on manual checks; blockchain can provide a transparent, traceable trust foundation through verifiable records (provenance) and on-chain credentials.
As agents are able to independently carry out multi-step tasks, the user role is shifting from “doing” to “overseeing.” But this also brings new risks—ambiguous instructions may lead to wrong decisions, a single authorization may trigger complex processes, and errors may not be noticed immediately.
a16z says that in the future, systems will need clearer permission boundaries and control mechanisms—for example, defining the scope of agent behavior at the smart contract layer, or using an intent-based architecture so users only specify goals while the system handles execution details.
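A minimal sketch of the intent-based pattern described above: the user expresses only a goal and a bound, the system checks it against an explicit policy before choosing execution details. The `Intent` and `Policy` names and fields are illustrative, not terminology from the a16z report; on-chain, the same check would live at the smart contract layer.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    goal: str            # what the user wants, e.g. "buy compute"
    max_spend_usdc: float

@dataclass
class Policy:
    allowed_goals: set
    spend_cap_usdc: float

def execute(intent: Intent, policy: Policy) -> str:
    """Only execute if the intent fits inside the user-defined permission boundary."""
    if intent.goal not in policy.allowed_goals:
        return "rejected: goal out of scope"
    if intent.max_spend_usdc > policy.spend_cap_usdc:
        return "rejected: exceeds spend cap"
    # execution details (venue, routing, timing) are chosen by the system, not the user
    return f"executed: {intent.goal} (<= {intent.max_spend_usdc} USDC)"

policy = Policy(allowed_goals={"buy compute", "buy data"}, spend_cap_usdc=100.0)
assert execute(Intent("buy compute", 50.0), policy).startswith("executed")
assert execute(Intent("transfer funds", 10.0), policy).startswith("rejected")
assert execute(Intent("buy data", 500.0), policy).startswith("rejected")
```

The boundary check runs before execution rather than after, which addresses the risk the report raises: a single authorization can no longer trigger arbitrary downstream actions outside the declared scope.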
This article, "a16z's latest report: Why is blockchain the missing infrastructure piece that AI agents lack?", first appeared on Chain News ABMedia.