AI agent traffic has surpassed human traffic, accounting for 51% of online activity. However, trust in fully autonomous agents has dropped from 43% to 22%. To enable a true agent economy, three foundational layers—discoverability, identity verification, and reputation systems—are indispensable. This article is based on Vaidik Mandloi’s piece “Know your Agent,” edited and translated by Dongqu.
The promise that AI agents will reshape the internet is gradually becoming reality. They have moved beyond experimental chat tools to become an integral part of our daily operations—from clearing inboxes and scheduling meetings to responding to support tickets. They are quietly boosting productivity, often unnoticed.
And this growth is not hype.
In 2025, automated traffic surpassed human traffic for the first time, accounting for 51% of total online activity. AI-driven traffic to U.S. retail sites alone grew 4,700% year over year. AI agents now operate across systems: many can access data, trigger workflows, and even initiate transactions.
However, trust in fully autonomous agents has fallen from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys for agent authentication, a method never designed for autonomous systems to transfer value or act independently.
The problem is: the pace of agent expansion outstrips the infrastructure meant to govern them.
In response, new protocol layers are emerging. Stablecoins, card network integrations, and standards like x402 are enabling machine-initiated transactions. Simultaneously, new identity and verification layers are under development to help agents recognize themselves and operate within structured environments.
But enabling payments does not equal enabling an economy. Once agents can transfer value, more fundamental questions arise: How do they discover suitable services in a machine-readable way? How do they prove their identity and authorization? How do we verify that the operations they claim to perform actually occurred?
This article explores the infrastructure needed for large-scale, agent-driven economic execution and assesses whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.
Before agents can pay for services, they must first discover them. That sounds simple, but today it is where the most friction lies.
The internet was built for humans to read pages. Search engines return ranked links based on human-centric optimization. These pages are filled with layouts, trackers, ads, navigation bars, and stylistic elements—meaningful to humans but mostly “noise” to machines.
When agents request the same pages, they receive raw HTML. A typical blog post or product page can run to around 16,000 tokens in this form. Converted to clean Markdown, the same content drops to about 3,000 tokens, a reduction of roughly 80%. For a single request the difference is negligible; when agents make thousands of such requests across multiple services, the overhead compounds into delay, cost, and added reasoning complexity.
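The overhead is easy to see with a toy example. The sketch below uses only the Python standard library; the sample page and the whitespace-based token estimate are illustrative stand-ins for a real page and a real tokenizer:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only visible text, discarding tags, scripts, and styles."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def rough_tokens(text):
    # Crude proxy: one "token" per whitespace-separated chunk.
    return len(text.split())

# A miniature page: styling, tracking, navigation, ads, plus a little real content.
page = """<html><head><style>body{font:14px;margin:0}</style>
<script>trackPageview({id: 42, src: "ad-network"});</script></head>
<body><nav class="top-nav"><a href="/">Home</a> <a href="/shop">Shop</a>
<a href="/cart">Cart</a></nav><div class="ad banner">Buy now! Limited offer!</div>
<article><h1>Widget</h1><p>A widget costs $5 and ships in 2 days.</p></article>
<footer><a href="/privacy">Privacy</a> <a href="/terms">Terms</a></footer></body></html>"""

extractor = TextExtractor()
extractor.feed(page)
clean = "\n".join(extractor.parts)

print(rough_tokens(page), "rough tokens raw vs", rough_tokens(clean), "after stripping")
```

The absolute numbers are toy-scale, but the direction matches the article's point: most of what an agent downloads is presentation, not content.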
(Source: Cloudflare)
Ultimately, agents spend significant computational effort stripping interface elements to access the core information needed to act. This effort does not improve output quality; it merely compensates for a web designed without their needs in mind.
As agent-driven traffic grows, this inefficiency becomes more apparent. AI-driven crawling on retail and software sites has surged over the past year, now constituting a large portion of total web activity.
Meanwhile, about 79% of major news and content sites block at least one AI crawler. From their perspective, this is understandable. Agents extract content without engaging with ads, subscriptions, or traditional conversion funnels. Blocking them protects revenue.
The problem is, the web lacks a reliable way to distinguish malicious scrapers from legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure, and to the system, they look identical.
A deeper issue is that agents are not trying to “consume” pages—they are trying to discover actionable opportunities.
When a human searches "tickets under $500," a ranked list of links suffices; they can compare options and decide. An agent issuing the same query needs something entirely different: which services accept bookings, what input formats they expect, how they price, and whether payment can be settled programmatically. Few services publish this information in any machine-readable form.
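What such a machine-readable description could look like is sketched below. The manifest schema, service name, and field names are hypothetical illustrations, not an existing standard:

```python
# Hypothetical capability manifest a ticketing service might publish so an
# agent can act without scraping pages. All field names are illustrative.
manifest = {
    "service": "example-tickets",
    "capabilities": [
        {
            "action": "search_flights",
            "inputs": {"origin": "IATA code", "destination": "IATA code",
                       "max_price_usd": "number"},
            "pricing": {"model": "per_booking_fee", "fee_usd": 3.50},
            "settlement": ["card", "stablecoin"],
        }
    ],
}

def supports(manifest, action, rail):
    """Can an agent invoke `action` and settle programmatically over `rail`?"""
    return any(c["action"] == action and rail in c["settlement"]
               for c in manifest["capabilities"])

print(supports(manifest, "search_flights", "stablecoin"))  # True
print(supports(manifest, "search_flights", "wire"))        # False
```

The point is not the schema itself but that an agent can answer "can I transact here, and how?" with a dictionary lookup instead of parsing a page.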
(Source: TowardsAI)
This is why optimization is shifting from SEO (Search Engine Optimization) to AEO: optimizing for agents rather than for search rankings. If the end user is an agent, position on a results page matters less. What matters is whether a service can describe its capabilities in a way agents can interpret without guesswork. Without this, services risk becoming invisible in the growing agent economy.
(Source: Hackernoon)
Once agents can discover services and initiate transactions, the next major challenge is ensuring the other end knows who they are dealing with—identity.
Today's financial systems already handle far more machine identities than human ones: in finance, non-human identities outnumber human identities by about 96 to 1. API keys, service accounts, automation scripts, and internal agents dominate institutional infrastructure. But most were never designed to hold discretion over capital; they execute predefined commands and cannot negotiate, choose vendors, or initiate payments on open networks.
Autonomous agents are changing this boundary. If an agent can move stablecoins or trigger settlement without manual confirmation, the core question shifts from “Can it pay?” to “Who authorized it to pay?”
This is where identity becomes fundamental, and where the concept of "Know Your Agent" (KYA) emerges.
Just as financial institutions verify clients before allowing them to trade, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations: who is behind an agent, what it is authorized to do, and who is accountable for its actions.
Together, these checks form an identity stack.
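A minimal sketch of such a check, using a shared-secret HMAC as a stand-in for the public-key credentials a production system would use; all names, fields, and limits here are illustrative:

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # stand-in; real systems would use public-key signatures

def issue_delegation(agent_id, principal, limit_usd, ttl_s=3600):
    """The principal signs what the agent may do: who, for whom, how much, until when."""
    claims = {"agent": agent_id, "principal": principal,
              "limit_usd": limit_usd, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body, sig

def authorize_payment(body, sig, amount_usd):
    """KYA-style gate: authentic mandate, unexpired, and amount within its scope."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # identity: token was not signed by the principal
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        return False  # authorization has lapsed
    return amount_usd <= claims["limit_usd"]  # scope of authority

body, sig = issue_delegation("travel-bot-7", "alice@example.com", limit_usd=500)
print(authorize_payment(body, sig, 120))   # True: within the mandate
print(authorize_payment(body, sig, 9000))  # False: exceeds the mandate
```

The useful property is that the service never trusts the agent's self-description; it trusts a signed statement from the accountable principal.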
Meanwhile, standards such as the Universal Commerce Protocol (UCP), led by Google and Shopify, let merchants publish machine-readable capability lists that agents can discover and negotiate over. These act as orchestration layers and are expected to integrate into Google Search and Gemini.
(Source: FintechBrainfood)
A key subtlety is that permissionless and permissioned systems will coexist.
On public blockchains, agents can transact without centralized gatekeepers, which increases speed and composability but also regulatory exposure. Stripe's acquisition of Bridge highlights this tension: stablecoins enable instant cross-border transfers, but compliance obligations do not vanish simply because settlement occurs on-chain.
This tension inevitably involves regulators. Once autonomous agents can initiate financial transactions and interact with markets without direct human oversight, accountability issues become unavoidable. Financial systems cannot allow capital to flow through unverified or unauthorized actors—even if those actors are software fragments.
Regulatory frameworks are being adopted. For example, Colorado’s AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automation systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will no longer be optional. If discoverability makes agents visible, then identity becomes the credential that grants recognition.
Once agents start executing tasks involving money, contracts, or sensitive data, merely having an identity is insufficient. A verified agent can still hallucinate, distort its work, leak information, or perform poorly.
The key question then is: how can we prove that an agent truly completed what it claimed?
If an agent reports analyzing 1,000 files, detecting fraud patterns, or executing trades, there must be a way to verify that this computation actually occurred and that the output was not forged or corrupted. For this, we need a performance layer.
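One simple pattern, sketched below as a generic illustration rather than any particular protocol, is for the agent to publish a hash commitment binding its claimed outputs to the task inputs, so a verifier can spot-check the claim by re-running the task:

```python
import hashlib
import json

def commit(task_inputs, outputs):
    """Agent publishes a digest binding the claimed work to its inputs."""
    record = json.dumps({"in": task_inputs, "out": outputs}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def spot_check(task_inputs, claimed_outputs, published_digest, recompute):
    """Verifier checks the claim matches the commitment, then re-runs the task."""
    if commit(task_inputs, claimed_outputs) != published_digest:
        return False  # claim does not match what was committed
    return recompute(task_inputs) == claimed_outputs

# Toy task: flag transactions above a threshold.
inputs = {"txs": [120, 4800, 35], "threshold": 1000}
flag = lambda d: [t for t in d["txs"] if t > d["threshold"]]

outputs = flag(inputs)            # the agent does the work...
digest = commit(inputs, outputs)  # ...and commits to it publicly

print(spot_check(inputs, outputs, digest, flag))  # True
print(spot_check(inputs, [9999], digest, flag))   # False: forged output fails
```

Re-execution is the crudest form of verification; the approaches discussed below exist precisely to avoid redoing every computation.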
Currently, approaches such as trusted execution environments (TEEs), zero-knowledge proofs of computation, and optimistic verification backed by economic penalties are being explored.
These mechanisms attack the same core issue from different angles. But proof of execution is episodic: it verifies individual tasks, while markets need cumulative evidence. That is where reputation becomes critical.
Reputation transforms isolated proofs into a long-term performance history. Emerging systems aim to make agent efficacy portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.
Ethereum Attestation Service (EAS) allows users or services to publish signed, on-chain attestations about agent behavior. Successful task completion, accurate predictions, or compliant transactions can be recorded immutably and carried across applications.
(Source: EAS)
Competitive benchmarking environments are also emerging. Agent Arenas evaluate agents based on standardized tasks, using Elo or similar scoring systems. Recall Network reports over 110,000 participants generating 5.88 million predictions, creating measurable performance data. As these systems expand, they resemble real rating markets for AI agents.
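The Elo mechanics behind such arenas are standard and easy to sketch; the starting ratings and K-factor below are illustrative defaults, not values from any particular arena:

```python
def expected_score(r_a, r_b):
    """Probability the standard Elo model assigns to A beating B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """score_a is 1.0 if agent A won the head-to-head task, 0.0 if it lost, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# Two agents start equal; A completes the benchmarked task correctly, B does not.
a, b = elo_update(1500, 1500, 1.0)
print(a, b)  # 1516.0 1484.0
```

Because rating points are zero-sum and updates weight upsets more heavily than expected wins, repeated runs converge toward a stable ordering of agent skill, which is exactly the cumulative evidence episodic proofs lack.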
This enables reputation to be portable across platforms.
In traditional finance, agencies like Moody’s provide credit ratings to signal trustworthiness. The agent economy will need an equivalent layer to rate non-human actors. Markets will want to assess whether an agent is reliable enough to entrust with capital, whether its outputs are statistically consistent, and whether its behavior remains stable over time.
As agents gain real authority, markets will require a clear way to measure their reliability. Agents will carry verifiable performance records based on execution validation and benchmarks, with scores adjusted for quality, and permissions traceable to explicit authorizations. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.
In summary, the layers described here are beginning to form the infrastructure of the agent economy: discoverability that makes services machine-readable, identity that makes agents accountable, execution verification that proves work was actually done, and reputation that turns individual proofs into a portable track record.