When regulators begin testing AI trading risks, the market has already entered the "era of systemic competition."

Over the past week, several key news items about AI and financial markets have been sending out an underestimated but extremely important signal: the role of AI in trading has shifted from being a tool to becoming system infrastructure.

On the one hand, traditional financial institutions are directly integrating AI into trading execution and liquidity networks. For example, Liquidnet, a TP ICAP subsidiary, is deploying an AI assistant system to activate the more than one trillion dollars in daily liquidity that flows through its trading network. These systems no longer stay at the analysis layer; they participate directly in trade matching and execution.

On the other hand, regulators have also started to visibly accelerate their attention to AI trading behavior. The Bank of England recently stated clearly that it is testing AI’s behavior in markets through scenario simulations, focusing on whether “herding” trading will amplify market volatility in stressed environments.

When “markets are using AI” and “regulators are testing AI” happen at the same time, it essentially points to one core issue: AI is no longer a marginal variable—it is moving into the core of the financial system.

AI begins entering the execution layer, and trading structures are changing

In the past, AI was used more as an assistive tool—for example, generating signals or optimizing strategies—while the core decisions were still made by humans. But based on current market developments, this structure is changing.

Taking Liquidnet as a representative case, AI has begun to directly participate in liquidity matching and trade execution. This means trading is no longer “human decides, system executes,” but “the system directly participates in decision-making and execution.”

This kind of change exists in the crypto market as well. Derivatives account for a rising share of trading, and automated market-making and arbitrage systems are everywhere. In the short term, prices are increasingly driven by order flow and capital structure rather than by any single directional view. This structure turns counterparties from "people" into "systems."

When trading becomes a competition between systems and systems, the logic of the entire market has already changed.

The core of regulatory attention is not AI itself, but “system behavior”

The Bank of England’s focus in its AI risk testing is not the model itself, but the way AI behaves in the market. For example, regulators are especially concerned about whether multiple AI systems will make similar decisions at the same time, thereby amplifying market volatility.

This point is extremely critical.

Because once the market is driven by large numbers of systems operating on similar logic, a new risk structure emerges: individual behavior is rational, but the overall outcome is amplified. This phenomenon has already appeared in traditional quantitative trading, and could be further reinforced in AI systems.
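The amplification mechanism can be sketched with a toy simulation. All numbers here are illustrative assumptions, not calibrated to any real market: each agent is individually rational, selling once its stop level is breached, and each forced sale pushes the price down a little further. When every agent runs the same rule, one shock triggers synchronized selling; when the rules are diverse, the same shock dissipates.

```python
import random

def simulate(homogeneous, n_agents=100, steps=50, seed=0):
    """Toy cascade: each agent holds until price breaches its stop, then sells once.
    Each forced sale pushes the price down further (a crude market-impact model)."""
    rng = random.Random(seed)
    # Identical stop levels model "many systems running similar logic";
    # spread-out stops model a diverse market. Values are invented for illustration.
    if homogeneous:
        stops = [98.0] * n_agents
    else:
        stops = [rng.uniform(90.0, 99.0) for _ in range(n_agents)]
    holding = [True] * n_agents
    price = 100.0
    path = [price]
    price -= 2.5  # one exogenous shock starts the cascade
    for _ in range(steps):
        sellers = [i for i in range(n_agents) if holding[i] and price < stops[i]]
        for i in sellers:
            holding[i] = False
        price -= 0.05 * len(sellers)  # impact of the forced selling
        path.append(price)
    return path

homo = simulate(homogeneous=True)
hetero = simulate(homogeneous=False)
print(f"max drawdown with identical rules: {100 - min(homo):.2f}")
print(f"max drawdown with diverse rules:   {100 - min(hetero):.2f}")
```

The point is structural, not numerical: no single agent behaves irrationally, yet the homogeneous market produces the deeper drawdown, which is exactly the "individually rational, collectively amplified" pattern regulators are probing.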

When regulators begin focusing on this issue, at its core they are acknowledging one thing: AI has become a market participant rather than a tool.

“Predictive capability” is losing its central position

In such a market environment, a capability that has long been considered a core competitive advantage is gradually being marginalized—prediction.

Traditional trading logic holds that as long as you get the direction right, you can make profits. But under the current structure, prices are no longer determined solely by direction; they are jointly determined by liquidity, leverage structure, order paths, and execution efficiency.

This is also why a situation frequently occurs in real trading: the judgment is correct, but the result is wrong.

The problem is not “seeing it wrong,” but “doing it wrong.”

Research also shows that in today’s market environment, the advantages of AI trading systems are increasingly reflected in execution efficiency and consistency, rather than in predictive capability itself.
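The gap between "seeing it right" and "doing it right" comes down to arithmetic. In the hypothetical trade below (all figures are assumptions), the directional call is correct, yet round-trip slippage and fees exceed the size of the move:

```python
def net_pnl(entry, exit_, size, slippage_bps, fee_bps):
    """P&L of a long trade after per-side slippage and fees, both in basis points."""
    cost = (slippage_bps + fee_bps) / 10_000
    eff_entry = entry * (1 + cost)   # buy slightly worse than the quoted price
    eff_exit = exit_ * (1 - cost)    # sell slightly worse than the quoted price
    return (eff_exit - eff_entry) * size

# Direction was right: price moved up 0.30% (30 bps)...
gross = net_pnl(100.0, 100.3, size=1000, slippage_bps=0, fee_bps=0)
# ...but 10 bps slippage + 8 bps fees per side cost ~36 bps round trip.
real = net_pnl(100.0, 100.3, size=1000, slippage_bps=10, fee_bps=8)
print(f"frictionless P&L: {gross:.2f}")
print(f"realistic P&L:    {real:.2f}")
```

With a 30 bps move against roughly 36 bps of round-trip costs, the correct prediction still loses money, which is why execution efficiency, not foresight, decides the outcome.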

This means the core of trading is shifting.

The real turning point: system control capability

When AI enters the execution layer, and the market is driven by systems, the essence of competition changes as well.

In the past, it was about who was smarter; now, it is about who is more stable.

AI does not automatically create an advantage; it is merely an amplifier. If the system structure is stable, AI will amplify the advantage. If the system has flaws, AI will accelerate the exposure of risks.

This is also why, in actual trading, many models perform well in backtests but fail quickly in live trading. The problem is not the model—it is that the system cannot control execution and risk.

From this perspective, the true core capability of AI quantitative trading is not prediction, but system control capability. This includes execution stability, risk constraint mechanisms, and the ability to survive in extreme market environments.
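One concrete form of system control is a pre-trade risk gate that sits between the strategy and the venue. The sketch below is a minimal illustration (class names, fields, and thresholds are all invented for this example): it enforces a position cap, a per-order cap, and a daily-loss kill switch, and once the kill switch trips it rejects everything rather than letting the model "trade its way back."

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_position: float    # absolute position cap
    max_daily_loss: float  # kill-switch threshold
    max_order_size: float  # per-order cap

class RiskGate:
    """Every order must pass this check layer before reaching execution."""
    def __init__(self, limits: RiskLimits):
        self.limits = limits
        self.position = 0.0
        self.daily_pnl = 0.0
        self.halted = False

    def record_pnl(self, pnl: float) -> None:
        self.daily_pnl += pnl
        if self.daily_pnl <= -self.limits.max_daily_loss:
            self.halted = True  # kill switch: stop trading entirely

    def approve(self, order_size: float) -> bool:
        if self.halted:
            return False
        if abs(order_size) > self.limits.max_order_size:
            return False
        if abs(self.position + order_size) > self.limits.max_position:
            return False
        return True

gate = RiskGate(RiskLimits(max_position=100, max_daily_loss=5_000, max_order_size=25))
print(gate.approve(20))   # within all limits
gate.record_pnl(-6_000)   # loss breaches the daily threshold
print(gate.approve(1))    # kill switch now rejects even a tiny order
```

The design choice worth noting is that the gate is dumb on purpose: it knows nothing about the model's predictions, so a malfunctioning or overconfident strategy cannot argue its way past the constraints.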

Conclusion

When the market starts being dominated by systems, and when regulators begin testing AI behavior, trading has entered a new stage.

The core of this stage is no longer “who understands the market better,” but “who can control the system.”

In the past, trading was a competition of cognition; now, it is turning into a competition of systems.

And in this process, the real dividing line lies not in strategies, nor in models, but in whether the system can continue to operate and remain stable in complex markets.
