
Bloomberg reported on April 16 that OpenAI, Anthropic PBC, and Google have begun cooperating through the industry nonprofit Frontier Model Forum, sharing information to detect and prevent adversarial AI model data distillation that violates their terms of service. OpenAI confirmed to Bloomberg that it participates in the information-sharing effort.
Frontier Model Forum was co-founded in 2023 by OpenAI, Anthropic, Google, and Microsoft. According to Bloomberg, the three companies currently exchange information through the organization with the goal of identifying large-scale adversarial data requests, tracking distillation attempts that violate terms of service, and coordinating to block the related activity. Google previously said in a blog post that it has seen attempts to extract model information increase.
Bloomberg reports that this type of information-sharing model aligns with established practices in the cybersecurity industry, which has long strengthened defenses by exchanging attack data and adversary strategies across companies.
In a memorandum submitted to the U.S. House Select Committee on China in February 2026, OpenAI accused the Chinese AI company DeepSeek of attempting to “ride on the OpenAI and other U.S. frontier lab-developed technology,” saying DeepSeek continues to use data distillation techniques to extract outputs from U.S. models in order to develop new versions of its chatbot, and that its methods are becoming increasingly sophisticated.
In a February 2026 statement, Anthropic identified three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—and accused them of unlawfully extracting Claude model capabilities through adversarial distillation. Anthropic had already barred Chinese companies from using Claude in 2025. Bloomberg also previously reported that after DeepSeek released its R1 reasoning model in January 2025, Microsoft and OpenAI opened investigations into whether it had illegally extracted U.S. model data.
According to publicly released policy documents, officials in the Trump administration have indicated they are willing to promote information sharing among AI companies to address the threat of adversarial distillation. The “Artificial Intelligence Action Plan” unveiled by President Trump explicitly calls for establishing information-sharing and analysis centers, in part to address such activity.
Bloomberg reported that the scope of the three companies’ current information sharing on distillation remains limited, because they are unsure which information can be shared in compliance with existing antitrust guidance. The three companies have said they intend to expand the cooperation once the U.S. government lays out a clear guidance framework.