A top law firm that bills more than $2,000 per hour has had its court filings exposed as riddled with AI hallucinations and a string of errors.

ChainNews ABMedia

Top U.S. law firm Sullivan & Cromwell recently issued a public apology to a federal judge after the bankruptcy-court filings it submitted turned out to be full of AI-generated misinformation, including fake case names, fabricated citations, and non-existent statutes, errors that were ultimately exposed by opposing counsel. Ironically, the firm's partners bill $2,000 per hour, yet even basic review and proofreading failed.

Top law firm screws up! Exposed for using AI to write documents riddled with errors

The incident arose in a bankruptcy case before the U.S. Bankruptcy Court in Manhattan involving the Cambodian conglomerate Prince Group. Sullivan & Cromwell appeared as counsel for the liquidator appointed by the British Virgin Islands authorities, but the court filings it submitted contained more than thirty AI-generated errors.

These errors were not caught internally by the firm; they were revealed in a public filing by Boies Schiller Flexner, the opposing law firm in the case. The errors included citations to cases that do not exist, quotations that were never said or written, and entirely fabricated provisions of the U.S. Bankruptcy Code.

In a letter dated April 18 to Judge Martin Glenn, Andrew Dietderich, head of Sullivan & Cromwell's global restructuring practice, admitted that some of the errors were AI hallucinations.

Internal review a mere formality? Partners who bill $2,000 per hour

In the letter, Dietderich acknowledged that the firm has "comprehensive policies and training requirements" for the use of AI tools. Before lawyers are granted access to AI tools, they must complete training courses that explicitly require them to "believe nothing and verify everything personally." Those policies, however, were not followed when the filing in question was prepared, and the secondary review process meant to provide oversight caught none of the errors.

Given that the partners bill more than $2,000 per hour, the matter has sparked widespread discussion in legal circles. In the letter, the firm said that after discovering the errors it conducted a comprehensive review of all other documents in the case, confirmed that the AI hallucinations appeared only in that one filing, and subsequently submitted a corrected version.

AI hallucinations hit the legal world—lawyers’ ethical responsibilities once again under scrutiny

This is not the first time the legal industry has made headlines over an AI mishap. In 2023, two Manhattan lawyers were fined $5,000 by a federal judge for submitting a legal brief full of fabricated cases generated by ChatGPT. In recent years, judges have sanctioned lawyers in dozens of cases for using AI for legal research and drafting without adequately verifying the output.

The American Bar Association (ABA) has made clear that lawyers must exercise caution when using AI tools and that they have an ethical duty to ensure the accuracy of every document filed with a court. Current law does not prohibit lawyers from using AI, but the duty to verify the output still rests with the individual lawyer.

Trump’s favorite! Esteemed old-line firm apologizes

Sullivan & Cromwell, founded more than a century ago, is one of the oldest and most prestigious law firms in U.S. history, with more than 900 lawyers. It is known worldwide for its M&A, corporate governance litigation, and private equity work. The firm has also drawn recent attention for representing U.S. President Trump in multiple appeals.

This AI blunder has unquestionably left a hard-to-ignore crack in the elite firm's brand and sounded yet another warning bell for the legal industry as a whole.

The article "Top law firm charges over $2,000 per hour; court filing exposed as riddled with AI hallucinations and errors" first appeared on ChainNews ABMedia.
