Claude accounts hit by large-scale fraudulent charges! Victims in Taiwan and Canada lose tens of thousands of TWD; take these three steps now to protect yourself

ChainNewsAbmedia

Recently, multiple Claude AI users have warned in Facebook groups and Reddit posts that the credit cards linked to their Anthropic accounts were being repeatedly charged without authorization. Attackers carried out large purchases through the platform's "Gift Subscriptions (Gift)" feature. Several victims from Taiwan, Canada, and the United States reported losses of more than ten thousand TWD each, drawing widespread attention.

Malicious Chrome extension lurked for three years, quietly bypassing passwords and two-factor authentication

A Taiwanese victim, Mr. Hong, posted in the Claude Taiwan Facebook group that the incident traced back to software he had downloaded in April 2023. During that process, he unknowingly installed a malicious Chrome extension called "Start New Tab Search." The program belongs to the Adware.NewTab family and had been lurking for as long as three years.

The extension has permission to intercept HTTP requests and continuously steals users' cookies and session tokens in the background. Once an attacker holds a valid session token, they need neither the account password nor two-factor authentication (2FA); they can make purchases with the user's account directly. This is also why every countermeasure the victims took afterward, such as pausing the card, changing passwords, and enabling 2FA, failed to stop the fraud.
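To see why the stolen token alone is enough, consider a minimal sketch of typical session-based authentication (hypothetical server logic, not Anthropic's actual code): the password and 2FA check happen only once at login, and every later request is trusted on the token alone.

```python
# Hypothetical sketch of session-token authentication. Assumed names
# (sessions, login, handle_request) are illustrative, not any real API.
sessions = {}  # token -> user; a real server stores this server-side

def login(user, password_ok, twofa_ok):
    """Password and 2FA are verified here, and only here."""
    if password_ok and twofa_ok:
        token = "tok-" + user  # real servers issue long random tokens
        sessions[token] = user
        return token
    return None

def handle_request(token, action):
    """Later requests check only the token; 2FA never re-enters the flow."""
    user = sessions.get(token)
    if user is None:
        return "401 Unauthorized"
    return f"200 OK: {user} did {action}"

tok = login("alice", True, True)
print(handle_request(tok, "purchase gift subscription"))
# Note: changing the password does not clear `sessions`, so a stolen
# token stays valid until the server explicitly revokes all sessions.
```

This is consistent with what the victims observed: only logging out of all devices (which revokes sessions server-side) can invalidate a token the attacker already holds, and even that does not help if the malware immediately steals the new one.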

Four charges within three days, changing cards didn’t help—Anthropic interface flaw exposed

Mr. Hong said that in the early hours of April 16, he found his account had been automatically charged to purchase the “Gift Max 5X” plan. Even though he immediately took all standard security measures—pausing the card, changing his password, enabling two-factor authentication, logging out of all devices, revoking API Keys, and switching to a new payment method—the fraudulent charges continued occurring until April 20.

In the end, the attackers completed four transactions, costing Mr. Hong $400 in total. During that time, his phone kept receiving Mastercard 3D Secure verification messages and Stripe verification codes, indicating that the attackers were repeatedly attempting to charge the new card as well.

He also pointed out that Anthropic's billing interface offers no "remove credit card" option, only "update payment method (Update)," making it impossible for users to detach a card from their account entirely.

Victims in Taiwan and abroad come forward; similar card-fraud cases surface on Reddit

Notably, a Canadian user posted on Reddit's r/ClaudeAI forum saying their credit card had been used to purchase a "Gift Max 20x" gift subscription, with losses of about 950 CAD (roughly $700 USD), and that the charges likewise recurred repeatedly.

He pointed out that on the consumer review site Trustpilot, multiple users from the Netherlands, the UK, and the US reported similar cases.

Anthropic customer support nearly unresponsive; contacting the credit card company is the fastest remedy

Both victims faced the same dilemma: Anthropic's general support address, support@anthropic.com, provided almost no timely assistance. After Mr. Hong reported the issue on April 18, he sent four follow-up emails, but for 72 hours received no response from a human, only automated replies from the Fin AI agent. The Canadian user likewise described the Fin AI support as extremely poor.

At present, both have turned to their credit card companies to file chargeback disputes, which has become the quickest remedy available to the victims. Mr. Hong also suggested that anyone trying to reach the Anthropic team email both usersafety@anthropic.com and disclosure@anthropic.com at the same time, which may improve the odds of a direct response.

How to protect yourself? Three steps to immediately check your Claude account

In response to this ongoing attack, victims are urging all Claude users to take the following protective measures immediately.

First, log in to claude.ai right away, go to "Settings → Billing → Invoices," and check for any unauthorized "Gift Max" charge records. If you find any, contact the issuing bank immediately to file a chargeback dispute; do not wait for Anthropic customer support to respond.

Next, open Chrome's extensions management page (chrome://extensions/), carefully review all installed extensions, and remove any you don't recognize, that come from suspicious developers, or that you don't remember installing yourself. These malicious programs often masquerade as tools for "enhancing or beautifying the interface."

Finally, submit an official support ticket to Anthropic, and at the same time email both usersafety@anthropic.com and disclosure@anthropic.com to improve your chances of getting a real person to handle it.

The victims also hope that Anthropic can quickly strengthen the platform’s protection measures, including enabling users to truly remove payment methods, adding second-factor verification for Gift transactions made in a short time window, and automatically freezing accounts after users report scams.
