Lisa Su and Her "De-Nvidia" Battle

When Lisa Su took over as CEO of AMD in 2014, the company’s market value was less than $3 billion. Today, that number has surpassed $315 billion, a more than hundredfold increase.

AMD’s market cap explosion began in 2018. The year prior, eight Google scientists published a paper that opened a new chapter in artificial intelligence—“Attention Is All You Need”—which proposed the Transformer architecture based on attention mechanisms. Its parallel computing capabilities indirectly elevated GPU companies to new heights.

A large number of new GPU startups from China emerged during this period. Meanwhile, industry giants like Google took a different approach, pioneering the development of custom ASIC chips to optimize total cost of ownership for computing power.

Today, AI chips as the infrastructure for computing have evolved into a multi-dimensional competition. This includes not only the race for performance, capacity, and cost optimization but also reliance on sustainable energy. In the most critical supply chain segments, leading foundries like TSMC, along with Samsung and SK Hynix for memory, determine the intensity of the AI chip war based on their production capacities.

From performance to customers to capacity, Lisa Su faces an unprecedented challenge.

Regarding customers, an industry insider revealed, "Lisa Su is still very active in China, visiting clients more often than even Jensen Huang."

From what we understand, Su publicly made two trips to China in 2025: the first in March for the AI PC Innovation Summit, and the second to visit partners. Both itineraries began with Lenovo.

Su's New Path


For most of the past decade, AMD’s main competitor was Intel.

AMD gradually eroded Intel’s market share in desktops, servers, and workstations, but the AI era has changed the rules.

In 2018, AMD made a significant shift toward cloud computing, launching the Instinct series data center GPUs—their first chips designed specifically for AI workloads. For years afterward, they remained in the role of followers.

At Computex in Taipei two years ago, when asked about playing catch-up, Su did not shy away from the reality that AMD was somewhat behind. She said: "It's obvious that the demand for AI is accelerating rapidly," and "We are just beginning a ten-year AI supercycle."

That year, Su also mentioned a detail often overlooked: “Last year (2023), we launched the MI300X series with a leading inference advantage.”

This indicates that Su had recognized the value of inference early on, and the subsequent developments reflect her foresight—by 2025, inference became a frequent topic, with Nvidia even acquiring the $20 billion inference chip supplier Groq.

AMD’s turning point came in 2025.

In June 2025, at the “Advancing AI” event in San Jose, Su made a bold announcement: AMD’s MI350 series (led by the MI355X) had begun shipping, with inference performance 35 times faster than the previous generation.

Lisa Su introduces AMD’s MI350 series chips at Advancing AI

At this event, Su revised her market size expectations.

Previously, she had predicted the global AI processor market would reach $500 billion by 2028. By mid-2025, she expected that threshold to be crossed even sooner.

“People used to think $500 billion was a huge number,” she said after her speech. “Now, it seems within reach.”

Her new forecast is less aggressive than the capital expenditure plans of Silicon Valley giants, which aim to exceed $600 billion by 2026. However, Su’s bold claim is that the MI400 series debuting in 2026 will significantly surpass Nvidia.

Lisa Su introduces AMD’s MI400 series chips at Advancing AI

“When this series (MI400) first launches, AMD will be clearly ahead of Nvidia in existing technology.”

At that time, OpenAI CEO Sam Altman appeared on stage with her. He remarked that the initial specs of MI400 were overly ambitious, to the point that he thought it was “impossible.” He confirmed that OpenAI was collaborating with AMD on developing the MI450 chip.

Their partnership extended beyond chip R&D—remarkably, AMD and OpenAI exchanged equity for orders. But that’s a story for another time.

At CES 2026, Su officially announced the MI450 series. She described it as combining the strengths of the MI300X and MI350, a stepwise leap in performance, using HBM to expand memory capacity, bandwidth, and compute power and break through the "memory wall" in AI inference.

The real highlight is the MI455X.

Su announced that the MI455X offers ten times the performance of the MI355X. It powers AMD's latest open server platform, the 72-GPU "Helios."

Lisa Su introduces the new AI computing rack Helios at CES 2026

Helios features 18 compute trays, each equipped with one Venice CPU and four MI455X GPUs. The Venice CPU is built on 2nm process technology, with 4,600 cores; the MI455X GPU uses 3nm process, with 18,000 cores. The entire system includes 31TB of HBM4 memory and 43TB/s bandwidth, delivering 2.9 Exaflops of FP8 compute.

AMD emphasizes that Helios is an open rack platform paving the way toward yotta-scale computing. (Note: 1 yottaFLOPS = 1,000,000 exaFLOPS, or 10^24 floating-point operations per second.)
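As a rough sanity check, the per-GPU figures can be back-calculated from the rack totals quoted above. These splits are illustrative derivations from the article's numbers, not official per-GPU specifications:

```python
# Back-calculating per-GPU figures from the Helios rack totals quoted
# in the article. Illustrative only; not official specs.

trays = 18
gpus_per_tray = 4                       # one Venice CPU + four MI455X per tray
gpus = trays * gpus_per_tray            # 72 GPUs per rack

rack_hbm_tb = 31                        # total HBM4 capacity (TB)
rack_fp8_exaflops = 2.9                 # total FP8 compute (exaFLOPS)

hbm_per_gpu_gb = rack_hbm_tb * 1000 / gpus            # ~430 GB per GPU
fp8_per_gpu_pflops = rack_fp8_exaflops * 1000 / gpus  # ~40 PFLOPS per GPU

# Racks needed to reach one yottaFLOPS (= 1,000,000 exaFLOPS):
racks_for_yotta = 1_000_000 / rack_fp8_exaflops       # ~345,000 racks
```

At 2.9 exaFLOPS per rack, yotta-scale computing would require on the order of 345,000 Helios racks, which gives a sense of how far that milestone still is.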

In AMD's roadmap, the MI500 series, due in 2027, will be based on the CDNA 6 architecture, manufactured on an advanced 2nm process, and equipped with high-speed HBM4e memory. Su stated that the MI500 will deliver another major leap in AI performance, powering the next wave of large-scale multimodal models.

“In the next four years, we aim for a 1,000-fold increase in AI performance,” she said, though this time she did not specify that this refers solely to inference performance.

Plan B, Plan C


“The future of AI will not be built by any single company or within a closed ecosystem,” she predicted, “but through open collaboration across the industry.”

The most tangible example of open collaboration is industry alliances.

Opportunities stem from diversified compute procurement strategies among leading large-model developers— from OpenAI to Google and Meta. These companies not only purchase Nvidia products but also include AMD in their supply chains, effectively buying all available compute power on the market. If that’s still not enough, they develop their own systems.

In October 2025, AMD signed a 6 GW GPU supply agreement with OpenAI, using multiple generations of AMD Instinct GPUs to support OpenAI’s next-generation AI infrastructure. The first 1 GW deployment will use AMD Instinct MI450 GPUs, expected to start in late 2026.

What’s unique about this deal is that AMD granted OpenAI warrants to purchase up to 160 million AMD common shares at $0.01 per share.

The warrants will vest in tranches based on milestones: the first upon completing the initial 1 GW deployment, with subsequent tranches as procurement scales to 6 GW. Vesting is also tied to AMD reaching certain stock price targets and OpenAI achieving technical and commercial milestones.

Partnerships between compute providers and large model clients are seen as “equity-for-orders” cycles, enabling ongoing financing.

Five months later, AMD nearly replicated this partnership model with Meta.

In February 2026, AMD and Meta announced a 6 GW agreement to support Meta’s next-generation AI infrastructure with multiple AMD Instinct GPU generations.

Support for the initial gigawatt deployment is expected to begin in late 2026, using AMD Instinct MI450-series GPUs together with sixth-generation AMD EPYC CPUs (codenamed "Venice"), running ROCm software and built on the AMD Helios rack architecture.

AMD granted Meta performance-based warrants to purchase up to 160 million AMD shares, roughly 10% of the company, similar to the OpenAI deal.

Chip analyst Ben Bajarin estimates that this agreement is worth hundreds of billions of dollars over at least four years, given that deploying 6 GW takes considerable time.

Su later said in an interview that this warrant structure is a “win-win” for shareholders, supporting an “ambitious” plan and financial model. She considers this one of AMD’s most transformative deals in expanding AI capabilities.

At the Morgan Stanley conference on March 3, Su further explained the rationale behind issuing warrants. She said warrants can accelerate purchasing activity and help build AMD’s ecosystem. The value of warrants is performance-based, motivating both companies to help each other meet targets.

Su emphasized that such deals accelerate procurement and also boost the development of the broader AMD ecosystem, not just with Meta but across the board.

Who might Su invite next with this "equity-for-orders" approach? Could it be Anthropic?

If AMD+OpenAI and AMD+Meta are seen as "realistic" Plan B options, then Su also has a future Plan C: "Investment Bank AMD."

AMD’s venture arm, AMD Ventures, made its most notable investment in May 2025, co-investing in AI cloud startup TensorWave, mirroring Nvidia’s investment in CoreWeave.

Frequent ecosystem investments are on AMD Ventures' to-do list, including multiple rounds into Fei-Fei Li's World Labs, as well as investments in Transformer challenger Liquid AI and other innovative teams.

Additionally, publicly available information shows AMD has participated in Series D and E funding rounds for an optical AI chip company.

AMD’s investment “circle of friends”

According to AMD’s official website and public reports, its investment portfolio also includes AI drug discovery firm Absci, data annotation platform Scale AI, generative video AI company Runway, and multimodal model firm Luma AI, among others.

From foundational compute to mid-layer and top-layer applications, from base models to vertical-specific models, Su’s Plan C around Instinct GPUs sketches out AMD’s future ecosystem landscape.

Supply Chain Warfare


Capacity has always been a core issue in the semiconductor industry, involving both high-volume foundry capacity and key component supplies.

Simply put, products are ready, customers are signed, but without sufficient capacity, everything is moot.

Chip foundry capacity mainly refers to advanced CoWoS packaging. As we disclosed in January, TSMC’s CoWoS capacity in 2026 is expected to be around 1.15 million wafers, with AMD reserving about 8%, roughly 90,000 wafers.

At the March Morgan Stanley conference, a reporter asked Su whether CoWoS capacity was sufficient. She replied: “We definitely have enough CoWoS capacity. I know many are verifying this. The best answer I can give is: we have the capacity, the technology, strong customer relationships, and data center providers have allocated space for us.”

Estimating from a single MI400 package size of 4,800 mm², one wafer yields about 8 to 10 chips. Taking the upper bound of 10 chips per wafer (and noting that not all of AMD's allocation will go to the MI400), AMD could produce up to roughly 900,000 MI400 chips in 2026, which translates to about 12,500 Helios racks.

900,000 MI400 chips means AMD will need at least 10.8 million HBM stacks this year.
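The chain of estimates above can be laid out as a short back-of-envelope calculation. The inputs are the article's own figures; the 12 HBM stacks per GPU is not stated directly but is inferred from the article's totals (10.8 million divided by 900,000):

```python
# Back-of-envelope capacity math from the figures above. The
# stacks-per-GPU value is inferred from the article's totals,
# not an official spec.

amd_wafers = 90_000          # AMD's ~8% share of 2026 CoWoS capacity
chips_per_wafer = 10         # upper bound for a 4,800 mm^2 package
gpus_per_rack = 72           # one Helios rack
stacks_per_gpu = 12          # inferred: 10.8M HBM stacks / 900k chips

max_chips = amd_wafers * chips_per_wafer     # 900,000 MI400 chips
racks = max_chips // gpus_per_rack           # 12,500 Helios racks
hbm_stacks = max_chips * stacks_per_gpu      # 10,800,000 HBM stacks
```

Every step of the chain (wafers to chips to racks to HBM stacks) is a straight multiplication, which is why a shortfall at any single supplier, whether TSMC on CoWoS or Samsung on HBM, caps the whole output.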

With OpenAI and Meta’s combined orders totaling 16 GW, the stability of HBM supply directly impacts delivery. The problem is, Samsung, a key supplier, is also Nvidia’s HBM supplier. Locking in ideal capacity is more challenging than ever.

In this context, Su plans to visit South Korea soon.

South Korean media reports that Su will visit Korea on March 18, her first visit in over a decade as CEO. Insiders say she may meet Samsung Electronics Chairman Lee Jae-yong and Naver CEO Choi Soo-yeon to discuss data center collaborations.

Industry analysts expect Su to request Samsung to increase high-bandwidth memory supply. Samsung announced last month that it has begun mass production of HBM4, claiming it’s the first in the world to ship this advanced AI accelerator chip.

Besides HBM, supply of DRAM and NAND flash is also tight.

Amid global storage chip shortages, securing capacity for AMD’s server lineup is urgent. There are reports that AMD is exploring wafer foundry collaborations with Samsung, including discussions on 2nm EPYC Venice CPUs.

At the Morgan Stanley conference, Su also addressed the memory market.

“We’ve been planning with suppliers years in advance. We’ve prepared for the volume ramp of MI450 and the switch to HBM4. Regarding HBM supply, we feel good,” she said.

But she also acknowledged that the memory market is experiencing ripple effects. The pricing of DDR4, DDR5, and consumer memory chips is pushing up overall system costs.

Meeting with Samsung’s leadership concerns capacity, while meeting Naver’s CEO relates to market strategy.

Naver is executing a clear “multi-supplier” strategy, gradually reducing dependence on Nvidia. As a key player in East Asia’s server market, Naver’s shifting demands open opportunities for AMD.

Industry insiders believe Naver is diversifying its supply chain to optimize infrastructure, increasing verified AMD accelerators in its second data center, aligning with AMD’s Korea expansion goals.

Su's Korea visit coincides with Nvidia's GTC 2026. Nvidia still holds a dominant 90% market share; Su and AMD are gradually chipping away at it, but doing so relies heavily on agile moves on the invisible battlefield of the supply chain.

Su’s straightforward approach: fly there personally, meet face-to-face, lock in capacity.

In the Q4 2025 earnings call on February 4, Su summarized AMD’s 2025: revenue, net profit, and free cash flow all hit record highs. Data center revenue rose 39% YoY to $5.4 billion.

“2025 was an excellent year for AMD, marking the start of a new growth trajectory,” she said. “We are entering a supercycle of high-performance and AI computing, creating huge growth opportunities across our business.”

Last November, at the AMD Investor Day, Su set targets: a CAGR above 35% over the next three to five years, significantly wider profit margins, and earnings per share above $20 within the strategic timeframe.

“All driven by growth across our divisions and rapid expansion of data center AI,” she said.

Unsurprisingly, the main engine of growth and expansion will be the data center business. To achieve this, AMD must do several things simultaneously: deepen ties with major Silicon Valley clients, invest in future technologies, build a resilient supply chain ecosystem, and always prioritize product performance and stability.

Only then can Su lead the company to gradually push forward the “de-Nvidia-ization” of the data center market.

Source: Tencent Tech

