AMD Unveils MI350 AI Chips and Cloud Access to Tackle Nvidia’s Dominance
New AI Hardware and Developer Platform Signal AMD’s Bold Push into the AI Race
Advanced Micro Devices (AMD) is intensifying its challenge in the artificial intelligence sector with the official launch of its MI350 series of AI accelerators, alongside a new cloud-based development environment aimed at researchers and developers. Both announcements were made at the company’s “AI Forward” event in San Jose, California.
MI350 Chips Promise Higher Performance and Expanded Memory
The MI350 lineup, which features the MI350X and MI355X chips, marks AMD’s latest effort to take on Nvidia’s Blackwell architecture, widely considered the industry leader in AI GPU design.
According to AMD, these new chips offer dramatic boosts in performance, including:
- Up to 4x faster AI training speed than their predecessors
- Massive gains in inference efficiency, making them ideal for powering large-scale language models and computer vision systems
Each chip includes 288GB of HBM3E high-speed memory, exceeding the base capacity of Nvidia’s single-chip configurations. However, Nvidia’s high-end designs such as the GB200 superchip pair two GPUs together, giving those systems an edge in total memory capacity.
AMD’s new chips can be deployed individually or in eight-GPU compute platforms that deliver 2.3TB of unified memory. Depending on configuration, these systems can be air-cooled or liquid-cooled, suiting a wide range of data center environments.
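The platform memory figure follows directly from the per-chip capacity. A quick arithmetic check, assuming the 288GB per GPU and eight GPUs per platform stated above:

```python
# Sanity check on the eight-GPU platform memory figure.
# Assumes 288 GB of HBM3E per MI350-series GPU and eight GPUs
# per platform, as reported in the announcement.

GB_PER_GPU = 288
GPUS_PER_PLATFORM = 8

total_gb = GB_PER_GPU * GPUS_PER_PLATFORM  # 2304 GB
total_tb = total_gb / 1000                 # decimal terabytes

print(f"{total_gb} GB = {total_tb:.1f} TB of unified memory per platform")
# → 2304 GB = 2.3 TB of unified memory per platform
```

Rounded to one decimal place, 2,304GB matches the 2.3TB figure AMD quotes.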
AMD Looks Ahead with MI400 GPUs
In addition to the MI350 launch, AMD shared a glimpse into the MI400 series, its next-gen GPU architecture scheduled for release in 2026.
Highlights of the MI400 chips include:
- Up to 432GB of HBM4 memory
- Memory throughput reaching 19.6 terabytes per second
These chips are expected to go toe-to-toe with Nvidia’s future products like the Blackwell Ultra (GB300) and the Rubin GPU line, which will push the boundaries of AI model training capabilities even further.
AMD Developer Cloud Offers On-Demand Access to AI Hardware
AMD’s cloud ambitions were also front and center at the event, with the debut of the AMD Developer Cloud, a platform designed to provide instant access to AMD’s AI accelerators.
Developers can now:
- Connect remotely to AMD-powered GPU clusters
- Test, train, and deploy AI models without purchasing physical hardware
- Scale their workloads using high-performance MI300 and MI350 chips
This offering opens doors for smaller companies and research teams that want to use AI hardware without committing to expensive infrastructure. It also places AMD in direct competition with cloud services offering Nvidia-based GPUs.
AMD Faces Market Pressure Despite Technological Gains
Despite these technological leaps, AMD’s stock performance has been underwhelming compared to Nvidia’s. Over the last year:
- AMD shares have fallen about 24%
- Nvidia shares have climbed more than 19%
As of mid-2025, AMD remains flat for the year, while Nvidia continues to post gains. A significant contributor to this disparity is the impact of U.S. export restrictions on AI chips sold to China. AMD has projected an $800 million revenue loss due to the policy, while Nvidia’s losses are expected to exceed $8 billion in its current quarter.
The Bigger Picture: A High-Stakes Battle in AI Infrastructure
With the release of MI350 and the preview of the MI400 line, AMD has made it clear that it’s not backing down in the competition for dominance in AI infrastructure. The company’s hardware innovations, combined with the accessibility of its new cloud services, suggest that AMD is strategically positioning itself to close the gap with Nvidia.
As generative AI, large-scale training, and inference tasks become the backbone of modern computing, the fight for GPU supremacy will only intensify. AMD is betting big that its performance-driven chips and flexible access models will attract a growing segment of the AI development market.