
Computing

AMD: An AI Hardware Push to Challenge Nvidia’s Dominance


Securities.io maintains rigorous editorial standards and may receive compensation from reviewed links. We are not a registered investment adviser and this is not investment advice. Please view our affiliate disclosure.

Summary:
  • AMD is expanding its AI presence: The company now competes across GPUs, CPUs, FPGAs, and edge AI hardware instead of relying on one segment.
  • Hyperscalers want alternatives to Nvidia: Large cloud companies are increasingly diversifying AI hardware suppliers.
  • The company’s role in AI is evolving: AMD is positioning itself as a broader AI infrastructure provider rather than just a GPU competitor.

As the AI boom keeps going, so do Nvidia’s (NVDA +2.52%) fortunes in the stock market, making it the world’s largest company by market capitalization.

But this was not always so. Not long ago, Nvidia was simply a GPU (Graphics Processing Unit) company, making a type of computing hardware specialized in rendering graphics.

GPUs are specialized for performing thousands of simpler calculations in parallel, rather than the fewer but more complex calculations a CPU (Central Processing Unit) handles. It turns out that this parallel capability was essential to crypto mining and AI, hence Nvidia’s success.
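That distinction can be sketched in a few lines of Python, using NumPy’s vectorized operations as a stand-in for the data-parallel style a GPU executes in hardware (the array sizes and the dot product itself are purely illustrative):

```python
import numpy as np

# CPU-style: one multiply-add at a time, in strict sequence.
def scalar_dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(1_000_000, dtype=np.float64)  # 0, 1, ..., 999999
b = np.ones(1_000_000, dtype=np.float64)

sequential = scalar_dot(a, b)

# GPU-style: the same dot product expressed as one data-parallel
# operation over the whole array; a GPU applies the multiplies to
# thousands of elements simultaneously.
parallel = float(np.dot(a, b))

assert sequential == parallel  # same result, very different execution model
```

The vectorized form computes the same number; the difference is that it is expressed as a single operation over the entire array, which parallel hardware can spread across thousands of cores.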

However, Nvidia and its GeForce series never had the GPU market to themselves; the company has always had to contend with competition from AMD and its Radeon GPUs, even if AMD never held as high a market share.

AMD was slower than Nvidia to embrace the use of GPUs for non-graphical applications, which cost the company a potential leadership position in the early days of AI, when the race was still on to determine which hardware to use.

However, the AI hardware market is now maturing, with hyperscalers looking for alternatives to Nvidia’s hardware, be it novel AI-focused chips like TPUs and XPUs, or an alternative supply of AI-dedicated GPUs.

As such, AMD is now positioned to catch up, and its current market capitalization, less than 1/10th of Nvidia’s, may not reflect its potential to once again become a serious rival to the GPU-making leader.

Advanced Micro Devices, Inc. (AMD +2.81%)

AMD Company History and Evolution

Advanced Micro Devices, or AMD, was founded in 1969, largely by disgruntled employees of Fairchild Semiconductor, a company that was a pioneer in the manufacturing of transistors and integrated circuits.

The company started with the production of logic chips, then entered the RAM market in 1971 and the microprocessor market in 1975. It was the 2006 acquisition of graphics company ATI Technologies for $4.3B that brought AMD into the high-performance GPU (Radeon) market.

To this day, AMD is present in both the CPU market, competing with the likes of Intel (INTC +4.46%), and the GPU market, competing with Nvidia.

In the 2020s, it also acquired Xilinx for a record-breaking $49B, followed in 2024 by a $4.7B acquisition of data center hardware company ZT Systems and a $665M acquisition of Silo AI, the largest private AI lab in Europe, to strengthen its position in AI, data centers, and embedded computing.

“Xilinx offers industry-leading FPGAs, adaptive SoCs, AI inference engines and software expertise that enable AMD to offer the strongest portfolio of high-performance and adaptive computing solutions in the industry and capture a larger share of the approximately $135 billion market opportunity we see across cloud, edge, and intelligent devices.”

Dr. Lisa Su – Chair & CEO at AMD

So AMD has been an essential part of Silicon Valley history for more than half a century, growing through a mix of internal R&D and key strategic acquisitions that are today essential to the company’s strategic position.

AMD By The Numbers

General AMD Stats

AMD employs around 31,000 people and is headquartered in Santa Clara, California, with major operations in Austin, Texas. Outside the USA, the company recently expanded with a new 209,000 sq. ft. engineering lab in Penang, Malaysia, and maintains a significant facility in Markham, Ontario, for a total of 100 office locations across 32 countries.

Like Nvidia, AMD is a “fabless” chip maker, focused on design, with TSMC (TSM +1.17%) its key partner for advanced nodes (2-3nm) and GlobalFoundries for older designs.

In March 2026, the company also expanded its partnership with Flex to manufacture AMD Instinct MI355X AI platforms at Flex’s 1.4 million-square-foot facility in Austin, Texas.

AMD’s Financials

In 2025, AMD controlled 36.5% of the CPU market, but its share of the PC GPU market declined to just 5% (more on that topic below). Overall, AMD holds an estimated 28% revenue share in the client PC market (up from 20% in 2024) and targets 40% over the next 3–5 years.

AMD generated $34.6B in revenues in 2025, up 34% from the previous year, with a net income of $2.5B, up 42% year-over-year. Growth was driven by the data center, client, and gaming segments. Data center was the largest revenue generator, with $16.6B in revenues (up 32%), followed closely by client & gaming at $14.5B (up 51%).

AMD Current Business Position

AMD is currently present in most of the key markets for high-end semiconductor products, including CPUs, GPUs, and specialized semiconductors for industries like the automotive, automation, and robotics sectors.

Source: AMD

The company’s strategy has recently focused mostly on AI, which is no surprise, as the same could be said of almost anyone in the industry over the past three years.

To win the race to provide enough of the right kind of AI hardware, AMD is focused on growing in the data center segment, including rack-scale solutions: an integrated option combining a complete set of matched CPUs, GPUs, FPGAs (Field Programmable Gate Arrays, or custom digital logic circuits), packaging, and networking.

It is also making a concentrated effort in edge AI (AI computed on site instead of in the cloud and data centers) and adaptive AI custom platforms, notably hardware for AI agents (see more below).


| Category | AMD Position | Why It Matters |
| --- | --- | --- |
| AI GPUs | Instinct accelerators target AI training and inference in data centers. | Direct competition with Nvidia in hyperscaler infrastructure. |
| Server CPUs | EPYC processors compete strongly with Intel in data center servers. | CPUs orchestrate AI workloads and manage large data pipelines. |
| Adaptive Computing | Xilinx technology provides FPGAs and adaptive SoCs. | Useful for specialized AI workloads and edge deployments. |
| Edge AI | Ryzen AI and embedded processors enable on-device AI computing. | Important for robotics, industrial systems, and AI PCs. |
| Market Dynamics | Cloud providers increasingly seek second-source AI hardware. | Supplier diversification may benefit AMD’s long-term growth. |

AMD’s Strategy For Future Growth

AMD’s Strategy: Energy-Efficient AI Hardware

As mentioned before, AMD has made a few key acquisitions in the past few years, such as Xilinx, ZT Systems, and Silo AI, to improve its position in the AI market. As a result, while it is still working to claw back its position in GPUs, it is already a serious player in data center racks, FPGAs, Adaptive SoCs (System-on-Chip), and embedded markets.

This presence is important, as FPGAs, SoCs, and similar hardware are being reconsidered for AI computation. They may not be as powerful, but they are far more efficient, requiring much less energy for the same amount of computing performed.

As AI data center deployments are increasingly slowed not by a lack of hardware but by energy supply, hardware efficiency may become a growing concern and favor different designs than the GPU-heavy approach adopted so far.
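A rough illustration of why efficiency matters: when a site’s power budget is fixed, performance per watt, not per-chip performance, sets total throughput. Every figure below is hypothetical, chosen only to make the arithmetic visible:

```python
# All numbers are hypothetical, for illustration only.
site_power_watts = 1e9                 # a 1 GW data center power budget (assumed)

gpu_watts, gpu_perf = 1000, 1.0        # big GPU-class chip: relative perf 1.0 (assumed)
soc_watts, soc_perf = 200, 0.5         # efficient FPGA/SoC-class chip (assumed)

# The power budget caps how many chips can run at once, so per-chip
# efficiency (perf per watt) determines total site throughput.
gpu_throughput = (site_power_watts / gpu_watts) * gpu_perf
soc_throughput = (site_power_watts / soc_watts) * soc_perf

print(gpu_throughput, soc_throughput)  # 1000000.0 2500000.0
```

Under these made-up numbers, the individually weaker but more efficient chips deliver 2.5x the aggregate throughput from the same power envelope, which is the argument being made for FPGA- and SoC-style designs.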

AMD Struggling With GPUs?

AMD has long been known to PC gamers as a viable, cheaper alternative to Nvidia GPUs, albeit one slightly lagging in performance.

However, AMD has slowly lost ground to Nvidia in this market, hitting a new low at the end of 2025, when AMD’s GPUs made up only 5% of total sales of add-in-board (AIB) graphics cards.

This was partly due to constrained supply, with AMD’s latest Radeon RX 9000-series GPUs not available in sufficient quantities early in their lifecycle, leading to a somewhat failed launch.

In addition, ever-expanding demand from AI hyperscalers has driven GPU prices up, putting them out of reach of most PC users, while the price of other PC components like memory has also exploded.

Overall, the desktop graphics card market is expected to decline by 10% year-over-year.

“The AIB market, largely supported by gamers, is being squeezed from the bottom by powerful new notebooks and CPU integrated graphics, and from the high end by rising pricing due to competition (supply and demand), memory prices, and Trump administration tariffs that bounce around,”

Dr. Jon Peddie – President of Jon Peddie Research.

As AMD offers no high-end GPU this generation, the most dedicated customers, those willing to pay for top-performing GPUs no matter the price, are ignoring AMD entirely.

However, discrete GPU sales do not fully reflect AMD’s position in this market. For example, AMD controls a significant part of the integrated GPU market, as nearly all Ryzen desktop processors carry an iGPU, with graphics hardware built into the CPU already handling a large share of graphics-related calculations.

Source: TechPowerUp

So for most PC users, a decently priced CPU that lets them avoid overpriced GPUs entirely is an attractive option, one that has let AMD reinforce its position in the CPU market to the detriment of Intel.

Or as AMD puts it: “AMD Gives Consumers and Businesses More AI PC Options with Expanded Ryzen™ AI 400 Series Portfolio.” This means that discrete GPU sales, which are definitely not looking good for AMD at the end of 2025, are no longer a fully relevant metric for the sales of AI-enabled hardware, especially at the consumer level.

The AMD Ryzen AI 400 Series now enables users to run AI applications and LLMs locally and tackle compute-intensive applications, including those for design and engineering. It also contains a neural processing unit (NPU).

“The desktop PC is evolving from a tool you use to an intelligent assistant that works alongside you. With the Ryzen AI 400 Series processors – the world’s first designed to power new Copilot+ experiences on the desktop – we’re bringing powerful AI acceleration that enables our partners to build systems that empower both enterprises and consumers to do more and create more.”

Jack Huynh – Senior vice president and general manager of the Computing and Graphics Group at AMD

A Shifting AI Hardware Competitive Landscape

It is no secret that in the race to provide AI hardware to hyperscalers, the biggest winner turned out to be Nvidia. However, this success is also causing a lot of issues and potential future problems for the company.

For most of the semiconductor industry’s history, any given hardware type has ended up controlled by an oligopoly of a handful of large-scale designers and manufacturers, but never by a single monopoly.

The key reason is that if one part of the supply chain became a monopoly, it would give that company too much pricing power and control, so other companies with a similar enough skillset step in and provide much-needed competition.

And this is the case for AI hardware. On one hand, some of the largest hyperscalers like Google (GOOGL +0.84%) are now looking to produce their own AI hardware with TPUs (Tensor Processing Units). On the other hand, many of the largest AI companies that are not looking to build their own hardware are still wary of their oversized dependence on Nvidia and are looking for alternatives.

AMD’s Big Partnership Deals

In October 2025, AMD signed a chip-supply agreement with OpenAI covering 6 GW of computing capacity using AMD GPUs. This is part of a wider effort by OpenAI to diversify its suppliers across a massive 33 GW of compute commitments split between Nvidia (10 GW), AMD (6 GW), Broadcom (AVGO +1.92%) (10 GW for custom AI accelerators), and Oracle (ORCL +0.15%).

This will use the upcoming AMD MI450 chips — featuring 432 GB of HBM4 memory with nearly 20 TB/s bandwidth and up to 40 PFLOPS of FP4 compute per GPU.
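Taking those quoted figures at face value (they are pre-launch numbers, not confirmed specifications), a quick back-of-envelope calculation shows the compute-to-bandwidth ratio such a chip implies:

```python
# Quoted MI450 figures from the article (assumed, pre-launch numbers).
fp4_compute = 40e15       # 40 PFLOPS of FP4 per GPU
hbm_bandwidth = 20e12     # ~20 TB/s of HBM4 bandwidth, in bytes per second

# FLOPs the chip can perform per byte fetched from memory; workloads
# whose arithmetic intensity falls below this ratio end up
# bandwidth-bound rather than compute-bound.
flops_per_byte = fp4_compute / hbm_bandwidth
print(flops_per_byte)  # 2000.0
```

A ratio in the thousands of FLOPs per byte underlines why large memory capacity and bandwidth (here, 432 GB of HBM4) matter as much as raw compute for inference-heavy workloads.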

The deal represents up to $90B in cumulative hardware revenue potential. It also allows OpenAI to acquire up to a 10% stake in AMD, depending on how much of the computing capacity is built, locking the two companies into a very close relationship.

The MI450 chips also landed another major win in the form of a $100B deal with Meta for another 6 GW of compute capacity, using a custom version of AMD’s Instinct chip optimized for Meta’s workloads.

Source: DigWatch

Here, too, the rationale for Meta was to “diversify our compute,” according to Mark Zuckerberg. And here too, AMD issued the same amount of performance-based warrants (up to 160 million shares of AMD common stock), structured to vest as specific GPU shipment milestones to Meta are met, meaning Meta and OpenAI could together own up to 20% of the company in the future.
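The “up to 20%” figure can be sanity-checked with simple dilution math; the share count below is an assumption for illustration (roughly AMD’s recent shares outstanding), not a figure from the article:

```python
shares_outstanding = 1.6e9   # assumed AMD shares outstanding, illustrative only
warrant_shares = 160e6       # per partner, from the warrant terms

# Two partners (OpenAI and Meta) fully vesting, measured against the
# assumed current share count; ignores other dilution and buybacks.
combined_stake = 2 * warrant_shares / shares_outstanding
print(combined_stake)  # 0.2, i.e. the "up to 20%" ceiling
```

Under that assumed share count, 160 million shares per partner works out to roughly 10% each, consistent with the deal terms described above.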

“This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”

Dr. Lisa Su – Chair & CEO at AMD

Meanwhile, even the US Department of Energy is turning to AMD to build a $1B supercomputer aimed at harnessing fusion energy and developing new drugs to treat cancer.

“We’re going to get just massively faster progress using the computation from these AI systems that I believe will have practical pathways to harness fusion energy in the next two or three years. My hope is in the next five or eight years, we will turn most cancers, many of which today are ultimate death sentences, into manageable conditions.”

Energy Secretary Wright

Edge AI

Lastly, AI is slowly moving from ultra-compute-intensive generalized AI running in giant data centers to being performed for narrower tasks on the fly in localized hardware, a method called “edge computing”. This is especially important for physical AI in mobile robotics, self-driving cars, drones, wearables, industrial sites, etc.

For these tasks, lower but more efficiently delivered computing power is preferred.

For this purpose, AMD released in March 2026 its new Ryzen AI Embedded P100 Series processors, with up to 2x higher CPU core counts and up to 8x higher GPU performance, all on a single chip.

“The AMD Ryzen™ AI Embedded platform is a game changer for industrial and AI-driven applications at the edge. Our P100 based K4131-Px mITX will be equipped with four-core to 12-core APUs allowing us to offer customers an array of solutions that deliver high compute performance and AI acceleration in the same compact footprint.”

Thomas Stanik, senior sales & business development manager, Kontron

AI Agents and the Shift Toward CPU-Driven Inference

Progressively, generalist AI is being replaced by “AI agents”: more specialized AI models centered exclusively on a given task. After all, there is little need for the AI driving a car, cleaning a database, or moving a robotic arm to also write novels, offer psychological counseling, or generate images on demand.

Agentic AI is believed to rely more heavily on CPUs than full AI models do. As such, AI agents are likely to cause a resurgence in demand for CPU computing capacity, after years of GPUs dominating the headlines and sales growth numbers.

“Modern AI deployments depend on balanced systems. CPUs, GPUs, networking, and software each play distinct roles in delivering performance at scale. Within these environments, CPUs orchestrate workloads, manage memory and data movement, and support the enterprise applications that run alongside AI models in production.”

So while the era of mass training was centered on GPUs, the era of running AI to solve real-world problems (inference) might be more CPU-centered, which would benefit the leaders of this market: AMD and Intel.

The Investing Case For AMD

AMD is a less discussed and much less richly valued chipmaker than its eternal arch-rival in the GPU market, Nvidia. But it is quickly catching up in the AI data center market and has a strong advantage in AI inference, whether in the cloud or at the edge, as it benefits from a more diversified business, with a strong presence in CPUs and specialized semiconductors like FPGAs.

In addition, many hyperscalers are eager to diversify their AI chip suppliers, both because of repeated delivery delays from Nvidia and to mitigate the risk of any one actor becoming too much of a monopoly. While companies like Google may take producing AI hardware into their own hands, others like Meta and OpenAI are picking AMD and building long-term strategic partnerships, including through equity stakes in the company.

Finally, AMD will also benefit from the global shift of the AI industry from a GPU-centric approach to custom designs, more energy-efficient chips, and a larger role for CPUs, all sectors where AMD can outperform Nvidia or go toe-to-toe with Intel or Broadcom.

This changes the company’s profile from a profitable but lagging semiconductor designer to an emerging AI leader, while its market capitalization still mostly reflects the former.

(You can also read more about AI hardware in our dedicated report, as well as the reports covering AI hardware companies like Nvidia, Intel, and Broadcom)

Investor Takeaway

AMD provides exposure to the rapidly expanding AI hardware market through multiple technologies, including data center GPUs, EPYC server CPUs, adaptive chips from Xilinx, and edge AI processors. While Nvidia currently dominates AI accelerators, AMD’s diversified compute portfolio and growing hyperscaler partnerships could allow it to capture a meaningful share of future AI infrastructure spending.


Jonathan is a former biochemist researcher who worked in genetic analysis and clinical trials. He is now a stock analyst and finance writer with a focus on innovation, market cycles, and geopolitics in his publication “The Eurasian Century”.

Advertiser Disclosure: Securities.io is committed to rigorous editorial standards to provide our readers with accurate reviews and ratings. We may receive compensation when you click on links to products we reviewed.

ESMA: CFDs are complex instruments and come with a high risk of losing money rapidly due to leverage. Between 74-89% of retail investor accounts lose money when trading CFDs. You should consider whether you understand how CFDs work and whether you can afford to take the high risk of losing your money.

Investment advice disclaimer: The information contained on this website is provided for educational purposes, and does not constitute investment advice.

Trading Risk Disclaimer: There is a very high degree of risk involved in trading securities. This applies to trading in any type of financial product, including forex, CFDs, stocks, and cryptocurrencies.

This risk is higher with Cryptocurrencies due to markets being decentralized and non-regulated. You should be aware that you may lose a significant portion of your portfolio.

Securities.io is not a registered broker, analyst, or investment advisor.