Nvidia invests $2 billion in Marvell to integrate custom chips into its AI ecosystem and rack-scale NVLink Fusion network | TechRadar



'The inference inflection has arrived': Nvidia pumps $2 billion into chipmaker Marvell to boost its AI factories to the next level — so does this mean it'll be working with Amazon Trainium soon?

Nvidia expands AI ecosystem with $2 billion Marvell deal


  • Nvidia invests $2 billion to bring Marvell into the NVLink Fusion ecosystem
  • NVLink Fusion enables third-party accelerators to communicate with Nvidia GPUs efficiently
  • Marvell provides custom XPUs and scale-up networking for heterogeneous AI infrastructure

Nvidia has invested $2 billion in Marvell Technology and entered a strategic partnership that connects the custom chip designer to Nvidia’s AI factory ecosystem through NVLink Fusion.

NVLink Fusion enables third-party accelerators to communicate with Nvidia components over a high-bandwidth, low-latency interconnect, while maintaining compatibility with Nvidia’s rack-scale AI platforms.

The move integrates Marvell’s capabilities in high-performance analog, optical DSP, silicon photonics, and custom XPUs with Nvidia’s GPU, CPU, and networking infrastructure.


“The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories,” said Jensen Huang, founder and CEO of Nvidia.

“Together with Marvell, we are enabling customers to leverage Nvidia’s AI infrastructure ecosystem and scale to build specialized AI compute.”

NVLink Fusion was first launched in May 2025 as a platform for heterogeneous AI infrastructure.

It allows non-Nvidia accelerators to communicate with Nvidia GPUs over a high-bandwidth fabric.

Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking for the partnership.

Nvidia will supply Vera CPUs, ConnectX NICs, BlueField DPUs, and NVLink interconnect components.

Every NVLink Fusion platform must include at least one Nvidia product, which means even Marvell-designed custom ASICs still generate revenue for Nvidia.


"By connecting Marvell's leadership in high-performance analog, optical DSP, silicon photonics and custom silicon to Nvidia's expanding AI ecosystem through NVLink Fusion, we are enabling customers to build scalable, efficient AI infrastructure," said Matt Murphy, chairman and CEO of Marvell.

Marvell reported $8.2 billion in revenue for its fiscal year 2026, which ended January 2026, with data center revenue accounting for more than 74% of the total.

The company’s acquisition of Celestial AI late last year added photonic fabric technology to its portfolio, and this deal now places that capability inside Nvidia’s ecosystem.

The two companies will also collaborate on silicon photonics technology and on transforming the world’s telecommunication network into AI infrastructure using Nvidia’s Aerial AI-RAN for 5G and 6G networks.

Marvell is not the only company joining Nvidia's proprietary ecosystem, as Samsung Foundry joined the NVLink Fusion program in October last year.

Arm followed shortly after, entering the program in November and enabling its licensees to build NVLink-compatible CPUs for a wider range of applications.

However, not every major chipmaker has signed on, as Nvidia rivals AMD, Intel, and Broadcom remain notably absent from the program.

These competitors have instead chosen to back the open UALink standard as a competing rack-scale interconnect, creating a clear divide in the industry.

The absence of these rivals matters because Marvell already helps Amazon develop its Trainium series of AI accelerators, placing the company in an unusual position.

That existing relationship with Amazon predates this new partnership with Nvidia by several years, creating potential tension between the two deals.

The announcement from Nvidia and Marvell does not say whether any Trainium collaboration will emerge from this deal.

However, Nvidia's $2 billion investment successfully pulls a key custom silicon designer into a proprietary ecosystem where it controls the interconnect standard.


Efosa has been writing about technology for over seven years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's degree and a PhD in the sciences, which gave him a solid foundation in analytical thinking.



TechRadar is part of Future US Inc, an international media group and leading digital publisher.


