
The RAMpocalypse is exposing a flaw in how we think about endpoints | TechRadar




For years, enterprise IT has followed a familiar pattern. Devices age, performance starts to lag, operating systems evolve, and a hardware refresh follows. The cycle became so routine that many organizations stopped questioning it. Replacing fleets every few years simply came to be seen as the cost of staying current.

That logic is much harder to defend in today’s market.

The rapid expansion of AI infrastructure is reshaping the global memory market in ways that now affect endpoint strategy. As suppliers prioritize memory for high-growth AI and data center demand, traditional DRAM pricing has become more volatile and endpoint costs have become harder to predict.


For IT leaders, that creates a serious budgeting problem. The price of a new PC is increasingly influenced by memory market pressures that have little to do with the day-to-day needs of the average employee.

This is why the current memory squeeze matters. It is not just making refreshes more expensive. It is exposing how much of the conventional PC lifecycle is based on habit rather than necessity.

For many organizations, the economics of refresh are starting to look out of balance. A new device may cost noticeably more than it did a year ago, yet still offer only marginal gains for users whose workloads are centered on browsers, collaboration tools, SaaS platforms, and virtual desktops.

That disconnect forces a different kind of question. Instead of asking whether a newer device is available, IT teams are asking whether a replacement is justified at all. If the user experience is already shaped mostly by cloud services and hosted applications, then buying more local hardware at inflated prices can start to look like a poor trade.

Rising memory costs are making it harder to default to wholesale replacement, and that pressure is encouraging a more grounded conversation about what employees actually need from an endpoint.

For many employees, the endpoint is no longer the primary place where work is processed. It is the place where work is accessed. Applications increasingly run in the browser. Files are stored in cloud environments. Desktops are delivered virtually.

In this model, the endpoint acts as a secure and reliable connection point rather than a standalone computing engine.


Once organizations recognize this shift, the logic behind hardware planning changes. The device on the desk does not need to carry the full burden of performance. In many cases, compute happens in the data center or the cloud, while the endpoint simply provides access.

What matters most is stability, security, connectivity, and a consistent user experience.

That realization reframes the refresh conversation. If the endpoint is primarily an access layer, it does not need to be replaced on a rigid schedule tied to traditional assumptions about local compute power.

This shift is making repurposing far more relevant than it was even a few years ago. Many older laptops, desktops and even aging thin clients are still capable of supporting modern work when they are used in a way that aligns with today’s computing model.

Instead of running heavy, resource-intensive operating systems locally, those devices can be paired with lightweight software and redeployed as thin clients. This approach extends the life of existing hardware while still providing users with secure, reliable access to virtual desktops, SaaS applications, and cloud environments.

The result is a more efficient use of resources. Devices that might otherwise be retired can continue to deliver meaningful value, particularly when compute is handled centrally rather than on the endpoint itself.

Thin clients as a buffer against market volatility

Thin and zero clients are often associated with simplicity and centralized management, but their relevance is growing in the current environment.

They reduce reliance on local components such as DRAM, which are subject to price swings and supply constraints. By shifting compute to centralized environments, organizations can insulate themselves from volatility in the memory market and avoid overpaying for incremental hardware gains.

This creates a more predictable cost structure and allows IT teams to align spending with actual workload requirements. Some users will still need full PCs, but many will not. Thin clients make it easier to match endpoint strategy to real usage patterns instead of applying a uniform refresh approach across the organization.
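As an illustration of that matching exercise, here is a minimal sketch of segmenting a fleet by primary workload and comparing a uniform refresh against a mixed thin-client approach. All device names, workload labels, and prices are illustrative assumptions, not real vendor data.

```python
# Hypothetical fleet-segmentation sketch. Device IDs, workload labels,
# and per-seat prices below are assumed placeholders for illustration.

FULL_PC_COST = 1200     # assumed refresh price per full PC (memory-inflated)
THIN_CLIENT_COST = 300  # assumed cost to repurpose a device as a thin client

# Workloads a browser/VDI-centric thin client can serve well.
THIN_CLIENT_OK = {"browser", "saas", "vdi", "collaboration"}

def plan_refresh(fleet):
    """Split a fleet into thin-client candidates and full-PC refreshes.

    fleet: list of (device_id, primary_workload) tuples.
    Returns (thin_client_ids, full_pc_ids, uniform_cost, mixed_cost).
    """
    thin, full = [], []
    for device_id, workload in fleet:
        (thin if workload in THIN_CLIENT_OK else full).append(device_id)
    uniform_cost = len(fleet) * FULL_PC_COST
    mixed_cost = len(thin) * THIN_CLIENT_COST + len(full) * FULL_PC_COST
    return thin, full, uniform_cost, mixed_cost

fleet = [
    ("pc-001", "browser"),
    ("pc-002", "vdi"),
    ("pc-003", "cad"),   # local compute still needed
    ("pc-004", "saas"),
]
thin, full, uniform, mixed = plan_refresh(fleet)
print(f"Repurpose as thin clients: {thin}; replace with full PCs: {full}")
print(f"Uniform refresh: ${uniform} vs mixed approach: ${mixed}")
```

Even in this toy example, only the one workload that genuinely needs local compute triggers a full-price purchase; the rest of the fleet sidesteps the memory-inflated refresh cost.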

Extending lifecycle without sacrificing experience

A common concern with delaying refresh cycles is that it will negatively impact user experience. That concern was valid when performance depended heavily on local hardware.

Today, cloud-delivered desktops and applications change that dynamic.

With DaaS and virtual desktop platforms such as Windows 365, Azure Virtual Desktop, Citrix, Omnissa or Parallels, performance is largely determined by the cloud environment rather than the endpoint itself. Users can access the same experience from a range of devices.

This allows organizations to extend lifecycle timelines without sacrificing productivity. It also gives IT leaders flexibility during procurement cycles that are affected by memory pricing and supply constraints.

Extending the life of existing hardware also has clear environmental benefits. Short refresh cycles increase e-waste and expand the carbon footprint associated with manufacturing and disposal.

Lifecycle analyses from Interzero and Fraunhofer UMSICHT show that reuse can reduce emissions by up to 37 percent. This makes repurposing a practical way to support sustainability goals while also controlling costs.
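To make that figure concrete, a back-of-the-envelope calculation can be sketched as follows. Only the 37 percent reuse reduction comes from the analyses cited above; the per-device manufacturing footprint and fleet size are assumed placeholders.

```python
# Illustrative emissions estimate. The 37% reuse reduction is the
# Interzero / Fraunhofer UMSICHT figure cited in the text; the per-device
# lifecycle footprint and fleet size are assumed placeholders.

DEVICE_FOOTPRINT_KG = 300.0  # assumed lifecycle CO2e per new device, in kg
REUSE_REDUCTION = 0.37       # up to 37% reduction through reuse (cited)

def reuse_savings(devices_reused):
    """Estimated CO2e avoided (kg) by reusing devices instead of replacing them."""
    return devices_reused * DEVICE_FOOTPRINT_KG * REUSE_REDUCTION

print(f"Reusing 500 devices avoids roughly {reuse_savings(500):,.0f} kg CO2e")
```

Under these assumptions, repurposing a 500-device fleet avoids on the order of 55 tonnes of CO2e, which is why reuse increasingly appears in sustainability reporting alongside cost figures.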

For many organizations, sustainability is no longer a secondary consideration. It is becoming part of the core decision-making process around endpoint strategy.

The pressure on memory supply is unlikely to ease quickly. AI demand continues to grow, and resource allocation will continue to favor high-value data center workloads.

In response, enterprise IT is moving toward a more flexible model. Instead of treating refresh cycles as fixed, organizations are evaluating actual needs, exploring repurposing opportunities, and adopting alternative endpoint approaches where appropriate.

The endpoint is still important, but its role has changed. It is no longer defined by local performance alone, but by how effectively it connects users to modern, cloud-driven work environments.

The RAMpocalypse may be creating short-term challenges, but it is also pushing organizations toward smarter, more efficient, and more sustainable ways of thinking about endpoint computing.

This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit



