Turbo Quant isn't the RAM crisis savior you're hoping for, analysts say — as memory prices continue to look bleak | TechRadar
There's sadly no super-speedy 'turbo' fix for the memory crisis…
When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.
Is Turbo Quant a silicon bullet to solve the RAM crisis? No, it isn't, and if you were hoping that the compression algorithm that Google recently announced would be a major turning point for AI memory-related woes, I'm afraid you might have to think again.
Sadly, for me, the RAM crisis remains a towering specter that'll likely continue to be a considerable blight on the PC landscape for a long time to come yet. Certainly, a good deal of online opinion has crystallized around the notion that Google hasn't got an ace up its sleeve with Turbo Quant.
Need a quick Turbo Quant refresher? AI in the form of LLMs (Large Language Models) is a huge RAM hoover, as we've seen, and that demand for memory has been a major driver of the current crisis. What Google's Turbo Quant does is reduce the memory use of AI – and not just by a little, but by a huge amount in one specific area: key-value (KV) cache memory usage.
That's reduced by a factor of six, in fact, and this cache is the LLM's short-term memory (to store the ongoing conversation, and give context in future replies), so it's an important breakthrough on the face of it. The compression used in the tech shouldn't degrade the quality of the output (answers to queries) noticeably, either.
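To put that factor of six into perspective, here's a rough back-of-the-envelope sketch using the standard KV cache sizing formula – note that the model parameters below are made up for illustration, and are not Google's figures or details of any real LLM:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value):
    # Both keys and values are cached for every token, hence the factor of 2.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# Hypothetical mid-size LLM: 32 layers, 8 KV heads, 128-dim heads,
# a 32,000-token conversation, 16-bit (2-byte) cached values.
baseline = kv_cache_bytes(32, 8, 128, 32_000, 2)
compressed = baseline / 6  # the factor-of-six reduction claimed for Turbo Quant

print(f"Baseline KV cache:     {baseline / 2**30:.1f} GiB")
print(f"After 6x compression:  {compressed / 2**30:.1f} GiB")
```

On those (invented) numbers, a single long conversation's short-term memory shrinks from roughly 4GB to well under 1GB – multiply that across thousands of simultaneous users in a data center and it's easy to see why investors took notice.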
In theory, then, with Turbo Quant, an AI could keep performing at the same level while using just a sixth of the memory resources it did previously without Google's tech. That's why when Turbo Quant was unveiled on March 24, the stock price of memory makers really tanked for a while, as it was seen by investors as a potential big hit to the future profits of those companies.
Okay, that's all well and good, but as this report in the Korea Times makes clear, analysts feel differently to Google (and indeed those investors) about the impact that Turbo Quant might have, and what it'll mean for AI and RAM supply more broadly.
An analyst for Samsung Securities, Lee Jong-wook, observed that: "There have been efforts to improve AI models to optimize chip usage, but more efficient models tend to lower overall costs and, in turn, drive greater demand for AI computing. Rather than reducing semiconductor demand, such optimized models are being used to deliver higher-performance AI services with the same chip resources."
In other words, we won't see less RAM being used in AI data centers thanks to Turbo Quant, but memory will continue to be gobbled up at the same rate, with the LLMs getting better performance instead. That improved performance will be preferred to achieving better efficiency for models (and any potential cost savings therein).
After that, better AI will drive more people to use that AI, and make these LLMs more accessible, creating more demand to satiate, requiring yet more data center resources (including memory).
Lee observes: "As long as AI companies compete on performance rather than cost, optimization will not weigh on semiconductor demand." And in the current climate, where the AI bubble is still expanding, and competition between the giant LLMs out there is fierce, the prevailing opinion is that tech like Turbo Quant is not going to ease the pace of RAM consumption with the big AI players.
Another analyst, Kim Rok-ho of Hana Securities, adds some further thoughts: "Compression technologies are not new, and it remains uncertain whether they will be widely adopted across the [AI] industry. Even if such technologies become more widely used over the mid to long term, it will lower memory cost barriers, expanding overall AI use. There are limited chances of decline in demand for DRAM and storage."
The fact that Turbo Quant isn't alone, and similar tricks have been tried with AI over the past couple of years, is a good point. Yes, Google is claiming it has something very different here – in terms of the tech not lessening the quality of the AI's output – but that remains to be seen in action.
There's plenty of skepticism on Reddit about how Google has spun or hyped Turbo Quant, and the reality of the claims made about the tech. And of course, it's still just research at this point, and whether it'll be realized for deployment on a large-scale basis, well, only time will tell.
And again, Kim comes back to the theory that even if Turbo Quant does become widely adopted, it's going to drive more AI usage, as opposed to driving down the amount of RAM used by LLMs.
Even if you accept the arguments that Turbo Quant is not the panacea for AI-driven RAM shortages – and I find them persuasive myself – you might still point hopefully to other recent developments that suggest the worst of the memory crisis might just be over. Unfortunately, I feel that these hints are red herrings, too.
I'm mainly thinking of another analyst firm, this time TrendForce, which recently published a report on DDR5 RAM price drops in the US, Europe and Asia. While prices do appear to be easing currently, this is more about price tags reaching a level so ridiculous that consumers simply fold their arms and flat-out refuse to buy than it is about any improvement in supply or bolstered stock.
Granted, it's good to see some downward movement in retail prices – I'm not against that, obviously – but nothing's changing regarding the contract prices of RAM from the big memory manufacturers, which suggests the overall picture remains much the same. As TrendForce acknowledges, viewed through that broader lens, this positivity is a "consumer-driven, short-term adjustment" rather than the start of a full-on turnaround.
Another optimistic nugget was laptop maker Framework being able to keep cost increases to a minimum with memory-related hikes this month. However, the company observed that "all indications are that this is a temporary reprieve and that we'll continue to see volatility and cost increases through the rest of 2026".
I don't doubt that, frankly, when we also see the prices of GPUs – which were already expensive – climbing again due to the rising cost of video RAM. Or nasty hikes with gaming laptops due to memory and storage price increases, or Mac computers with painfully long lead times for delivery reportedly thanks to the memory crisis, or… you get the idea.
This just isn't going away, and the predictions that we won't see any meaningful improvement in the well-off-kilter balance of RAM supply and demand until 2028 feel just as entrenched as they were a month or two ago — with Google's Turbo Quant seemingly unlikely to ride in and save the day.
➡️ Read our full guide to the best laptops
- Best overall: Apple MacBook Air 13-inch M4
- Best budget: Asus Chromebook CM14
- Best Windows 11 laptop: Microsoft Surface Laptop 13-inch
- Best gaming: Razer Blade 16
- Best for pros: MacBook Pro 16-inch (M4 Pro)
Darren is a freelancer writing news and features for TechRadar (and occasionally T3) across a broad range of computing topics including CPUs, GPUs, various other hardware, VPNs, antivirus and more. He has written about tech for the best part of three decades, and writes books in his spare time (his debut novel - 'I Know What You Did Last Supper' - was published by Hachette UK in 2013).
Tech Radar is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site.
© Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York, NY 10036.