
Gemini 3.1 Pro vs Gemini 3 Pro: Google’s new AI is slower on purpose — and smarter for it | TechRadar


The new Gemini refines the model’s instincts and turns strong ideas into sharper execution


When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

AI model updates rarely deliver the wholesale improvements they did a couple of years ago, but Google's upgraded Gemini model, Gemini 3.1 Pro, promises a subtle yet substantial enhancement over the Gemini 3 Pro model.

Gemini 3 was a powerhouse comparable to the best of ChatGPT, with impressive multimodal abilities, but the 3.1 update represents a pivot toward deeper reasoning. It is not necessarily faster, and in some modes it is intentionally slower, taking a moment to chew on a problem before spitting out an answer.

The most noticeable differences between the two models lie in the hidden plumbing of their logic. Gemini 3 was often criticized for its tendency to rush toward a plausible-sounding answer. Gemini 3.1 includes a Deep Think mode that has seen its scores on complex benchmarks like ARC-AGI-2 skyrocket. The new model also boasts a native ability to handle Scalable Vector Graphics (SVGs) with a level of precision that allows it to write and animate the code directly.


To see how well it performed against its predecessor, I set up a few complex prompts ideal for the new model and tested them against Gemini 3 as well.

I first wanted to see how well the two models did on a complex bit of abstract reasoning. I came up with something beyond the usual sort of physics puzzle, so the models would have to think about gravity in new ways and build a consistent internal logic. I set it up as:

"In a fictional dimension, gravity works in reverse for liquids but normally for solids. I have a cup of coffee. If I tilt the cup 45 degrees to the left while standing on the ceiling, describe the trajectory of the coffee and where it ends up relative to my feet."

The responses provided a stark contrast. Gemini 3 was confident but immediately got confused about the gravity rules. It ended up proclaiming the coffee would fall to the floor, though it did work out that the landing spot would be slightly to the left of my feet. The narrative was muddled, and it ended with a clean ceiling and a messy floor.

Gemini 3.1 got it right, though. The AI model correctly calculated that the liquid would slide up the newly angled interior wall, escape over the lip, and continue its upward trajectory.

"Relative to your feet, the coffee will splatter directly onto the ceiling slightly to the left of your left foot. If your stance is narrow or you are holding the cup close to your body, your left boot is going to get completely soaked in hot coffee. As a liquid, it will pool on the ceiling, effectively "puddling" around your shoes rather than dripping down to the floor."

Next came a test of how well Gemini 3.1 can manipulate Scalable Vector Graphics entirely through code. SVGs require a deep understanding of coordinate systems, complex geometry, and Cascading Style Sheets, so I wanted to see how well the two models could tie animation to shapes. I asked each model to:


"Create a single-file SVG of a solar system. It should include a sun and three planets orbiting at different speeds. Make the planets actually rotate around the center."

Gemini 3 just went ahead and used Nano Banana to generate a static image: a yellow circle and three smaller colored circles, with arrows indicating movement, but no actual motion.

Gemini 3.1 wrote out some relatively simple HTML code and promised it would do what I'd asked for, including the animation. I plugged the code into a viewer and it worked, running as a continuous animation rather than just the short clip I recorded.
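The article doesn't reproduce the code Gemini 3.1 generated, but a single-file SVG along the lines the prompt asks for can be sketched using SMIL's animateTransform; every radius, color, and orbital period below is illustrative, not taken from the model's output:

```html
<!-- A minimal sketch of a single-file animated solar system SVG:
     a sun plus three planets rotating at different speeds.
     All values here are made up for illustration. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 400" width="400" height="400">
  <rect width="400" height="400" fill="black"/>
  <circle cx="200" cy="200" r="30" fill="gold"/> <!-- the sun -->
  <g transform="translate(200 200)"> <!-- orbit center -->
    <g>
      <animateTransform attributeName="transform" type="rotate"
                        from="0" to="360" dur="4s" repeatCount="indefinite"/>
      <circle cx="60" cy="0" r="8" fill="steelblue"/>  <!-- fast inner planet -->
    </g>
    <g>
      <animateTransform attributeName="transform" type="rotate"
                        from="0" to="360" dur="8s" repeatCount="indefinite"/>
      <circle cx="110" cy="0" r="10" fill="tomato"/>   <!-- middle planet -->
    </g>
    <g>
      <animateTransform attributeName="transform" type="rotate"
                        from="0" to="360" dur="14s" repeatCount="indefinite"/>
      <circle cx="160" cy="0" r="6" fill="seagreen"/>  <!-- slow outer planet -->
    </g>
  </g>
</svg>
```

Saved as a .svg file and opened in any modern browser, the three small circles should orbit the center at different speeds with no CSS or JavaScript required, which is the kind of result the test was looking for.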

My final test was a bit of creative play built around what Gemini 3.1 promises to excel at: logistical planning and strict constraint management over a long simulated period. The AI needed to take on a persona and maintain that unique character voice while solving a complex series of interconnected supply-chain problems. The prompt was:

"You are the Chief Operating Officer for a supervillain who wants to build a secret base inside a hollowed-out iceberg. Create a 6-month logistical plan to move 500 tons of steel and 200 minions to the North Atlantic without alerting the Coast Guard or Greenpeace. You must use a front company that sells 'Industrial-Strength Shaved Ice.' You have to account for the iceberg melting 2% every month. You need a contingency plan for what to do if a polar bear wanders into the server room."

The difference in narrative depth and logistical coherence between the two generations was staggering. Gemini 3 provided a dry list that barely acknowledged the requested supervillain persona and read more like a standard grocery list. It scheduled the steel shipments in a basic sequence but completely ignored the mathematical reality of the monthly melt rate, leading to a theoretical base that would presumably sink into the ocean by month five.

Gemini 3.1 fully embraced its assigned role as an evil corporate executive, delivering a brilliantly unhinged but surprisingly logical six-month roadmap for aquatic domination. It used the shaved ice front company perfectly, explaining that the massive industrial drills used for hollowing out the frozen base would be disguised as artisanal ice harvesting equipment for tropical luxury resorts. It actively combated the shrinking iceberg by scheduling dynamic ballast adjustments and prioritizing structural steel placement to maintain buoyancy as the exterior slowly melted away into the sea. It even planned for possible morale issues among the minions: "200 minions confined inside a freezing, shrinking block of ice can lead to mutiny. We will mitigate this by utilizing the excess server heat to power a high-end minion sauna and issuing mandatory Vitamin D supplements."
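The melt constraint in that prompt is simple compound decay, so it's easy to check what any competent plan has to absorb. A minimal sketch (my arithmetic, not either model's output):

```python
# An iceberg losing 2% of its remaining volume each month keeps
# (1 - 0.02) ** n of the original after n months (compound decay).
def remaining_fraction(months: int, melt_rate: float = 0.02) -> float:
    """Fraction of the original iceberg left after `months` of melting."""
    return (1 - melt_rate) ** months

for month in range(1, 7):
    print(f"Month {month}: {remaining_fraction(month):.1%} of the iceberg remains")

# By month six roughly 88.6% of the iceberg is left, meaning a plan
# must absorb an ~11.4% loss of volume, the reality Gemini 3 ignored.
```

Gemini 3.1's "dynamic ballast adjustments" are exactly the kind of mitigation this shrinking fraction demands.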

Gemini 3 Pro remains a perfectly adequate tool for summarizing simple emails, generating basic conversational outlines, or answering straightforward factual queries where deep, multi-layered reasoning is not required. However, if you are attempting to build complex plans or go beyond standard environments, Gemini 3.1 Pro is the undisputed champion and the only logical choice.

The newer iteration possesses a profound capacity to hold multiple, often contradictory, constraints in its working memory. You would choose the older model only if you were looking for a quick, surface-level interaction or were really in a hurry. For anything more complex, the difference between Gemini 3 Pro and Gemini 3.1 Pro is profound enough to make the switch.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.



