
Alibaba's new open source Qwen 3.5-Medium models offer Sonnet 4.5 performance on local computers | VentureBeat


Credit: VentureBeat made with Google Gemini 3 Pro Image

Alibaba's now famed Qwen AI development team has done it again: a little more than a day ago, it released the Qwen 3.5-Medium model series, four new large language models (LLMs) with support for agentic tool calling, three of which are available for commercial usage by enterprises and indie developers under the standard open source Apache 2.0 license.

Developers can download them now on Hugging Face and ModelScope. A fourth model, Qwen 3.5-Flash, appears to be proprietary and is only available through the Alibaba Cloud Model Studio API, but it still offers a strong cost advantage over comparable Western models (see the pricing comparison table below).

But the big twist with the open source models is that on third-party benchmark tests they perform comparably to similarly sized proprietary models from major U.S. startups like OpenAI and Anthropic, actually beating OpenAI's GPT-5-mini and Anthropic's Claude Sonnet 4.5, the latter released just five months ago.

And the Qwen team says it has engineered these models to remain highly accurate even when "quantized," a process that shrinks their memory footprint by storing the model's weights at lower numerical precision.

Crucially, this release brings "frontier-level" context windows to the desktop PC. The flagship Qwen 3.5-35B-A3B can now exceed a 1 million token context length on consumer-grade GPUs with 32GB of VRAM. While not something everyone has access to, this is far less compute than many other comparably-performant options.

This leap is made possible by near-lossless accuracy under 4-bit weight and KV cache quantization, allowing developers to process massive datasets without server-grade infrastructure.
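To make that memory claim concrete, here is a back-of-the-envelope sketch of why 4-bit quantization puts a 35-billion-parameter model within reach of a 32GB GPU. The KV-cache configuration values below (full-attention layer count, KV heads, head dimension) are hypothetical illustrations, not Qwen's published specs; the hybrid Gated Delta design keeps full attention in only a subset of layers, which is what shrinks the cache.

```python
def model_memory_gb(params_billions, bits):
    """Memory needed to store the weights at a given quantization width."""
    return params_billions * 1e9 * bits / 8 / 1e9

def kv_cache_gb(full_attn_layers, kv_heads, head_dim, seq_len, bits):
    """KV cache size: K and V each store kv_heads * head_dim values
    per token, per full-attention layer."""
    return 2 * full_attn_layers * kv_heads * head_dim * seq_len * bits / 8 / 1e9

weights = model_memory_gb(35, 4)               # 35B params at 4-bit -> 17.5 GB
# Hypothetical config: 12 full-attention layers, 4 KV heads, head_dim 128
cache = kv_cache_gb(12, 4, 128, 1_000_000, 4)  # ~6.1 GB for a 1M-token context
print(f"{weights:.1f} GB weights + {cache:.3f} GB KV cache")
```

Even with generous headroom for activations, the two totals leave room inside a 32GB VRAM budget, which is the arithmetic behind the "1M tokens on a consumer GPU" claim.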

At the heart of Qwen 3.5's performance is a sophisticated hybrid architecture. While many models rely solely on standard Transformer blocks, Qwen 3.5 integrates Gated Delta Networks combined with a sparse Mixture-of-Experts (MoE) system. The technical specifications for Qwen 3.5-35B-A3B reveal a highly efficient design:

Parameter Efficiency: While the model houses 35 billion parameters in total, it only activates 3 billion for any given token.

Expert Diversity: The MoE layer utilizes 256 experts, with 8 routed experts and 1 shared expert helping to maintain performance while slashing inference latency.

Near-Lossless Quantization: The series maintains high accuracy even when compressed to 4-bit weights, significantly reducing the memory footprint for local deployment.

Base Model Release: In a move to support the research community, Alibaba has open-sourced the Qwen 3.5-35B-A3B-Base model alongside the instruct-tuned versions.

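The routing pattern described above (8 routed experts plus 1 always-on shared expert selected from 256) can be sketched in plain Python. The expert functions and gate logits here are toy stand-ins, not the real model's learned networks:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(gate_logits, top_k=8):
    """Select the top_k experts by gate logit and renormalize their weights."""
    idx = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:top_k]
    weights = softmax([gate_logits[i] for i in idx])
    return list(zip(idx, weights))

def moe_forward(x, experts, shared_expert, gate_logits, top_k=8):
    """Weighted sum of the routed experts' outputs, plus the shared expert."""
    routed = sum(w * experts[i](x) for i, w in route(gate_logits, top_k))
    return routed + shared_expert(x)
```

The efficiency win is visible in the structure: per token, only `top_k + 1` of the 256 expert functions are ever evaluated, which is how a 35B-parameter model can run with roughly 3B active parameters.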
Qwen 3.5 introduces a native "Thinking Mode" as its default state. Before providing a final answer, the model generates an internal reasoning chain, delimited by special tags, to work through complex logic. The product lineup is tailored for varying hardware environments:

Qwen 3.5-27B: Optimized for high efficiency, supporting a context length of over 800K tokens.

Qwen 3.5-Flash: The production-grade hosted version, featuring a default 1 million token context length and built-in official tools.

Qwen 3.5-122B-A10B: Designed for server-grade GPUs (80GB VRAM), this model supports 1M+ context lengths while narrowing the gap with the world's largest frontier models.

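Because thinking mode emits the reasoning chain inline before the final answer, client code typically strips it out before showing a response. A minimal sketch, assuming `<think>…</think>` delimiters (common in recent Qwen releases, but an assumption here; the actual delimiters are whatever the model's chat template defines):

```python
import re

def split_thinking(text, open_tag="<think>", close_tag="</think>"):
    """Split a model response into (reasoning_chain, final_answer)."""
    pattern = re.escape(open_tag) + r"(.*?)" + re.escape(close_tag)
    m = re.search(pattern, text, re.DOTALL)
    if not m:
        # No reasoning block present; the whole text is the answer.
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = (text[:m.start()] + text[m.end():]).strip()
    return reasoning, answer
```

Keeping the reasoning chain separate is also useful for logging and evaluation, since the chain can be audited without leaking it into downstream prompts.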
Benchmark results validate this architectural shift. The 35B-A3B model notably surpasses much larger predecessors, such as Qwen 3-235B, as well as the aforementioned proprietary GPT-5 mini and Sonnet 4.5 in categories including knowledge (MMMLU) and visual reasoning (MMMU-Pro).

Alibaba Qwen 3.5 Medium models benchmark comparison chart. Credit: Alibaba

For those not hosting their own weights, Alibaba Cloud Model Studio provides a competitive API for Qwen 3.5-Flash.

The API also features a granular Tool Calling pricing model, with Web Search at $10 per 1,000 calls and Code Interpreter currently offered for a limited time at no cost.

This makes Qwen 3.5-Flash among the most affordable of the world's major LLMs to run via API; see the comparison table below.
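Using the tool-calling prices quoted above (Web Search at $10 per 1,000 calls, Code Interpreter temporarily free), a quick monthly cost estimate might look like this; the function name and parameters are illustrative, not part of any official SDK:

```python
def tool_cost_usd(web_search_calls=0, code_interpreter_calls=0,
                  web_search_per_1k=10.0, code_interpreter_per_1k=0.0):
    """Estimate tool-calling spend from per-1,000-call prices."""
    return (web_search_calls / 1000 * web_search_per_1k
            + code_interpreter_calls / 1000 * code_interpreter_per_1k)

# e.g. an agent making 2,500 web searches in a month
print(tool_cost_usd(web_search_calls=2500))  # 25.0
```

Note this covers only the per-call tool surcharge; token-based inference pricing is billed separately.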

What it means for enterprise technical leaders and decision-makers

With the launch of the Qwen 3.5 Medium Models, the rapid iteration and fine-tuning once reserved for well-funded labs is now accessible for on-premise development at many non-technical firms, effectively decoupling sophisticated AI from massive capital expenditure.

Across the organization, this architecture transforms how data is handled and secured. The ability to ingest massive document repositories or hour-scale videos locally allows for deep institutional analysis without the privacy risks of third-party APIs.

By running these specialized "Mixture-of-Experts" models within a private firewall, organizations can maintain sovereign control over their data while utilizing native "thinking" modes and official tool-calling capabilities to build more reliable, autonomous agents.

Early adopters on Hugging Face have specifically lauded the model’s ability to "narrow the gap" in agentic scenarios where previously only the largest closed models could compete.

This shift toward architectural efficiency over raw scale ensures that AI integration remains cost-conscious, secure, and agile enough to keep pace with evolving operational needs.


Key Takeaways

  • Alibaba released the open source Qwen 3.5-Medium model series, offering Claude Sonnet 4.5-level performance on local computers.

  • The flagship Qwen 3.5-35B-A3B activates only 3 billion of its 35 billion parameters per token via a sparse Mixture-of-Experts design.

  • Near-lossless 4-bit quantization enables context windows exceeding 1 million tokens on consumer GPUs with 32GB of VRAM.

  • Developers can download the models now on Hugging Face and ModelScope.
