
The top 30 open-weight models ranked for self-hosting. Updated live from Artificial Analysis data. Last updated: April 2026
Want the full deep dive? See the dedicated AI models to run locally guide.
Top overall
Kimi K2.6
QI 53.9
Best value
DeepSeek V4 Flash
$0.01/M
Fastest
Llama 3.1 Instruct 8B
2435 tok/s
Largest context
Grok 4.20 0309 v2 (Reasoning)
2.0M
| # | Model | Provider | Quality | Price/M | Speed | Context |
|---|---|---|---|---|---|---|
| 1 | Kimi K2.6 | Kimi | 53.9 | $1.1 | 215 tok/s | 262K |
| 2 | MiMo-V2.5-Pro | Xiaomi | 53.83 | $1.2 | 64 tok/s | 1.1M |
| 3 | Qwen3.6 Max Preview | Alibaba | 51.81 | $2.9 | 38 tok/s | 256K |
| 4 | DeepSeek V4 Pro | DeepSeek | 51.51 | $1.4 | 168 tok/s | 1.0M |
| 5 | GLM-5.1 (Reasoning) | Z AI | 51.41 | $1.7 | 182 tok/s | 205K |
| 6 | Qwen3.6 Plus | Alibaba | 49.98 | $1.1 | 53 tok/s | 1.0M |
| 7 | GLM-5 (Reasoning) | Z AI | 49.77 | $1.2 | 236 tok/s | 203K |
| 8 | MiniMax-M2.7 | MiniMax | 49.62 | $0.52 | 454 tok/s | 205K |
| 9 | MiMo-V2-Pro | Xiaomi | 49.2 | $1.5 | 63 tok/s | 131K |
| 10 | MiMo-V2.5 | Xiaomi | 49.03 | $0.64 | 99 tok/s | 1.1M |
| 11 | Kimi K2.5 (Reasoning) | Kimi | 46.81 | $0.90 | 388 tok/s | 262K |
| 12 | DeepSeek V4 Flash | DeepSeek | 46.52 | $0.01 | 84 tok/s | 1.0M |
| 13 | Qwen3.6 27B (Reasoning) | Alibaba | 45.82 | $1.0 | 64 tok/s | 262K |
| 14 | Qwen3.5 397B A17B (Reasoning) | Alibaba | 45.05 | $1.3 | 293 tok/s | 262K |
| 15 | MiMo-V2-Omni-0327 | Xiaomi | 44.93 | $0.80 | 109 tok/s | 131K |
| 16 | GLM-5.1 | Z AI | 43.82 | $2.1 | 149 tok/s | 205K |
| 17 | Qwen3.6 35B A3B (Reasoning) | Alibaba | 43.49 | $0.40 | 422 tok/s | 262K |
| 18 | MiMo-V2-Omni | Xiaomi | 43.4 | Free | 107 tok/s | 256K |
| 19 | GLM-4.7 (Reasoning) | Z AI | 42.11 | $0.74 | 1208 tok/s | 205K |
| 20 | Qwen3.5 27B (Reasoning) | Alibaba | 42.07 | $0.82 | 93 tok/s | 262K |
| 21 | MiniMax-M2.5 | MiniMax | 41.93 | $0.40 | 435 tok/s | 205K |
| 22 | Hy3-preview (Reasoning) | Tencent | 41.85 | Free | 119 tok/s | 256K |
| 23 | DeepSeek V3.2 (Reasoning) | DeepSeek | 41.71 | $0.30 | 248 tok/s | 164K |
| 24 | Qwen3.5 122B A10B (Reasoning) | Alibaba | 41.6 | $0.94 | 162 tok/s | 262K |
| 25 | MiMo-V2-Flash | Xiaomi | 41.46 | $0.15 | 138 tok/s | 256K |
| 26 | Kimi K2 Thinking | Kimi | 40.89 | $1.1 | 235 tok/s | 262K |
| 27 | GLM-5 | Z AI | 40.57 | $0.97 | 235 tok/s | 203K |
| 28 | Qwen3.5 397B A17B | Alibaba | 40.1 | $1.3 | 278 tok/s | 262K |
| 29 | Qwen3 Max Thinking | Alibaba | 39.85 | $2.4 | 47 tok/s | 262K |
| 30 | MiniMax-M2.1 | MiniMax | 39.42 | $0.52 | 87 tok/s | 205K |
Showing top 30 of 207 ranked models
View all in Explore →
Each guide goes deeper than the quick filters, with methodology, benchmarks, and picks per scenario.
Model rankings
Browse the latest ranking pages for overall models, coding, open source, Ollama, long context, and agentic workflows.
Current coding leaderboard using LiveCodeBench, Terminal-Bench, and SciCode.
Top open-weight models for self-hosting, Ollama, and low-cost API use.
Best local AI models by hardware tier for self-hosting on Macs, RTX GPUs, and workstations.
Ollama-first picks for coding, chat, reasoning, and low-friction local inference.
Best long-context models for large documents, codebases, and retrieval-heavy workflows.
Rankings for tool use, multi-step execution, and autonomous agent workflows.
Every model is scored using the Artificial Analysis Intelligence Index: a composite of GPQA Diamond, AIME 2025, LiveCodeBench, MMLU-Pro, and other benchmarks, weighted into a single 0-100 quality score. Speed, price, and context window are tracked live across providers.
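A composite index like this is just a weighted average of normalized benchmark scores. The sketch below illustrates the idea; the benchmark names come from this page, but the scores and equal weights are made-up placeholders, not the actual Intelligence Index weighting.

```python
# Illustrative composite quality index. Benchmark scores are assumed to
# already be normalized to 0-100; both scores and weights here are
# placeholders, not Artificial Analysis's real data or methodology.
benchmarks = {
    "GPQA Diamond": 48.0,
    "AIME 2025": 62.0,
    "LiveCodeBench": 55.0,
    "MMLU-Pro": 71.0,
}
# Equal weighting for illustration; weights must sum to 1.
weights = {name: 1 / len(benchmarks) for name in benchmarks}

index = sum(weights[name] * score for name, score in benchmarks.items())
print(round(index, 2))  # a single 0-100 quality score
```

Because the inputs are all on a 0-100 scale and the weights sum to 1, the composite stays on a 0-100 scale too.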
The overall ranking is a starting point. For production decisions, narrow by use case using the category pages above, then compare finalists head-to-head on Compare.
Kimi K2.6 leads on overall quality right now, but the best model depends on your priorities. Coding, cost, speed, and context length all shift the answer. Use the category rankings above to find the right fit.
DeepSeek V4 Flash currently offers one of the best quality-to-cost ratios. Open-source models on providers like Groq or Together can be even cheaper at strong quality levels.
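The quality-to-cost comparison above is simple arithmetic on the table: quality score divided by price per million tokens. A minimal sketch, using three rows from the ranking table:

```python
# Rank models by quality points per dollar (price is $/M tokens).
# Figures are taken from the ranking table on this page.
models = [
    ("Kimi K2.6", 53.9, 1.1),
    ("MiniMax-M2.7", 49.62, 0.52),
    ("DeepSeek V4 Flash", 46.52, 0.01),
]

ranked = sorted(models, key=lambda m: m[1] / m[2], reverse=True)
for name, quality, price in ranked:
    print(f"{name}: {quality / price:.0f} quality points per dollar")
```

At $0.01/M, DeepSeek V4 Flash's ratio dwarfs the others even though its raw quality score is lower, which is why it tops the value pick.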
Start with the overall quality index, then narrow by what matters for your workload: cost per million tokens, output speed, context window, or a specific capability like coding or tool use. Use our Compare tool to put finalists head to head.
Llama 3.1 Instruct 8B leads on output speed right now at 2435 tokens/second. Speed matters most for real-time applications and agentic workflows with many sequential steps.
Grok 4.20 0309 v2 (Reasoning) has the biggest context window in this ranking at 2.0M tokens. For a dedicated long-context comparison, see our largest context window page.
Data is pulled from Artificial Analysis and refreshed automatically. New models appear as soon as they have benchmark scores and provider endpoints. The ranking reflects the live state of the leaderboard.