Recommendation tool
A short wizard that recommends models based on your use case and constraints, drawing on WhatLLM's pricing, speed, latency, and quality data.
Use the selector when you know the job to be done but not the right model: it turns goals like coding, research, chat, long-context analysis, or low-cost inference into a shortlist you can validate on Compare.
This is the fastest route if you are asking "what is the best model for coding", "which LLM should I use for research", or "which AI model gives the best value for my budget".
The selector narrows the field; from there, use the evergreen ranking pages or open the finalists in Compare to inspect the tradeoffs in more detail.
Model rankings
Browse the latest ranking pages for overall models, coding, open source, local models, Ollama, long context, and agentic workflows.
Live ranking of the best overall AI models by quality, price, speed, and context window.
Current coding leaderboard using LiveCodeBench, Terminal-Bench, and SciCode.
Top open-weight models for self-hosting, Ollama, and low-cost API use.
Best local AI models by hardware tier for self-hosting on Macs, RTX GPUs, and workstations.
Ollama-first picks for coding, chat, reasoning, and low-friction local inference.
Best long-context models for large documents, codebases, and retrieval-heavy workflows.
Rankings for tool use, multi-step execution, and autonomous agent workflows.