AI Model Comparison Table

| Model | Provider | Released | Context | Max Output | Input $/1M | Output $/1M | MMLU | HumanEval |
|---|---|---|---|---|---|---|---|---|
| GPT-5.3-Codex | OpenAI | 2026-02 | 400K | 128K | $1.75 | $14.00 | | |
| Claude Opus 4.6 | Anthropic | 2026-02 | 200K | 128K | $5.00 | $25.00 | | |
| GPT-5.2 | OpenAI | 2025-12 | 400K | 128K | $1.75 | $14.00 | | |
| Mistral Large 3 | Mistral | 2025-12 | 262K | 33K | $0.50 | $1.50 | | |
| Claude Haiku 4.5 | Anthropic | 2025-10 | 200K | 64K | $1.00 | $5.00 | | 88.1 |
| Claude Sonnet 4.5 | Anthropic | 2025-09 | 200K | 64K | $3.00 | $15.00 | | 93.0 |
| Gemini 2.5 Flash | Google | 2025-09 | 1.0M | 66K | $0.30 | $2.50 | | |
| Grok 4 | xAI | 2025-09 | 262K | 33K | $3.00 | $15.00 | | |
| Grok 4 Fast | xAI | 2025-09 | 2.0M | 33K | $0.20 | $0.50 | | |
| o4-mini | OpenAI | 2025-07 | 200K | 100K | $1.10 | $4.40 | | |
| o3 | OpenAI | 2025-06 | 200K | 100K | $2.00 | $8.00 | | |
| GPT-4.1 | OpenAI | 2025-04 | 1.0M | 33K | $2.00 | $8.00 | | |
| GPT-4.1-mini | OpenAI | 2025-04 | 1.0M | 33K | $0.40 | $1.60 | | |
| GPT-4.1-nano | OpenAI | 2025-04 | 1.0M | 33K | $0.10 | $0.40 | | |
| Llama 4 Maverick (Groq) | Meta/Groq | 2025-04 | 524K | 8K | $0.50 | $0.77 | | |
| Llama 4 Scout (Groq) | Meta/Groq | 2025-04 | 10.0M | 8K | $0.11 | $0.34 | | |
| Gemini 2.5 Pro | Google | 2025-03 | 1.0M | 66K | $1.25 | $10.00 | | |
| Mistral Small 3.1 | Mistral | 2025-03 | 128K | 8K | $0.03 | $0.11 | | |
| Gemini 2.0 Flash | Google | 2025-02 | 1.0M | 8K | $0.10 | $0.40 | | |
| o3-mini | OpenAI | 2025-01 | 200K | 100K | $1.10 | $4.40 | | |
| DeepSeek R1 | DeepSeek | 2025-01 | 128K | 33K | $0.55 | $2.19 | 90.8 | |
| DeepSeek V3 | DeepSeek | 2024-12 | 128K | 8K | $0.27 | $1.10 | 88.5 | |

Context and Max Output are given in tokens; prices are USD per 1M tokens. Blank MMLU/HumanEval cells were not populated in the source.

Showing 22 of 26 models (4 legacy hidden). Last updated: February 2026.
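To turn the $/1M pricing columns into a per-request cost, multiply each token count by the matching rate and divide by one million. A minimal sketch (the function name and the 2,000-in/500-out workload are illustrative assumptions; the $3.00/$15.00 rates are the Claude Sonnet 4.5 row above):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """USD cost of one request, given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: 2,000-token prompt, 500-token completion at $3.00 in / $15.00 out.
cost = request_cost(2_000, 500, 3.00, 15.00)
print(f"${cost:.4f}")  # $0.0135
```

Because output tokens are typically several times more expensive than input tokens, completion length usually dominates the bill for generation-heavy workloads.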

This tool is provided as-is for convenience. Output should be verified before use in any production or critical context.