LLM Token Counter
Estimated Token Counts
Estimates based on average characters per token. Actual counts vary with content.
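The character-based estimate described above can be sketched as follows; the chars-per-token ratios here are illustrative assumptions (roughly four characters per token is a common rule of thumb for English text), not this tool's exact calibration:

```python
# Rough token estimate from character count. The ratios below are
# illustrative assumptions, not this tool's exact per-model values.
CHARS_PER_TOKEN = {
    "english_prose": 4.0,  # common rule of thumb for English text
    "code": 3.0,           # code tends to use fewer characters per token
}

def estimate_tokens(text: str, kind: str = "english_prose") -> int:
    ratio = CHARS_PER_TOKEN[kind]
    # Any non-empty text is at least one token.
    return max(1, round(len(text) / ratio)) if text else 0

print(estimate_tokens("Hello world"))  # 11 chars / 4.0 per token
```

Real tokenizers (BPE variants) depend on the exact vocabulary, so per-model counts will differ from this approximation.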
Estimated Cost (for this text as input)
| Model | Est. Tokens | Input Cost | Output Cost |
|---|---|---|---|
| GPT-5.4 | ~0 | $0.0000 | $0.0000 |
| GPT-5.4 Pro | ~0 | $0.0000 | $0.0000 |
| GPT-5.3-Codex | ~0 | $0.0000 | $0.0000 |
| GPT-5.2 | ~0 | $0.0000 | $0.0000 |
| GPT-5.1 | ~0 | $0.0000 | $0.0000 |
| GPT-5-mini | ~0 | $0.0000 | $0.0000 |
| GPT-5-nano | ~0 | $0.0000 | $0.0000 |
| o3 | ~0 | $0.0000 | $0.0000 |
| GPT-4.1 | ~0 | $0.0000 | $0.0000 |
| GPT-4.1-mini | ~0 | $0.0000 | $0.0000 |
| GPT-4.1-nano | ~0 | $0.0000 | $0.0000 |
| GPT-5 | ~0 | $0.0000 | $0.0000 |
| o3-pro | ~0 | $0.0000 | $0.0000 |
| o4-mini | ~0 | $0.0000 | $0.0000 |
| o3-mini | ~0 | $0.0000 | $0.0000 |
| Claude Opus 4.6 | ~0 | $0.0000 | $0.0000 |
| Claude Sonnet 4.6 | ~0 | $0.0000 | $0.0000 |
| Claude Sonnet 4.5 | ~0 | $0.0000 | $0.0000 |
| Claude Haiku 4.5 | ~0 | $0.0000 | $0.0000 |
| Claude Opus 4.5 | ~0 | $0.0000 | $0.0000 |
| Claude Sonnet 4 | ~0 | $0.0000 | $0.0000 |
| Gemini 3.1 Pro | ~0 | $0.0000 | $0.0000 |
| Gemini 3 Flash | ~0 | $0.0000 | $0.0000 |
| Gemini 3.1 Flash-Lite | ~0 | $0.0000 | $0.0000 |
| Gemini 2.5 Pro | ~0 | $0.0000 | $0.0000 |
| Gemini 2.5 Flash | ~0 | $0.0000 | $0.0000 |
| Gemini 2.0 Flash | ~0 | $0.0000 | $0.0000 |
| DeepSeek Chat (V3.2) | ~0 | $0.0000 | $0.0000 |
| DeepSeek Reasoner (V3.2) | ~0 | $0.0000 | $0.0000 |
| Mistral Large 3 | ~0 | $0.0000 | $0.0000 |
| Mistral Medium 3.1 | ~0 | $0.0000 | $0.0000 |
| Mistral Small 3.2 | ~0 | $0.0000 | $0.0000 |
| Llama 4 Maverick (Groq) | ~0 | $0.0000 | $0.0000 |
| Llama 4 Scout (Groq) | ~0 | $0.0000 | $0.0000 |
| Grok 4 | ~0 | $0.0000 | $0.0000 |
| Grok 4.1 Fast | ~0 | $0.0000 | $0.0000 |
| Grok 4 Fast | ~0 | $0.0000 | $0.0000 |
Prices are per 1M tokens. Input cost: cost if this text is sent as input. Output cost: cost if this text were generated as output. Data as of April 12, 2026.
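The cost columns follow directly from per-million-token pricing: cost = tokens / 1,000,000 × price. A minimal sketch (the prices below are placeholders for illustration, not current list prices; check each provider's pricing page before budgeting):

```python
def token_cost(tokens: int, price_per_million: float) -> float:
    """Cost in dollars for `tokens` at a given per-1M-token price."""
    return tokens / 1_000_000 * price_per_million

# Assumed placeholder prices in $ per 1M tokens, for illustration only.
input_price, output_price = 2.50, 10.00

# 4 tokens at $2.50/1M works out to $0.00001, matching the shape of
# the example API response later on this page.
print(token_cost(4, input_price))
print(token_cost(4, output_price))
```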
What This Tool Does
LLM Token Counter is built for deterministic developer and agent workflows.
Estimate token counts for GPT, Claude, Gemini, Llama, and DeepSeek. Compare costs across AI models instantly.
See "How to Use" for execution steps and the FAQ for constraints, policies, and edge cases.
This tool is provided as-is for convenience. Output should be verified before use in any production or critical context.
Agent Invocation
Best Path For Builders
The dedicated API endpoint offers deterministic outputs, machine-safe contracts, and production-ready examples.
Dedicated API
https://aidevhub.io/api/token-counter/
OpenAPI: https://aidevhub.io/api/openapi.yaml
Unified Runtime API
https://aidevhub.io/api/tools/run/?toolId=token-counter&a=...
GET and POST are supported at /api/tools/run/ with identical validation and limits.
Limit: req/s; input max 512 KB.
REST API
Base URL
https://aidevhub.io/api/token-counter/
50 requests/day per IP. No authentication required. CORS enabled. OpenAPI spec: https://aidevhub.io/api/openapi.yaml
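The endpoint can be called from any HTTP client. A minimal Python sketch that builds the request URL and parses a response shaped like the example response shown below (field names are taken from that example; the sample JSON here is a local stand-in, not a live reply):

```python
import json
import urllib.parse

BASE = "https://aidevhub.io/api/token-counter/"

def build_url(text: str) -> str:
    # URL-encode the `text` query parameter.
    return BASE + "?" + urllib.parse.urlencode({"text": text})

url = build_url("Hello world")
# To fetch for real: json.load(urllib.request.urlopen(url))

# Parse a response shaped like the documented example:
sample = """{"text_length": 11,
             "tokens": {"gpt-4o": {"model": "gpt-4o", "tokens": 4,
                        "method": "estimate",
                        "cost": {"input": 0.00001, "output": 0.00004}}},
             "model_count": 32}"""
data = json.loads(sample)
for model, info in data["tokens"].items():
    print(model, info["tokens"], info["cost"]["input"])
```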
Endpoints
GET /?text=... returns per-model token estimates and costs for the supplied text.
Example
curl "https://aidevhub.io/api/token-counter/?text=Hello+world"
Example Response
{
  "text_length": 11,
  "tokens": {
    "gpt-4o": {
      "model": "gpt-4o",
      "tokens": 4,
      "method": "estimate",
      "cost": {
        "input": 0.00001,
        "output": 0.00004
      }
    }
  },
  "model_count": 32
}

How to Use LLM Token Counter
1. Paste text to count
Paste any content: prompt text, an article, a code snippet, or a conversation. The tool estimates tokens using each model's average characters per token.
2. Select your target model
Choose from GPT, Claude, Gemini, Llama, and others. Token counts vary by model because each uses a different tokenizer.
3. See the token estimate and cost breakdown
Get the total token count, the approximate cost at current pricing, and the price per 1M tokens. Useful for budgeting API calls.
4. Count multiple texts in batch
Paste multiple prompts or messages separated by '---'. Get individual counts and a total across all inputs.
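The batch format from step 4 can be reproduced client-side. A minimal sketch that splits on the '---' separator and totals a simple character-based estimate (the four-characters-per-token ratio is an assumption, not this tool's exact calibration):

```python
def batch_counts(blob: str, chars_per_token: float = 4.0):
    """Split input on '---' separators and estimate tokens per chunk."""
    chunks = [c.strip() for c in blob.split("---") if c.strip()]
    counts = [max(1, round(len(c) / chars_per_token)) for c in chunks]
    return counts, sum(counts)

text = "Summarize this article.\n---\nTranslate to French."
per_chunk, total = batch_counts(text)
print(per_chunk, total)
```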
5. Test text-reduction strategies
Use the counter to compare token usage before and after shortening. Remove redundant words or summarize to cut API costs.
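Step 5 amounts to estimating both versions and comparing. A minimal sketch using the same character-based approximation (the ratio and the example strings are illustrative assumptions):

```python
def estimate(text: str, chars_per_token: float = 4.0) -> int:
    # Rough character-based token estimate (assumed ratio).
    return max(1, round(len(text) / chars_per_token)) if text else 0

before = "Please kindly make sure that you always respond in valid JSON format."
after = "Respond in valid JSON."

saved = estimate(before) - estimate(after)
pct = 100 * saved / estimate(before)
print(f"{estimate(before)} -> {estimate(after)} tokens (~{pct:.0f}% saved)")
```

The same comparison against the live counter (or a real tokenizer) gives model-accurate savings rather than an approximation.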