LLM Workflow Cost Calculator

Presets

Presets load a starter pipeline. You can modify, add, or remove steps freely.

Pipeline Steps

1. gpt-4o-mini ($0.15/M in, $0.60/M out): $0.000030
2. gpt-4.1-nano ($0.10/M in, $0.40/M out): $0.000480
3. claude-sonnet-4 ($1.50/M in, $15/M out; cached: 50% off input): $0.022

Cost Breakdown

| # | Step | Model | Input | Output | Input Cost | Output Cost | Total |
|---|------|-------|-------|--------|------------|-------------|-------|
| 1 | Embed Query | gpt-4o-mini | 200 | 0 | $0.000030 | $0.00 | $0.000030 |
| 2 | Rerank Chunks | gpt-4.1-nano | 4.0K | 200 | $0.000400 | $0.000080 | $0.000480 |
| 3 | Generate Answer (cached) | claude-sonnet-4 | 5.0K | 1.0K | $0.0075 | $0.015 | $0.022 |

Total per execution: $0.023
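The per-step arithmetic behind this table can be sketched as follows. This is a minimal sketch, not the tool's actual source; the 50% cache discount and claude-sonnet-4's $3/M base input rate are assumptions chosen to be consistent with the figures shown:

```python
def step_cost(in_tokens: int, out_tokens: int,
              in_price_per_m: float, out_price_per_m: float,
              cached: bool = False, cache_discount: float = 0.5) -> float:
    """Cost of one pipeline step in dollars.

    Prices are dollars per million tokens; `cached` applies the
    provider's cached-input discount (assumed 50% here) to input only.
    """
    in_price = in_price_per_m * (1 - cache_discount) if cached else in_price_per_m
    return in_tokens * in_price / 1e6 + out_tokens * out_price_per_m / 1e6

# Reproduce the three rows above:
embed = step_cost(200, 0, 0.15, 0.60)                      # $0.000030
rerank = step_cost(4000, 200, 0.10, 0.40)                  # $0.000480
answer = step_cost(5000, 1000, 3.00, 15.00, cached=True)   # $0.0225, shown rounded as $0.022
```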

Batch Projections

| Period | Cost | Executions |
|---------|---------|------------|
| Daily | $2.30 | 100 |
| Monthly | $69.03 | 3,000 |
| Yearly | $839.86 | 36,500 |
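The projections above follow directly from cost per execution times volume. A sketch, assuming a 30-day month and a 365-day year, which matches the 3,000 and 36,500 execution counts shown:

```python
def batch_projection(cost_per_execution: float, runs_per_day: int) -> dict:
    """Daily/monthly/yearly cost, assuming 30-day months and 365-day years."""
    daily = cost_per_execution * runs_per_day
    return {"daily": daily, "monthly": daily * 30, "yearly": daily * 365}

# Using the unrounded per-execution total of $0.02301 at 100 runs/day:
projection = batch_projection(0.02301, 100)  # daily ≈ $2.30, monthly ≈ $69.03
```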

Provider Comparison

What if every step used the cheapest model from one provider? This table shows the best-case cost per provider.

| Provider | Cost / Execution | Daily (100x) | Monthly | vs Current |
|----------|------------------|--------------|---------|------------|
| OpenAI | $0.0010 | $0.103 | $3.08 | -95.5% |
| Google | $0.0010 | $0.103 | $3.08 | -95.5% |
| DeepSeek | $0.0028 | $0.280 | $8.41 | -87.8% |
| Meta (via providers) | $0.0047 | $0.474 | $14.22 | -79.4% |
| Anthropic | $0.0086 | $0.856 | $25.68 | -62.8% |
| Mistral | $0.021 | $2.06 | $61.80 | -10.5% |
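The comparison can be reproduced by letting every step pick the provider's cheapest model for its token mix. A minimal sketch with an abbreviated, assumed pricing table (real rates change, and the tool's own table includes more models, so its totals differ from this toy example):

```python
# Abbreviated, assumed pricing table: provider -> model -> ($/M input, $/M output).
PRICES = {
    "OpenAI": {"gpt-4o-mini": (0.15, 0.60), "gpt-4.1-nano": (0.10, 0.40)},
    "Anthropic": {"claude-sonnet-4": (3.00, 15.00)},
}

def cheapest_total(provider: str, steps: list) -> float:
    """Best case: each (in_tokens, out_tokens) step uses whichever of the
    provider's models minimizes that step's cost."""
    return sum(
        min(i * ip / 1e6 + o * op / 1e6 for ip, op in PRICES[provider].values())
        for i, o in steps
    )

def vs_current(candidate: float, current: float) -> float:
    """Percentage change versus the current pipeline cost (negative = cheaper)."""
    return (candidate - current) / current * 100
```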

What This Tool Does

LLM Workflow Cost Calculator is built for deterministic developer and agent workflows.

Model multi-step AI pipelines end-to-end and see total cost per execution across providers. Includes caching simulation and monthly projections.

See How to Use for execution steps and the FAQ for constraints, policies, and edge cases.


This tool is provided as-is for convenience. Output should be verified before use in any production or critical context.

Agent Invocation

Best Path For Builders

Browser workflow: the tool runs instantly in the browser with private local processing and copy/export-ready output. Run it at /llm-workflow-cost-calculator/ and copy or export the results directly.

For automation planning, fetch the canonical contract at /api/tool/llm-workflow-cost-calculator.json.
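A minimal sketch of fetching that contract from an automation script. The host below is a placeholder, and the contract's JSON schema is not documented here, so treat the parsed result as opaque until inspected:

```python
import json
import urllib.request

CONTRACT_PATH = "/api/tool/llm-workflow-cost-calculator.json"

def contract_url(base: str) -> str:
    """Join a site base URL with the canonical contract path."""
    return base.rstrip("/") + CONTRACT_PATH

def fetch_contract(base: str) -> dict:
    """Download and parse the tool contract; inspect the schema at runtime."""
    with urllib.request.urlopen(contract_url(base)) as resp:
        return json.load(resp)
```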

How to Use LLM Workflow Cost Calculator

  1. Add pipeline steps

     Click Add Step to build your AI workflow. For each step, set a name (e.g., 'Embed documents'), select a step type and model, and enter the expected input and output token counts.

  2. Configure caching

     Toggle the caching checkbox on steps where prompt caching applies to see the cost reduction. The tool uses provider-specific cached input pricing.

  3. Set batch volume

     Enter how many times this workflow runs per day to see daily, monthly, and yearly cost projections across all steps.

  4. Compare providers

     Review the provider comparison table to see how total workflow cost changes if you switch all steps to a different provider, helping you optimize your model choices.
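The four steps above can be sketched end-to-end. This is a hypothetical model, not the tool's source; the prices and the 50% cache discount are assumptions taken from the example pipeline shown earlier:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    in_tokens: int
    out_tokens: int
    in_price: float    # dollars per million input tokens
    out_price: float   # dollars per million output tokens
    cached: bool = False

def pipeline_cost(steps: list, cache_discount: float = 0.5) -> float:
    """Total cost of one execution across all steps."""
    total = 0.0
    for s in steps:
        ip = s.in_price * (1 - cache_discount) if s.cached else s.in_price
        total += s.in_tokens * ip / 1e6 + s.out_tokens * s.out_price / 1e6
    return total

# Steps 1-2 of the guide: add steps and flag caching where it applies.
pipeline = [
    Step("Embed Query", 200, 0, 0.15, 0.60),
    Step("Rerank Chunks", 4000, 200, 0.10, 0.40),
    Step("Generate Answer", 5000, 1000, 3.00, 15.00, cached=True),
]

# Step 3 of the guide: multiply by batch volume.
per_execution = pipeline_cost(pipeline)   # ≈ $0.02301
daily = per_execution * 100               # ≈ $2.30 at 100 runs/day
```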

Frequently Asked Questions

What is LLM Workflow Cost Calculator?
LLM Workflow Cost Calculator models multi-step AI pipelines — embedding, retrieval, generation, validation — and calculates total cost per execution across providers, with caching simulation and monthly projections.
How is this different from a token counter?
Token counters measure single texts. This tool models entire workflows with multiple steps, each potentially using different models, and shows how caching, batching, and step ordering affect total cost.
Is LLM Workflow Cost Calculator free?
Yes. Completely free with no account or sign-up required.
Does it send my data to a server?
No. All calculations happen in your browser using static pricing data. Nothing is sent to any server.
How current is the pricing data?
Pricing data is updated regularly to reflect current rates from OpenAI, Anthropic, Google, DeepSeek, Mistral, and Meta.