MesmerTools

LLM Token Price Calculator

Estimate API costs across 300+ models. Paste text to count tokens or enter counts directly, pick your model, and see the price instantly.


Prices from OpenRouter, cached for 24h. Token counting uses cl100k_base encoding (GPT-4 tokenizer).

Compare Models

Add models to see a side-by-side cost comparison for your current token counts (defaults: 1,000 input / 500 output).


How it works

Three steps to estimate the cost of any LLM API call. No sign-up, no API key needed.

Step 1

Pick a model

Choose from 300+ models across OpenAI, Anthropic, Google, Meta, Mistral, and more. Search by name or browse by provider. Pricing updates every 24 hours from OpenRouter.

Step 2

Enter your tokens

Paste text and we count tokens server-side with the cl100k_base tokenizer, or type in token counts directly. Set both input (prompt) and output (completion) tokens.

Step 3

See the cost

Get an instant cost estimate with input/output breakdown. For non-OpenAI models, see an adjusted range that accounts for tokenizer differences. Compare multiple models side by side.
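The estimate itself is simple arithmetic: API prices are typically quoted per million tokens, so the calculation reduces to a weighted sum. A sketch with illustrative prices (not any model's actual rates):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in dollars, with prices quoted per 1M tokens."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Hypothetical prices: $2.50/M input, $10.00/M output.
cost = estimate_cost(1_000, 500, 2.50, 10.00)
print(f"${cost:.4f}")  # $0.0075
```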

Why use this calculator

LLM pricing is fragmented across providers and changes frequently. We pull live data so you don't have to compare docs manually.

300+ models, one place

Every model on OpenRouter in a single searchable dropdown. OpenAI, Anthropic, Google, Meta, Mistral, Cohere, and dozens more — all with current pricing.

Real token counting

Paste your actual prompt and get a precise token count using OpenAI's cl100k_base tokenizer (WASM). Not a rough word-based estimate — real BPE encoding.

Accuracy-aware ranges

Different providers use different tokenizers. We automatically flag non-OpenAI models and show adjusted cost ranges — +20% for Anthropic, ±10% for others.
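The heuristic above can be sketched as follows; the percentages come from the text, but the function name and provider strings are illustrative:

```python
def adjusted_cost_range(base_cost: float, provider: str) -> tuple[float, float]:
    """Widen a cl100k_base-derived estimate for non-OpenAI tokenizers."""
    if provider == "openai":
        return (base_cost, base_cost)            # exact tokenizer match
    if provider == "anthropic":
        return (base_cost, base_cost * 1.20)     # counts tend to run up to +20%
    return (base_cost * 0.90, base_cost * 1.10)  # ±10% for other providers

low, high = adjusted_cost_range(0.0075, "anthropic")
print(f"${low:.4f} to ${high:.4f}")
```

Showing a range rather than a point estimate avoids implying more precision than a mismatched tokenizer can support.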

Side-by-side comparison

Add multiple models to the comparison chart and see exactly how costs stack up for your specific token counts. Find the cheapest model for your workload.
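Under the hood, a comparison like this is just the same cost estimate computed per model and sorted; a minimal sketch with made-up per-million prices (model names and rates are hypothetical):

```python
# Hypothetical (model, $/M input, $/M output) entries -- not live prices.
models = [
    ("model-a", 2.50, 10.00),
    ("model-b", 3.00, 15.00),
    ("model-c", 0.15, 0.60),
]

def cost(in_tok, out_tok, in_price, out_price):
    return (in_tok * in_price + out_tok * out_price) / 1_000_000

# Rank models by total cost for 1,000 input / 500 output tokens.
ranked = sorted(models, key=lambda m: cost(1_000, 500, m[1], m[2]))
for name, in_p, out_p in ranked:
    print(f"{name}: ${cost(1_000, 500, in_p, out_p):.6f}")
```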

Common use cases

Whether you're budgeting a new project or optimizing an existing pipeline, knowing exact costs per call is essential.

API cost budgeting

Building an app that makes thousands of API calls per day? Calculate your expected monthly bill before writing a single line of code. Test different models to find the best price-to-quality ratio.
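As a worked example of scaling a per-call estimate to a monthly bill (the call volume and per-call cost here are made up):

```python
calls_per_day = 5_000
cost_per_call = 0.0075  # dollars, taken from a per-request estimate
monthly_bill = calls_per_day * 30 * cost_per_call
print(f"${monthly_bill:,.2f} per month")  # $1,125.00 per month
```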

Used by backend engineers, startup founders, and product managers planning AI features.

Prompt optimization

Paste your prompt, see the token count, then trim unnecessary context to reduce costs. Compare the price impact of system prompts, few-shot examples, and retrieval-augmented context windows.

Used by prompt engineers, ML teams, and developers fine-tuning their LLM integrations.

Model evaluation

Considering switching from GPT-4o to Claude or Gemini? Use the comparison chart to see exactly how much you'd save (or spend) per request. Factor in thinking-model overhead for reasoning tasks.

Used by CTOs, engineering leads, and teams evaluating model migrations.

Perfect for

Anyone who works with LLM APIs and wants to understand costs before they hit the invoice.

Developers

Building AI features and need to estimate per-request costs. Paste your actual prompts to get exact token counts.

Backend engineers, full-stack devs, API integrators.

Founders & PMs

Planning your AI budget or pitching to investors. Get concrete unit economics for your AI-powered features.

Startup founders, product managers, CTOs.

ML Engineers

Evaluating models for production use. Compare pricing across providers and factor in thinking-model overhead.

AI/ML teams, research engineers, data scientists.

Finance Teams

Tracking and forecasting AI infrastructure costs. Understand exactly what drives your LLM spend per department or feature.

FinOps, accounting, procurement teams.