LLM Token Counter
🔒 100% local — nothing leaves your browser
// how token counting works
Tokens are the basic units LLMs operate on. A token is roughly 4 characters in English, or about ¾ of a word. Common words like "the" or "cat" are one token each. Longer or rare words may be split into multiple tokens.
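The two rules of thumb above (roughly 4 characters per token, roughly ¾ of a word per token) can be combined into a quick estimator. A sketch, assuming nothing about any specific model's vocabulary, this is a heuristic, not a tokenizer:

```javascript
// Rough token estimate from two rules of thumb:
// ~4 characters per token, and ~3/4 of a word per token.
// Heuristic only; real counts depend on the model's vocabulary.
function estimateTokens(text) {
  if (!text) return 0;
  const byChars = text.length / 4;                 // ~4 chars per token
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const byWords = (words * 4) / 3;                 // 1 word ~ 4/3 tokens
  return Math.round((byChars + byWords) / 2);      // average the two
}
```

Averaging the two estimates smooths out texts with unusually long or short words.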
This tool uses a BPE approximation — it's accurate to within ~5% for English text. For exact counts, use the model provider's tokenizer API (tiktoken for OpenAI, claude-tokenizer for Anthropic).
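To see how BPE splits a longer word into subwords, here is a toy sketch of the merge loop. The merge list is invented for illustration (real models ship vocabularies with tens of thousands of learned merges), and merges are applied in priority order, highest first:

```javascript
// Toy BPE tokenizer sketch. The merge table is hypothetical,
// chosen only to show how "tokenizer" splits into subwords.
function bpeTokenize(word, merges) {
  let parts = word.split("");                  // start from single characters
  let changed = true;
  while (changed) {
    changed = false;
    for (const [a, b] of merges) {             // merges in priority order
      for (let i = 0; i < parts.length - 1; i++) {
        if (parts[i] === a && parts[i + 1] === b) {
          parts.splice(i, 2, a + b);           // merge the adjacent pair
          changed = true;
        }
      }
      if (changed) break;                      // restart from top priority
    }
  }
  return parts;
}

// Invented merge list for demonstration:
const merges = [
  ["t", "o"], ["to", "k"], ["tok", "e"], ["toke", "n"],
  ["i", "z"], ["iz", "e"], ["ize", "r"],
];
```

With this toy table, `bpeTokenize("tokenizer", merges)` yields `["token", "izer"]`, while a word with no matching merges stays split into characters, mirroring how rare words cost more tokens.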
Context window = the maximum tokens a model can process in one call (input + output combined). If your prompt uses 80%+ of the window, you'll have very little space left for the response.
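The budget arithmetic above is simple enough to sketch directly. The 8192-token window in the usage example is a stand-in value, not any particular model's limit:

```javascript
// Prompt-budget check: how many tokens remain for the response,
// and whether the prompt has crossed the 80% warning threshold.
function remainingBudget(promptTokens, contextWindow) {
  return {
    outputTokens: Math.max(contextWindow - promptTokens, 0),
    nearLimit: promptTokens / contextWindow >= 0.8,
  };
}
```

For example, a 6800-token prompt in an 8192-token window leaves only 1392 tokens for the response and trips the 80% warning.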