• Your API key never leaves your browser. Memory only — gone when you close the tab.
  • Direct browser → LLM calls. No TokenEyes server in the API path.
  • Nothing logged or stored on our end.
  • Use a free restricted key where available — providers have trial/free options.
  • LLM providers may log requests per their own policies.
  • Fun project disclaimer: model responses are generated by third-party AI systems. This interface only sends your prompt/image and displays returned output.
  • No guarantees: outputs can be wrong, biased, or unexpected. Responsibility for model behavior belongs to the AI provider.
Access mode
Vision provider
Vision model
Perspective style
API Key
Token split
Captured
Analyzing…
Read tag
Analyzing…
TOKENS
All models breakdown
Model | Input | Output | Reasoning | Total | ~Req

Split adjustable in settings. Defaults: 30% input / 20% reasoning / 50% output. Real usage varies — chat skews input-heavy; agentic/code work skews output & reasoning.
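The default split above can be sketched as a simple proportional estimate. This is a minimal illustration assuming the stated defaults (30% input / 20% reasoning / 50% output); the function and type names are hypothetical, not the app's actual code.

```typescript
// Hypothetical sketch of the default token-split estimate.
// Ratios are assumptions taken from the stated defaults above;
// real usage varies by workload (chat vs. agentic/code work).
type Split = { input: number; reasoning: number; output: number };

const DEFAULT_SPLIT: Split = { input: 0.3, reasoning: 0.2, output: 0.5 };

function splitTokens(total: number, split: Split = DEFAULT_SPLIT): Split {
  return {
    input: Math.round(total * split.input),
    reasoning: Math.round(total * split.reasoning),
    output: Math.round(total * split.output),
  };
}

console.log(splitTokens(1000)); // { input: 300, reasoning: 200, output: 500 }
```

An input-heavy chat workload might instead pass something like `{ input: 0.6, reasoning: 0.1, output: 0.3 }`.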
