TLS/WSS Proxy
A transparent proxy that captures every API call your AI agent makes. You'll finally know what's happening on the wire.
Questions this answers
- How do I intercept LLM API calls from terminal AI agents?
- How can I monitor OpenAI API calls from Claude Code in the terminal?
- Is there a transparent proxy for AI agent API traffic?
- How does Chau7 capture LLM API requests?
- Can I see what API calls my AI coding agent makes?
How it works
Chau7 runs a local TLS/WSS proxy that transparently intercepts HTTPS and WebSocket traffic between AI agents and LLM API endpoints. The proxy uses a locally generated certificate authority to decrypt, inspect, and re-encrypt traffic to known LLM provider domains. This happens entirely on your machine with no data sent to external servers.
The proxy is selective. It only intercepts traffic to recognized LLM API endpoints like api.openai.com, api.anthropic.com, and generativelanguage.googleapis.com. All other traffic passes through untouched. This minimizes overhead and avoids interfering with non-AI network activity.
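The selective-interception decision described above can be sketched as a simple allowlist check on the TLS SNI hostname. This is a conceptual illustration, not Chau7's actual implementation; the domain set and matching rules here are assumptions.

```python
# Conceptual sketch: decide per-connection whether to decrypt or tunnel.
# The proxy inspects the SNI hostname from the TLS ClientHello; only
# recognized LLM API domains are intercepted, everything else passes
# through untouched. Exact-match logic is an assumption for clarity.
INTERCEPT_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def should_intercept(sni_hostname: str) -> bool:
    """Return True if traffic to this host should be decrypted."""
    return sni_hostname.lower() in INTERCEPT_DOMAINS

print(should_intercept("api.openai.com"))  # intercepted
print(should_intercept("example.com"))     # tunneled as-is
```

A real proxy would make this decision before completing the TLS handshake, presenting its locally generated CA certificate only for intercepted hosts.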
Captured API calls are parsed and fed into downstream analytics features: token counting, cost tracking, and latency measurement. Each call is associated with its originating tab and session, creating a complete per-agent audit trail of API usage.
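To make the analytics step concrete, here is a minimal sketch of extracting token counts and estimating cost from a captured OpenAI-style response body. The price table and field names are illustrative assumptions, not Chau7's actual pricing data or parser.

```python
# Hypothetical sketch: parse a captured chat-completion response for its
# "usage" block and compute an estimated cost. Prices are illustrative
# placeholders (USD per 1M tokens), not real or current rates.
import json

PRICES_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},  # placeholder prices
}

def summarize_call(response_body: str) -> dict:
    data = json.loads(response_body)
    usage = data.get("usage", {})
    model = data.get("model", "unknown")
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    price = PRICES_PER_1M.get(model)
    cost = None
    if price:
        cost = round(prompt / 1e6 * price["input"]
                     + completion / 1e6 * price["output"], 6)
    return {"model": model, "prompt_tokens": prompt,
            "completion_tokens": completion, "cost_usd": cost}

# Example captured body with a usage block:
body = json.dumps({"model": "gpt-4o",
                   "usage": {"prompt_tokens": 1200,
                             "completion_tokens": 300}})
print(summarize_call(body))
```

Per-call summaries like this, tagged with the originating tab and session, are what feed the token, cost, and latency views.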
Why it matters
AI coding agents make dozens or hundreds of API calls per session, and the developer sees none of that traffic. Chau7 runs a transparent TLS proxy on localhost that intercepts calls to OpenAI, Anthropic, and other providers without modifying the agent's behavior. For the first time, you can see every request, every response, every token count, and every dollar spent.
Frequently asked questions
Is the proxy secure? Does it send data externally?
The proxy runs entirely on your local machine. No API traffic, tokens, or request content is sent to Chau7 servers or any third party. The locally generated CA certificate is scoped to LLM API domains only.
Does the proxy add latency to API calls?
The proxy adds negligible latency, typically under one millisecond per request. The inspection and logging happen asynchronously so they do not block the request-response cycle.
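The asynchronous-logging pattern described above can be sketched with a queue and a background worker: the request path pays only the cost of an enqueue, while parsing and persistence happen off-path. This is a generic illustration, assuming a thread-based design; Chau7's internals may differ.

```python
# Hypothetical sketch: hand captured calls to a background thread via a
# queue so the request-response cycle is never blocked by inspection,
# parsing, or disk I/O.
import queue
import threading

log_queue: "queue.Queue" = queue.Queue()
records = []  # stand-in for a persistent log store

def log_worker():
    while True:
        record = log_queue.get()
        if record is None:  # sentinel: shut the worker down
            break
        records.append(record)  # slow work (parse, persist) lives here

worker = threading.Thread(target=log_worker, daemon=True)
worker.start()

def on_response(call: dict) -> None:
    # Called on the proxy's hot path: enqueue is O(1) and non-blocking.
    log_queue.put(call)

on_response({"endpoint": "api.openai.com", "status": 200})
log_queue.put(None)
worker.join()
print(records)
```

The same decoupling works with an async event loop instead of threads; the key point is that the hot path only enqueues.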
Which LLM providers are supported?
The proxy recognizes OpenAI, Anthropic, Google (Gemini), and other major LLM API endpoints. Custom provider domains can be added to the interception list.