Monitor and debug LLM requests with distributed tracing.
The Tracing page provides comprehensive observability for every LLM request flowing through the HasteKit LLM Gateway. Use it to monitor request performance, debug issues, and analyze usage patterns across providers, models, and virtual keys. The Traces page displays a summary dashboard with key metrics, search and filtering capabilities, and a detailed table of all LLM request traces.
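Conceptually, each entry in the traces table is a structured record keyed by a unique trace ID. The sketch below models such a record in Python to make the shape concrete; the field names and defaults are illustrative assumptions, not HasteKit's actual trace schema.

```python
import time
import uuid
from dataclasses import dataclass, field


@dataclass
class LLMTrace:
    """Illustrative per-request trace record (assumed fields, not the real schema)."""
    # Unique identifier for the request, generated per trace.
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    # Which LLM provider handled the request, e.g. "openai", "anthropic", "gemini".
    provider: str = "openai"
    # Unix timestamp recorded when the request started.
    started_at: float = field(default_factory=time.time)


# Example: a trace for a request routed to Anthropic.
trace = LLMTrace(provider="anthropic")
print(trace.trace_id, trace.provider)
```

Real deployments would emit this data as OpenTelemetry spans rather than plain dataclasses, but the record-per-request shape is the same.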
Tracing in the HasteKit LLM Gateway uses OpenTelemetry to collect and display detailed information about every LLM request. Each request generates a trace that includes:
Trace ID: Unique identifier for the request
Provider: Which LLM provider was used (OpenAI, Anthropic, Gemini, etc.)