HasteKit LLM Gateway supports Google Gemini's GenerateContent API for text generation, images, tool calling, and reasoning. You can use any Gemini SDK simply by pointing it at HasteKit's gateway endpoint. This lets you keep your existing Gemini code while gaining HasteKit features such as virtual keys, provider management, and observability.
Overview
The HasteKit LLM Gateway provides an endpoint at /api/gateway/gemini/v1beta/models/{model} that implements Google Gemini's GenerateContent API specification. This means you can use any Gemini SDK (Python, JavaScript, Go, etc.) without modifying your code: just change the base URL.
Usage Examples
main.py
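The original example file is not reproduced here. As a minimal sketch using only the Python standard library (the gateway host `https://your-hastekit-host` and the key value are placeholders, not real values), a GenerateContent request aimed at the gateway can be built like this:

```python
import json
import urllib.request

# Assumed gateway host; the path below is the documented gateway endpoint.
GATEWAY_BASE = "https://your-hastekit-host/api/gateway/gemini/v1beta/models"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Construct a GenerateContent POST request against the HasteKit gateway."""
    url = f"{GATEWAY_BASE}/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical model name and virtual key, for illustration only.
req = build_request("gemini-2.0-flash", "Hello!", "sk-hk-example")
# urllib.request.urlopen(req) would send the request; omitted here.
```

The same effect can be achieved with an official Gemini SDK by overriding its base URL to point at the gateway endpoint.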
Streaming Support
The gateway supports streaming responses. Use `streamGenerateContent` as the action in the model name:
streaming.py
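The original streaming example is not reproduced here. As a small sketch (again assuming a placeholder gateway host), the only change from the non-streaming case is the action in the URL; `alt=sse` follows the Gemini API's server-sent-events convention:

```python
import urllib.parse

# Assumed gateway host; the path is the documented gateway endpoint.
GATEWAY_BASE = "https://your-hastekit-host/api/gateway/gemini/v1beta/models"

def stream_url(model: str, api_key: str) -> str:
    """Build the streaming URL: swap the action to streamGenerateContent."""
    query = urllib.parse.urlencode({"alt": "sse", "key": api_key})
    return f"{GATEWAY_BASE}/{model}:streamGenerateContent?{query}"

# Hypothetical model name and key, for illustration only.
url = stream_url("gemini-2.0-flash", "sk-hk-example")
```

Posting the same request body to this URL yields incremental response chunks instead of a single completion.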
Supported Features
The gateway currently supports the GenerateContent API with:
- ✅ Text generation - Standard text completions
- ✅ Images - Image input and generation
- ✅ Tool calling - Function calling capabilities
- ✅ Reasoning - Advanced reasoning models
Authentication
The gateway accepts authentication via the `key` query parameter:
- Virtual Key: Use a virtual key (starts with `sk-hk-`) for managed access control
- Direct API Key: Use your Google Gemini API key directly
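Either kind of key is passed the same way. A tiny sketch of appending the documented `key` query parameter to an endpoint URL (the helper name and key value are hypothetical):

```python
from urllib.parse import urlencode

def with_key(endpoint: str, key: str) -> str:
    """Append the `key` query parameter accepted by the gateway.

    Works identically for virtual keys (sk-hk-...) and direct Gemini API keys.
    """
    return f"{endpoint}?{urlencode({'key': key})}"

# Hypothetical host, model, and virtual key, for illustration only.
url = with_key(
    "https://your-hastekit-host/api/gateway/gemini/v1beta/models/gemini-2.0-flash:generateContent",
    "sk-hk-abc123",
)
```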