The HasteKit SDK is a Go client library for making LLM calls through the HasteKit LLM Gateway. It abstracts away provider differences and exposes a single interface to multiple LLM providers (OpenAI, Anthropic, Gemini, xAI, and Ollama).

Prerequisites

  • Go 1.25 or later
  • A running HasteKit LLM Gateway instance
  • A virtual key configured in the gateway (see Virtual Keys)

Installation

Install the HasteKit SDK:
go get -u github.com/hastekit/hastekit-sdk-go

Setting Up the SDK

To use the SDK with the HasteKit LLM Gateway, you need to configure it with the gateway endpoint and a virtual key.
package main

import (
    "context"
    "fmt"
    "log"

    hastekit "github.com/hastekit/hastekit-sdk-go"
    "github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm"
    "github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
    "github.com/hastekit/hastekit-sdk-go/internal/utils"
)

func main() {
    // Initialize the SDK with gateway configuration
    client, err := hastekit.New(&hastekit.ClientOptions{
        ServerConfig: hastekit.ServerConfig{
            Endpoint:   "http://localhost:6060",       // Your HasteKit gateway endpoint
            VirtualKey: "sk-hk-your-virtual-key-here", // Your virtual key
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    // Make an LLM call
    resp, err := client.NewResponses(
        context.Background(),
        &responses.Request{
            Model:        "OpenAI/gpt-4.1-mini",
            Instructions: ptr("You are a helpful assistant."),
            Input: responses.InputUnion{
                OfString: ptr("What is the capital of France?"),
            },
        },
    )
    if err != nil {
        log.Fatal(err)
    }

    // Extract and print the response
    for _, output := range resp.Output {
        if output.OfOutputMessage != nil {
            for _, content := range output.OfOutputMessage.Content {
                if content.OfOutputText != nil {
                    fmt.Println(content.OfOutputText.Text)
                }
            }
        }
    }
}
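
The example uses context.Background(). In practice you will usually want a deadline so a slow provider cannot stall your program indefinitely. A minimal sketch, assuming only that NewResponses honors context cancellation like a standard Go client (add "time" to the imports):

// Cancel the request if the gateway has not responded within 30 seconds.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "OpenAI/gpt-4.1-mini",
    Input: responses.InputUnion{
        OfString: ptr("What is the capital of France?"),
    },
})
if err != nil {
    log.Fatal(err) // a timeout surfaces as context.DeadlineExceeded
}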

Configuration

ServerConfig

The ServerConfig struct contains the gateway connection settings:
  • Endpoint: The base URL of your HasteKit LLM Gateway (e.g., http://localhost:6060 or https://your-gateway.com)
  • VirtualKey: Your virtual key from the gateway (prefixed with sk-hk-)
Example:
client, err := hastekit.New(&hastekit.ClientOptions{
    ServerConfig: hastekit.ServerConfig{
        Endpoint:   "https://gateway.example.com",
        VirtualKey: "sk-hk-P7MjwUHAuLbkb16W5z2j8YD-4BSXSg9gKhDiRYEBQ",
    },
})

Getting Your Virtual Key

  1. Navigate to the Virtual Keys page in the HasteKit gateway dashboard
  2. Create a new virtual key or use an existing one
  3. Copy the secret key (it starts with sk-hk-)
  4. Use it in your SDK configuration
See the Virtual Keys documentation for more details.
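
Avoid hardcoding the secret key in source files. A small sketch that reads it from an environment variable instead; the variable name HASTEKIT_VIRTUAL_KEY is a convention chosen for this example, not something the SDK requires (add "os" to the imports):

// Read the virtual key from the environment rather than committing it.
// HASTEKIT_VIRTUAL_KEY is an example name, not an SDK requirement.
virtualKey := os.Getenv("HASTEKIT_VIRTUAL_KEY")
if virtualKey == "" {
    log.Fatal("HASTEKIT_VIRTUAL_KEY is not set")
}

client, err := hastekit.New(&hastekit.ClientOptions{
    ServerConfig: hastekit.ServerConfig{
        Endpoint:   "https://gateway.example.com",
        VirtualKey: virtualKey,
    },
})
if err != nil {
    log.Fatal(err)
}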

Making LLM Calls

Basic Example

// Make a request
resp, err := client.NewResponses(
    context.Background(),
    &responses.Request{
        Model:        "OpenAI/gpt-4.1-mini",
        Instructions: ptr("You are a helpful assistant."),
        Input: responses.InputUnion{
            OfString: ptr("Hello, how are you?"),
        },
    },
)

if err != nil {
    log.Fatal(err)
}

// Process the response
for _, output := range resp.Output {
    if output.OfOutputMessage != nil {
        for _, content := range output.OfOutputMessage.Content {
            if content.OfOutputText != nil {
                fmt.Println(content.OfOutputText.Text)
            }
        }
    }
}
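
This extraction loop appears in every example, so it is worth factoring into a helper. A sketch; the parameter type *responses.Response is an assumption about what NewResponses returns, so adjust it to the SDK's actual return type (add "strings" to the imports):

// collectText concatenates every text segment in a response's output.
// NOTE: *responses.Response is assumed here for illustration; use the
// concrete type that client.NewResponses actually returns.
func collectText(resp *responses.Response) string {
    var sb strings.Builder
    for _, output := range resp.Output {
        if output.OfOutputMessage == nil {
            continue
        }
        for _, content := range output.OfOutputMessage.Content {
            if content.OfOutputText != nil {
                sb.WriteString(content.OfOutputText.Text)
            }
        }
    }
    return sb.String()
}

Call sites then reduce to fmt.Println(collectText(resp)).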

Supported Providers

You can switch between providers by changing the Model field:
// OpenAI
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "OpenAI/gpt-4.1-mini",
})

// Anthropic
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "Anthropic/claude-3-5-sonnet-20241022",
})

// Gemini
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "Gemini/gemini-1.5-pro",
})

// xAI
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "xAI/grok-beta",
})

// Ollama
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "Ollama/llama2",
})
Note: Make sure your virtual key has access to the providers and models you want to use. Configure this in the Virtual Keys page of the gateway dashboard.
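
Because only the Model string changes between providers, comparing them is a simple loop. A sketch that sends one prompt through several providers, reusing the ptr and collectText helpers defined earlier:

models := []string{
    "OpenAI/gpt-4.1-mini",
    "Anthropic/claude-3-5-sonnet-20241022",
    "Ollama/llama2",
}
for _, m := range models {
    resp, err := client.NewResponses(ctx, &responses.Request{
        Model: m,
        Input: responses.InputUnion{
            OfString: ptr("In one sentence, what does an LLM gateway do?"),
        },
    })
    if err != nil {
        // Typically means the virtual key lacks access to this provider.
        log.Printf("%s: %v", m, err)
        continue
    }
    fmt.Printf("%s: %s\n", m, collectText(resp))
}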

Advanced Usage

The SDK supports many advanced features:
  • Streaming Responses: Get responses in real-time as they’re generated
  • Tool Calling: Use function calling with LLMs
  • Structured Output: Get responses in JSON format with schema validation
  • Reasoning: Use chain-of-thought reasoning for complex tasks
  • Image Generation: Generate images with supported providers
  • Multi-turn Conversations: Maintain conversation history
For detailed information on these features, see the Responses API documentation.
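
As a concrete taste of multi-turn usage before you reach for the SDK's own conversation primitives (covered in the Responses API documentation), you can approximate a conversation by replaying the transcript as plain text input. A rough sketch, not the SDK's native mechanism:

// Replay the running transcript on every turn. The Responses API has
// first-class conversation support; this only illustrates the idea.
transcript := ""
for _, question := range []string{"Who wrote Faust?", "When was the author born?"} {
    transcript += "User: " + question + "\n"
    resp, err := client.NewResponses(ctx, &responses.Request{
        Model:        "OpenAI/gpt-4.1-mini",
        Instructions: ptr("Answer based on the conversation so far."),
        Input: responses.InputUnion{
            OfString: ptr(transcript),
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    answer := collectText(resp)
    transcript += "Assistant: " + answer + "\n"
    fmt.Println(answer)
}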

Next Steps