HasteKit SDK supports generating text content with various LLM providers like OpenAI, Anthropic, and Gemini.

Invoke the model

You can then invoke the model with parameters such as the model identifier, system instructions, temperature, top-k, and the user message.
resp, err := client.NewResponses(ctx, &responses.Request{
    Model:        "OpenAI/gpt-4.1-mini",
    Instructions: utils.Ptr("You are a helpful assistant."),
    Input: responses.InputUnion{
        OfString: utils.Ptr("What is the capital of France?"),
    },
})
if err != nil {
    // handle the error
    panic(err)
}

Response

The model returns an array of outputs. Each output can be one of several types, such as text, images, or reasoning. You can access the text content like this:
// Access the text output
for _, output := range resp.Output {
    if output.OfOutputMessage != nil {
        for _, content := range output.OfOutputMessage.Content {
            if content.OfOutputText != nil {
                fmt.Println(content.OfOutputText.Text)
            }
        }
    }
}
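Often you want the response text as a single string rather than printing each part as you go. The sketch below wraps the same extraction loop in a helper using a strings.Builder. Note that OutputText, ContentUnion, OutputMessage, and OutputUnion here are simplified stand-in types for illustration, not the SDK's actual definitions:

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins for the SDK's union types (illustrative only).
type OutputText struct{ Text string }
type ContentUnion struct{ OfOutputText *OutputText }
type OutputMessage struct{ Content []ContentUnion }
type OutputUnion struct{ OfOutputMessage *OutputMessage }

// collectText walks the output array and joins every text part.
func collectText(outputs []OutputUnion) string {
	var b strings.Builder
	for _, output := range outputs {
		if output.OfOutputMessage == nil {
			continue
		}
		for _, content := range output.OfOutputMessage.Content {
			if content.OfOutputText != nil {
				b.WriteString(content.OfOutputText.Text)
			}
		}
	}
	return b.String()
}

func main() {
	outputs := []OutputUnion{
		{OfOutputMessage: &OutputMessage{Content: []ContentUnion{
			{OfOutputText: &OutputText{Text: "Paris"}},
		}}},
	}
	fmt.Println(collectText(outputs)) // Paris
}
```

The nil checks matter: any field of a union that was not populated by the model is nil, so always guard before dereferencing.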

Streaming Responses

For real-time applications, the SDK supports streaming responses. The streaming call returns a channel that yields chunks of the response as the LLM generates them.
import (
    "context"
    "fmt"
    "github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
    "github.com/hastekit/hastekit-sdk-go/pkg/utils"
)

func main() {
    // ... client and model initialization ...

    stream, err := client.NewStreamingResponses(context.Background(), &responses.Request{
        Model: "OpenAI/gpt-4.1-mini",
        Input: responses.InputUnion{
            OfString: utils.Ptr("Write a poem about coding."),
        },
    })
    if err != nil {
        panic(err)
    }

    for chunk := range stream {
        // Handle different types of chunks
        if chunk.OfOutputTextDelta != nil {
            fmt.Print(chunk.OfOutputTextDelta.Delta)
        }
    }
}
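If you also need the complete text once the stream finishes, accumulate the deltas in a strings.Builder while (or instead of) printing them. The chunk types below are simplified stand-ins for illustration, not the SDK's actual definitions, and the channel is filled manually to simulate a stream:

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative stand-ins for the SDK's streaming chunk types.
type OutputTextDelta struct{ Delta string }
type StreamChunk struct{ OfOutputTextDelta *OutputTextDelta }

// accumulate drains the stream channel and returns the full text.
func accumulate(stream <-chan StreamChunk) string {
	var b strings.Builder
	for chunk := range stream {
		if chunk.OfOutputTextDelta != nil {
			b.WriteString(chunk.OfOutputTextDelta.Delta)
		}
	}
	return b.String()
}

func main() {
	// Simulate a stream of text deltas.
	ch := make(chan StreamChunk, 3)
	for _, d := range []string{"Hello", ", ", "world"} {
		ch <- StreamChunk{OfOutputTextDelta: &OutputTextDelta{Delta: d}}
	}
	close(ch)
	fmt.Println(accumulate(ch)) // Hello, world
}
```

Because range over a channel exits only when the sender closes it, the loop ends cleanly when the SDK closes the stream.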

Multi-turn Conversations

Instead of a single string, you can pass a list of messages for context-aware conversations.
resp, err := client.NewResponses(ctx, &responses.Request{
    Model: "OpenAI/gpt-4.1-mini",
    Input: responses.InputUnion{
        OfInputMessageList: responses.InputMessageList{
            {
                OfEasyInput: &responses.EasyMessage{
                    Role:    "user",
                    Content: responses.EasyInputContentUnion{OfString: utils.Ptr("Hi!")},
                },
            },
            {
                OfEasyInput: &responses.EasyMessage{
                    Role:    "assistant",
                    Content: responses.EasyInputContentUnion{OfString: utils.Ptr("Hello! How can I help you?")},
                },
            },
            {
                OfEasyInput: &responses.EasyMessage{
                    Role:    "user",
                    Content: responses.EasyInputContentUnion{OfString: utils.Ptr("Tell me a joke.")},
                },
            },
        },
    },
})
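In practice you keep the history in a slice and grow it as the conversation proceeds: append the user's message, send the request, then append the assistant's reply before the next turn. The sketch below shows that bookkeeping with a simplified stand-in EasyMessage type (the real SDK wraps each entry in a union, as shown above):

```go
package main

import "fmt"

// Simplified stand-in for the SDK's message type (illustrative only).
type EasyMessage struct {
	Role    string
	Content string
}

// appendTurn adds one message to the running conversation history.
func appendTurn(history []EasyMessage, role, content string) []EasyMessage {
	return append(history, EasyMessage{Role: role, Content: content})
}

func main() {
	var history []EasyMessage
	history = appendTurn(history, "user", "Hi!")
	// After each request, append the assistant's text (extracted as in
	// the Response section) so the next turn has full context.
	history = appendTurn(history, "assistant", "Hello! How can I help you?")
	history = appendTurn(history, "user", "Tell me a joke.")
	for _, m := range history {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```

With the real SDK you would convert this history into an OfInputMessageList on each request, as in the example above, rather than printing it.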