Tool calling (also known as function calling) lets LLMs interact with external systems by requesting that specific functions be executed. The HasteKit SDK provides a unified interface for defining tools and handling tool-call requests from various LLM providers.
Tools are defined as part of the responses.Request struct using the Tools field.
// Define a function tool
getWeatherTool := responses.ToolUnion{
	OfFunction: &responses.FunctionTool{
		Name:        "get_current_weather",
		Description: utils.Ptr("Get the current weather in a given location"),
		Parameters: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"location": map[string]any{
					"type":        "string",
					"description": "The city and state, e.g. San Francisco, CA",
				},
				"unit": map[string]any{
					"type": "string",
					"enum": []string{"celsius", "fahrenheit"},
				},
			},
			"required": []string{"location"},
		},
	},
}
Include the defined tools in your request to the model.
resp, err := client.NewResponses(ctx, &responses.Request{
	Model: "OpenAI/gpt-4.1-mini",
	Input: responses.InputUnion{
		OfString: utils.Ptr("What's the weather like in Paris?"),
	},
	Tools: []responses.ToolUnion{getWeatherTool},
})
if err != nil {
	panic(err)
}
Iterate over resp.Output to find any tool calls the model requested.
for _, output := range resp.Output {
	// Check if the output item is a function call
	if output.OfFunctionCall != nil {
		fnCall := output.OfFunctionCall
		fmt.Printf("Model requested tool: %s\n", fnCall.Name)
		fmt.Printf("Arguments: %s\n", fnCall.Arguments)
		// Execute your local logic here...
	}
}
The following example demonstrates how to detect and process a tool call request from a streaming response.
import (
	"fmt"

	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
)

// ... request initialization with Tools ...

stream, err := client.NewStreamingResponses(ctx, request)
if err != nil {
	panic(err)
}

for chunk := range stream {
	// Detect when an output item is completed
	if chunk.ChunkType() == "response.output_item.done" {
		item := chunk.OfOutputItemDone.Item
		// Check if the completed item is a function call
		if item.Type == "function_call" {
			fmt.Printf("Model requested tool: %s\n", *item.Name)
			fmt.Printf("Arguments: %s\n", *item.Arguments)
			// Execute your local logic here...
		}
	}
}
After executing the requested tool, you can send the results back to the model to continue the conversation. Use the OfFunctionCallOutput message type in the InputMessageList.
resp, err := client.NewResponses(ctx, &responses.Request{
	Model: "OpenAI/gpt-4.1-mini",
	Input: responses.InputUnion{
		OfInputMessageList: responses.InputMessageList{
			// ... original user message ...
			// ... original tool call request from model ...
			{
				OfFunctionCallOutput: &responses.FunctionCallOutputMessage{
					CallID: "call_123", // Must match the CallID of the model's tool call
					Output: responses.FunctionCallOutputContentUnion{
						OfString: utils.Ptr(`{"temperature": 22, "condition": "Sunny"}`),
					},
				},
			},
		},
	},
})
if err != nil {
	panic(err)
}