A simple agent is the most basic type of agent in the HasteKit SDK. It executes in-process without durability, making it perfect for stateless interactions, testing, and simple use cases that don’t require crash recovery or long-running workflows.
Overview
Simple agents provide:
- System instructions: Define the agent’s behavior and personality
- LLM integration: Use any supported LLM provider (OpenAI, Anthropic, Gemini, etc.)
- Tool support: Optional tools for function calling
- Conversation history: Optional memory across interactions
- Streaming responses: A live channel of response chunks delivered through the run handle
- Stop signal: Cancel an in-flight run cleanly via handle.Stop
Unlike durable agents, simple agents execute in-process and don’t persist state between runs across restarts. They’re ideal for stateless applications, quick prototypes, and scenarios where you don’t need crash recovery.
Creating a Simple Agent
To create a simple agent, use client.NewAgent() with AgentOptions:
```go
import (
	"context"
	"fmt"
	"log"

	hastekit "github.com/hastekit/hastekit-sdk-go"
	"github.com/hastekit/hastekit-sdk-go/pkg/agents"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
	"github.com/hastekit/hastekit-sdk-go/pkg/utils"
)

// Initialize the SDK client
client, err := hastekit.New(&hastekit.ClientOptions{
	ProviderConfigs: []gateway.ProviderConfig{
		{
			ProviderName:  llm.ProviderNameOpenAI,
			BaseURL:       "",
			CustomHeaders: nil,
			ApiKeys: []*gateway.APIKeyConfig{
				{
					Name:   "Key 1",
					APIKey: "",
				},
			},
		},
	},
})
if err != nil {
	log.Fatal(err)
}

// Create the agent
agent := client.NewAgent(&hastekit.AgentOptions{
	Name:        "Hello world agent",
	Instruction: client.Prompt("You are a helpful assistant. You greet the user with a light joke."),
	LLM: client.NewLLM(hastekit.LLMOptions{
		Provider: llm.ProviderNameOpenAI,
		Model:    "gpt-4o-mini",
	}),
	Parameters: responses.Parameters{
		Temperature: utils.Ptr(0.2),
	},
})
```
AgentOptions Fields
| Field | Type | Description |
|---|---|---|
| Name | string | A unique identifier for the agent |
| Instruction | core.SystemPromptProvider | System prompt defining agent behavior (use client.Prompt() for simple strings) |
| LLM | llm.Provider | The LLM provider instance (created via client.NewLLM()) |
| Parameters | responses.Parameters | Optional LLM parameters (temperature, max tokens, etc.) |
| Tools | []core.Tool | Optional array of tools the agent can use |
| History | *history.CommonConversationManager | Optional conversation history manager |
| Output | map[string]any | Optional JSON schema for structured output |
| McpServers | []*mcpclient.MCPClient | Optional MCP server clients |
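The Output field takes a plain map[string]any holding a JSON-Schema-style description of the structured output. A minimal sketch of building one (the greeting/joke field names here are illustrative, not part of the SDK):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildSchema returns the kind of value AgentOptions.Output expects:
// a plain map[string]any describing a JSON object with two string fields.
func buildSchema() map[string]any {
	return map[string]any{
		"type": "object",
		"properties": map[string]any{
			"greeting": map[string]any{"type": "string"},
			"joke":     map[string]any{"type": "string"},
		},
		"required":             []string{"greeting", "joke"},
		"additionalProperties": false,
	}
}

func main() {
	// Marshal just to show the schema the agent would receive.
	b, err := json.Marshal(buildSchema())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

You would pass the returned map directly as the Output field of AgentOptions.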
Executing an Agent
Execute() is non-blocking: it generates a stream ID, subscribes to the broker, and returns an *AgentHandle. There are two valid ways to consume the handle:
- handle.Result() drains the chunk channel internally and returns the aggregated AgentOutput. Use this when you only care about the final output.
- for chunk := range handle.Chunks followed by handle.Wait() observes chunks as they arrive (e.g. to forward to a UI or SSE stream), then collects the aggregated output.
Call handle.Stop(ctx) at any point to ask the agent to stop at the next iteration boundary.
```go
handle, err := agent.Execute(context.Background(), &agents.AgentInput{
	Messages: []responses.InputMessageUnion{
		responses.UserMessage("Hello!"),
	},
})
if err != nil {
	log.Fatal(err)
}

// One-shot: drain + aggregate.
out, err := handle.Result()
if err != nil {
	log.Fatal(err)
}
fmt.Println(out.Output[0].OfOutputMessage.Content[0].OfOutputText.Text)
```
Or if you want to consume chunks live:
```go
for chunk := range handle.Chunks {
	// forward chunk to your UI / SSE / log
}
out, err := handle.Wait()
```
AgentInput Fields
| Field | Type | Description |
|---|---|---|
| Messages | []responses.InputMessageUnion | Array of input messages (use responses.UserMessage() helper) |
| Namespace | string | Optional namespace for conversation isolation |
| PreviousMessageID | string | Optional ID of previous message for conversation continuity |
| RunContext | map[string]any | Optional context data for template variable resolution |
| StreamID | string | Optional broker channel ID. Execute() generates one if empty; supply your own to resume an existing stream or coordinate a stop signal. |
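RunContext supplies values for template variables in the agent's instruction. As a rough, SDK-agnostic sketch of the idea (Go's text/template is used as a stand-in here; the SDK's actual template syntax and resolver may differ):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderInstruction illustrates RunContext conceptually: values from the
// map fill template variables in the prompt before the LLM sees it.
func renderInstruction(prompt string, runContext map[string]any) (string, error) {
	t, err := template.New("instruction").Parse(prompt)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, runContext); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderInstruction(
		"You are a helpful assistant for {{.UserName}}.",
		map[string]any{"UserName": "Ada"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // "You are a helpful assistant for Ada."
}
```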
AgentHandle
Execute() returns a handle:
```go
type AgentHandle struct {
	StreamID string                          // Broker channel id for this run
	Chunks   <-chan *responses.ResponseChunk // Live stream of chunks; closed on completion
}

func (h *AgentHandle) Stop(ctx context.Context) error
func (h *AgentHandle) Wait() (*AgentOutput, error)
func (h *AgentHandle) Result() (*AgentOutput, error)
```
- Stop records a stop request on the broker; the agent loop polls it at iteration boundaries and transitions to completed.
- Wait blocks until the run finishes and returns the aggregated output. Safe to call only after Chunks has been drained.
- Result drains Chunks internally and returns the aggregated output, equivalent to for range Chunks {} followed by Wait().
Calling Wait without draining Chunks will deadlock once the broker’s per-subscriber buffer fills, because the agent’s publisher back-pressures. Use Result if you don’t intend to consume chunks yourself.
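The drain-then-wait contract can be illustrated without the SDK at all. In the sketch below, a plain buffered channel stands in for handle.Chunks (the fakeHandle type is not the real AgentHandle): the producer back-pressures when the buffer is full, so the consumer must drain before waiting.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// fakeHandle stands in for the SDK's AgentHandle: a bounded chunk
// channel plus an aggregate that is ready once the producer finishes.
type fakeHandle struct {
	Chunks chan string
	wg     sync.WaitGroup
	final  string
}

// start emits chunks on a channel whose buffer is smaller than the
// chunk count, so an undrained consumer would block the producer.
func start(chunks []string, buffer int) *fakeHandle {
	h := &fakeHandle{Chunks: make(chan string, buffer)}
	h.wg.Add(1)
	go func() {
		defer h.wg.Done()
		var b strings.Builder
		for _, c := range chunks {
			h.Chunks <- c // back-pressures when the buffer is full
			b.WriteString(c)
		}
		close(h.Chunks)
		h.final = b.String()
	}()
	return h
}

// Wait blocks until the producer is done; safe only after draining Chunks.
func (h *fakeHandle) Wait() string {
	h.wg.Wait()
	return h.final
}

// drainThenWait is the safe consumption order: range over Chunks
// until it closes, then call Wait for the aggregate.
func drainThenWait(chunks []string, buffer int) (streamed, final string) {
	h := start(chunks, buffer)
	var b strings.Builder
	for c := range h.Chunks {
		b.WriteString(c)
	}
	return b.String(), h.Wait()
}

func main() {
	streamed, final := drainThenWait([]string{"Hel", "lo", "!"}, 1)
	fmt.Println(streamed, final) // "Hello! Hello!"
}
```

Calling h.Wait() before the range loop in drainThenWait would deadlock for the same reason the real handle does: the producer blocks on the full buffer and never finishes.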
AgentOutput Structure
handle.Wait() returns the aggregated AgentOutput:
```go
type AgentOutput struct {
	RunID            string                          // Unique run identifier
	Status           agentstate.RunStatus            // Execution status
	Output           []responses.InputMessageUnion   // Agent's response messages
	PendingApprovals []responses.FunctionCallMessage // Tool calls requiring approval
}
```
Complete Example
Here’s a complete working example:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	hastekit "github.com/hastekit/hastekit-sdk-go"
	"github.com/hastekit/hastekit-sdk-go/pkg/agents"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
	"github.com/hastekit/hastekit-sdk-go/pkg/utils"
)

func main() {
	// Initialize the SDK client
	client, err := hastekit.New(&hastekit.ClientOptions{
		ProviderConfigs: []gateway.ProviderConfig{
			{
				ProviderName: llm.ProviderNameOpenAI,
				ApiKeys: []*gateway.APIKeyConfig{
					{Name: "default", APIKey: os.Getenv("OPENAI_API_KEY")},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create the agent
	agent := client.NewAgent(&hastekit.AgentOptions{
		Name:        "Hello world agent",
		Instruction: client.Prompt("You are a helpful assistant. You greet the user with a light joke."),
		LLM: client.NewLLM(hastekit.LLMOptions{
			Provider: llm.ProviderNameOpenAI,
			Model:    "gpt-4o-mini",
		}),
		Parameters: responses.Parameters{
			Temperature: utils.Ptr(0.2),
		},
	})

	// Execute the agent
	handle, err := agent.Execute(context.Background(), &agents.AgentInput{
		Messages: []responses.InputMessageUnion{
			responses.UserMessage("Hello!"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	out, err := handle.Result()
	if err != nil {
		log.Fatal(err)
	}

	// Print the response text
	fmt.Println(out.Output[0].OfOutputMessage.Content[0].OfOutputText.Text)
}
```
Streaming Responses
Streaming is built into the handle — every run delivers chunks on handle.Chunks as they arrive:
```go
handle, err := agent.Execute(context.Background(), &agents.AgentInput{
	Messages: []responses.InputMessageUnion{
		responses.UserMessage("Tell me a story"),
	},
})
if err != nil {
	log.Fatal(err)
}

for chunk := range handle.Chunks {
	switch chunk.ChunkType() {
	case "response.output_text.delta":
		if chunk.OfOutputTextDelta != nil {
			fmt.Print(chunk.OfOutputTextDelta.Delta)
		}
	case "response.output_text.done":
		if chunk.OfOutputTextDone != nil && chunk.OfOutputTextDone.Text != nil {
			fmt.Printf("\n\nComplete text: %s\n", *chunk.OfOutputTextDone.Text)
		}
	}
}

out, _ := handle.Wait()
_ = out
```
Stopping an In-Flight Run
handle.Stop(ctx) records a stop request on the broker. The agent’s loop checks this at every iteration boundary (between LLM calls / tool executions) and transitions cleanly to completed. The chunk stream stays open while the agent winds down — you’ll still see the final run.completed chunk before the channel closes.
```go
go func() {
	time.Sleep(2 * time.Second)
	_ = handle.Stop(context.Background())
}()

for chunk := range handle.Chunks {
	_ = chunk
}
```
Helper Functions
The SDK provides convenient helper functions for creating messages:
- responses.UserMessage(msg string): Creates a user message from a string
- responses.SystemMessage(msg string): Creates a system message
- responses.AssistantMessage(msg string): Creates an assistant message
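These helpers all follow the same pattern: each pairs a fixed role with the given text. A stand-in sketch of the idea (the SDK's real message types are richer unions, so the Message struct below is purely illustrative):

```go
package main

import "fmt"

// Message is a stand-in for the SDK's message union; the constructors
// below mirror the shape of responses.UserMessage and friends.
type Message struct {
	Role    string
	Content string
}

func UserMessage(msg string) Message      { return Message{Role: "user", Content: msg} }
func SystemMessage(msg string) Message    { return Message{Role: "system", Content: msg} }
func AssistantMessage(msg string) Message { return Message{Role: "assistant", Content: msg} }

func main() {
	// A short multi-turn input built with the helpers.
	msgs := []Message{
		SystemMessage("You are a helpful assistant."),
		UserMessage("Hello!"),
		AssistantMessage("Hi! How can I help?"),
		UserMessage("Tell me a joke."),
	}
	for _, m := range msgs {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```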
Next Steps