Make a call to the model provider to generate a response:
resp, err := client.NewResponses(ctx, &responses.Request{
    Model:        "OpenAI/gpt-4.1-mini",
    Instructions: utils.Ptr("You are a helpful assistant."),
    Input: responses.InputUnion{
        OfString: utils.Ptr("What is the capital of France?"),
    },
})
if err != nil {
    log.Fatal(err)
}
Finally, extract the generated text from the response:
for _, output := range resp.Output {
    if output.OfOutputMessage != nil {
        for _, content := range output.OfOutputMessage.Content {
            if content.OfOutputText != nil {
                fmt.Println(content.OfOutputText.Text)
            }
        }
    }
}
Responses API
Learn more about making LLM calls with the Responses API.
Create a simple agent with system instructions and an LLM:
agent := client.NewAgent(&hastekit.AgentOptions{
    Name:        "Assistant",
    Instruction: client.Prompt("You are a helpful assistant."),
    LLM: client.NewLLM(hastekit.LLMOptions{
        Provider: llm.ProviderNameOpenAI,
        Model:    "gpt-4o-mini",
    }),
})
Execute the agent with user messages. Execute() returns an *AgentHandle; the simplest way to get the aggregated AgentOutput is handle.Result(), which drains the chunk stream internally. To render chunks live (e.g. forward to a UI), range over handle.Chunks yourself and then call handle.Wait().
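A minimal sketch of both patterns, assuming Execute takes a context plus the user message and that Result returns the aggregated output along with any error (check your SDK version for the exact signatures):

handle := agent.Execute(ctx, "What is the capital of France?")

// Simplest path: block until the run finishes and collect the aggregated output.
out, err := handle.Result()
if err != nil {
    log.Fatal(err)
}

// Alternatively, render chunks as they arrive, then wait for completion.
// The chunk field name below is illustrative only.
// for chunk := range handle.Chunks {
//     fmt.Print(chunk.Text)
// }
// if err := handle.Wait(); err != nil {
//     log.Fatal(err)
// }

Then extract the generated text from the aggregated output: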
for _, output := range out.Output {
    if output.OfOutputMessage != nil {
        for _, content := range output.OfOutputMessage.Content {
            if content.OfOutputText != nil {
                fmt.Println(content.OfOutputText.Text)
            }
        }
    }
}
Agents
Learn more about building agents with the Agent SDK.