

The Sandbox Tool lets your agent run bash commands in an isolated environment. Each conversation gets its own sandbox (Docker container or Kubernetes pod) with a persistent workspace. The agent uses the execute_bash_commands tool to run shell commands and receive stdout, stderr, and exit code.

Overview

When you attach the Sandbox tool to an agent:
  • The agent can call execute_bash_commands with a code parameter (the bash command to run).
  • Each session gets a dedicated sandbox created by a sandbox manager (Docker or Kubernetes).
  • A sandbox daemon runs inside the sandbox and handles command execution over HTTP.
  • The workspace is rooted at /sandbox/workspace. Agent data (e.g. skills) can be mounted so the agent can read it via bash (e.g. cat /sandbox/skills/.../SKILL.md).

Example: Agent with Sandbox

The following example uses the HasteKit SDK to create an agent with the Sandbox tool, using the Docker-backed sandbox manager. It runs a single turn asking for the current time; the agent will use execute_bash_commands to run a command such as date.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/bytedance/sonic"
	hastekit "github.com/hastekit/hastekit-sdk-go"
	"github.com/hastekit/hastekit-sdk-go/pkg/agents"
	"github.com/hastekit/hastekit-sdk-go/pkg/agents/tools"
	"github.com/hastekit/hastekit-sdk-go/pkg/core" // provides core.Tool; adjust to your SDK layout
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm"
	"github.com/hastekit/hastekit-sdk-go/pkg/gateway/llm/responses"
	"github.com/hastekit/hastekit-sdk-go/pkg/sandbox/docker_sandbox"
)

func main() {
	client, err := hastekit.New(&hastekit.ClientOptions{
		ProviderConfigs: []gateway.ProviderConfig{
			{
				ProviderName:  llm.ProviderNameOpenAI,
				BaseURL:       "",
				CustomHeaders: nil,
				ApiKeys: []*gateway.APIKeyConfig{
					{
						Name:   "Key 1",
						APIKey: os.Getenv("OPENAI_API_KEY"),
					},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	model := client.NewLLM(hastekit.LLMOptions{
		Provider: llm.ProviderNameOpenAI,
		Model:    "gpt-4.1-mini",
	})

	history := client.NewConversationManager()
	agent := agents.NewAgent(&agents.AgentOptions{
		Name:        "hello-world-agent",
		Instruction: client.Prompt("You are a helpful assistant with access to terminal (bash)"),
		LLM:         model,
		History:     history,
		Tools: []core.Tool{
			tools.NewSandboxTool(docker_sandbox.NewManager(docker_sandbox.Config{
				AgentDataPath: "/path/to/agent-data",
			}), "hastekit-ai-sandbox:v7"),
		},
	})

	handle, err := agent.Execute(context.Background(), &agents.AgentInput{
		Messages: []responses.InputMessageUnion{
			responses.UserMessage("What is the current time?"),
		},
		Namespace:         "default",
		PreviousMessageID: "",
	})
	if err != nil {
		log.Fatal(err)
	}
	out, err := handle.Result()
	if err != nil {
		log.Fatal(err)
	}

	b, _ := sonic.Marshal(out)
	fmt.Println(string(b))
}
The full example is in examples/agents/10_agent_with_sandbox/main.go.

Key pieces

  • SDK client – hastekit.New with ProviderConfigs (e.g. OpenAI). The client is used to create an LLM, conversation history, and prompts.
  • Sandbox manager – docker_sandbox.NewManager(docker_sandbox.Config{...}) for local or Docker-based sandboxes. For production you would typically use a Kubernetes-backed manager that implements sandbox.Manager. The manager creates and tracks one sandbox per session.
  • AgentDataPath – Host path where agent data (e.g. skills) lives; it is mounted into the sandbox so the agent can read it via bash. Must exist and be writable if you use skills.
  • Sandbox image – The Docker image for the sandbox container (e.g. hastekit-ai-sandbox). It must run the HasteKit sandbox daemon (e.g. hastekit-ai-sandbox sandbox-daemon). Build from deployments/sandbox/Dockerfile or use a pre-built image.
  • Tool registration – tools.NewSandboxTool(manager, image) returns a core.Tool that exposes execute_bash_commands to the LLM. Pass it in AgentOptions.Tools along with any other tools.

Tool: execute_bash_commands

The Sandbox tool exposes a single function to the LLM:
  • Name – execute_bash_commands
  • Description – Execute bash command and get the output
  • Parameters – code (string, required): the bash command to execute
Commands are run via /bin/sh -c inside the sandbox, so normal shell syntax (pipes, redirections, etc.) is supported.

Response format

The sandbox returns a JSON object with:
  • stdout (string) – Standard output of the command
  • stderr (string) – Standard error of the command
  • exit_code (int) – Process exit code (0 for success)
  • duration_ms (int64) – Execution time in milliseconds
Timeouts are enforced by the daemon (default 60 seconds). On timeout, the response may indicate failure and the process is killed.

Summary

  • Tool name – execute_bash_commands
  • Parameter – code (string): the bash command to run
  • SDK API – tools.NewSandboxTool(manager, image)
  • Manager – docker_sandbox.NewManager(config) or a Kubernetes-backed sandbox.Manager
  • Example – examples/agents/10_agent_with_sandbox/main.go