# Agents

## Creating Agents

Agents are the core unit of work in Open Astra. Each agent has a model, a system prompt, a set of skills and tools, and a permission profile. You can define agents in three ways: in astra.yml, via the REST API, or directly in the database.

### Defining agents in astra.yml

The most common way to define agents is in astra.yml. The gateway loads this file at startup and seeds agents into Postgres.

```yaml
agents:
  - id: code-agent
    displayName: Code Agent
    tier: standard
    model:
      provider: openai
      modelId: gpt-4o
      maxContextTokens: 128000
      maxOutputTokens: 4096
      temperature: 0.2
    systemPromptTemplate: |
      You are a senior software engineer.
      You write clean, well-tested TypeScript code.
      Today is {{date}}.
      {{#if memory}}Context from memory:
      {{memory}}{{/if}}
    skills:
      - git_ops
      - codebase
      - test_runner
    tools:
      allow:
        - file_read
        - file_write
        - shell_exec
        - web_search
      deny:
        - file_delete
    spawn:
      enabled: false
    fileAccess:
      restricted: true
      allowedPaths:
        - ./src
        - ./tests
```

### Full agent config schema

Every field in the agent configuration:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | string | Yes | Unique identifier. Used in API calls and spawn targets. |
| `displayName` | string | No | Human-readable name shown in channel messages and logs. |
| `tier` | `free` \| `standard` \| `premium` | No | Billing tier. Controls which models and feature limits apply. |
| `model.provider` | string | Yes | Inference provider: `grok`, `groq`, `openai`, `gemini`, `claude`, `ollama`, `vllm`, `bedrock`, `mistral`, `openrouter` |
| `model.modelId` | string | Yes | Model identifier (e.g. `gpt-4o`, `claude-opus-4-6`) |
| `model.maxContextTokens` | number | No | Max tokens in the context window. Defaults to the provider's model limit. |
| `model.maxOutputTokens` | number | No | Max tokens in the response. Defaults to 4096. |
| `model.temperature` | number | No | Sampling temperature, 0–2. Defaults to 0.7. |
| `model.endpoint` | string | No | Custom inference endpoint URL. Used by the `ollama` and `vllm` providers to override the default base URL per agent. |
| `systemPromptTemplate` | string | No | Handlebars-style template. Supports `{{variable}}` and `{{#if var}}...{{/if}}`. |
| `skills` | string[] | No | Skill IDs to activate. Merges each skill's tools into the allow list and injects its prompt context. |
| `tools.allow` | string[] | No | Explicit tool allow list. If empty, inherits from activated skills. |
| `tools.deny` | string[] | No | Deny list. Always wins over the allow list. |
| `spawn.enabled` | boolean | No | Whether this agent can spawn sub-agents. Default false. |
| `spawn.allowedTargets` | string[] | No | Agent IDs this agent is permitted to spawn. |
| `spawn.maxDepth` | number | No | Max spawn nesting depth. Default 2, max 3. |
| `fileAccess.restricted` | boolean | No | If true, file tools are restricted to `allowedPaths`. |
| `fileAccess.allowedPaths` | string[] | No | Paths the agent can read/write when `restricted` is true. |
| `session.dailyResetHour` | number | No | Hour (0–23) at which to auto-clear the session each day. |
| `session.idleTimeoutMinutes` | number | No | Minutes of inactivity before the session auto-expires. |
| `session.maxContextMessages` | number | No | Max messages kept in context before the oldest are dropped. |
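To make the tool-permission rules above concrete, here is a small TypeScript sketch of the resolution described in the table: skill tools merge into the allow list, and the deny list always wins. The `Skill` shape, `resolveTools` helper, and skill definition below are illustrative only, not part of the Open Astra codebase.

```typescript
// Illustrative sketch of the tool-resolution rules described above.
// Not part of Open Astra; the Skill shape here is an assumption.
interface Skill {
  id: string;
  tools: string[];
}

function resolveTools(allow: string[], deny: string[], skills: Skill[]): string[] {
  const merged = new Set(allow);
  for (const skill of skills) {
    // Each activated skill's tools merge into the allow list
    for (const tool of skill.tools) merged.add(tool);
  }
  // The deny list always wins, even over skill-granted tools
  for (const tool of deny) merged.delete(tool);
  return [...merged];
}

// Hypothetical skill that would grant file_delete
const gitOps: Skill = { id: "git_ops", tools: ["shell_exec", "file_delete"] };
const effective = resolveTools(["file_read", "file_write"], ["file_delete"], [gitOps]);
// file_delete stays blocked even though git_ops would grant it
```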

### Creating agents via REST API

You can create agents dynamically at runtime using the REST API. This is useful for programmatic agent provisioning.

```bash
curl -X POST http://localhost:3000/agents \
  -H "Authorization: Bearer ${JWT_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "my-new-agent",
    "displayName": "My Agent",
    "model": {
      "provider": "openai",
      "modelId": "gpt-4o"
    },
    "systemPromptTemplate": "You are a helpful assistant."
  }'
```
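The same request can be issued from application code. This TypeScript sketch assumes only the `POST /agents` endpoint and payload shape shown in the curl example; the `createAgent` helper is illustrative, not an official SDK function.

```typescript
// Illustrative sketch, not an official SDK. Assumes the POST /agents
// endpoint and the payload shape from the curl example above.
const agentPayload = {
  id: "my-new-agent",
  displayName: "My Agent",
  model: { provider: "openai", modelId: "gpt-4o" },
  systemPromptTemplate: "You are a helpful assistant.",
};

async function createAgent(baseUrl: string, token: string) {
  const res = await fetch(`${baseUrl}/agents`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(agentPayload),
  });
  if (!res.ok) throw new Error(`Agent creation failed: ${res.status}`);
  return res.json();
}

// Usage (requires a running gateway and a valid JWT):
// await createAgent("http://localhost:3000", process.env.JWT_TOKEN!);
```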

### System prompt templates

System prompts support Handlebars-style interpolation. The following variables are available:

| Variable | Description |
| --- | --- |
| `{{date}}` | Current date in ISO format |
| `{{user.name}}` | Display name of the requesting user |
| `{{user.id}}` | User ID |
| `{{workspace.name}}` | Workspace display name |
| `{{memory}}` | Retrieved memory context (injected automatically) |
| `{{#if memory}}...{{/if}}` | Conditional block; renders only if `memory` is non-empty |
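To make the rendering behavior concrete, here is a minimal approximation of `{{variable}}` substitution and `{{#if var}}...{{/if}}` conditionals. The gateway's actual template engine is not shown in this document, so treat this as an illustrative sketch only.

```typescript
// Minimal approximation of the Handlebars-style rendering described above.
// Illustrative only; the gateway's real template engine may behave differently.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template
    // {{#if var}}...{{/if}}: keep the body only when the variable is non-empty
    .replace(/\{\{#if (\w+)\}\}([\s\S]*?)\{\{\/if\}\}/g, (_, name, body) =>
      vars[name] ? body : "",
    )
    // {{variable}}: straight substitution (dotted names like user.name allowed)
    .replace(/\{\{([\w.]+)\}\}/g, (_, name) => vars[name] ?? "");
}

const prompt = renderTemplate(
  "Today is {{date}}.{{#if memory}} Context: {{memory}}{{/if}}",
  { date: "2025-01-15", memory: "" },
);
// With memory empty, the conditional block is dropped entirely
```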