# Creating Agents
Agents are the core unit of work in Open Astra. Each agent has a model, a system prompt, a set of skills and tools, and a permission profile. You can define agents in three ways: YAML, REST API, or directly in the database.
## Defining agents in `astra.yml`

The most common way to define agents is in `astra.yml`. The gateway loads this file at startup and seeds the agents into Postgres.
```yaml
agents:
  - id: code-agent
    displayName: Code Agent
    tier: standard
    model:
      provider: openai
      modelId: gpt-4o
      maxContextTokens: 128000
      maxOutputTokens: 4096
      temperature: 0.2
    systemPromptTemplate: |
      You are a senior software engineer.
      You write clean, well-tested TypeScript code.
      Today is {{date}}.
      {{#if memory}}Context from memory:
      {{memory}}{{/if}}
    skills:
      - git_ops
      - codebase
      - test_runner
    tools:
      allow:
        - file_read
        - file_write
        - shell_exec
        - web_search
      deny:
        - file_delete
    spawn:
      enabled: false
    fileAccess:
      restricted: true
      allowedPaths:
        - ./src
        - ./tests
```

## Full agent config schema
Every field in the agent configuration:
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | Unique identifier. Used in API calls and spawn targets. |
| `displayName` | string | No | Human-readable name shown in channel messages and logs. |
| `tier` | `free` \| `standard` \| `premium` | No | Billing tier. Controls which models and feature limits apply. |
| `model.provider` | string | Yes | Inference provider: `grok`, `groq`, `openai`, `gemini`, `claude`, `ollama`, `vllm`, `bedrock`, `mistral`, `openrouter`. |
| `model.modelId` | string | Yes | Model identifier (e.g. `gpt-4o`, `claude-opus-4-6`). |
| `model.maxContextTokens` | number | No | Max tokens in the context window. Defaults to the provider's model limit. |
| `model.maxOutputTokens` | number | No | Max tokens in the response. Defaults to 4096. |
| `model.temperature` | number | No | Sampling temperature, 0–2. Defaults to 0.7. |
| `model.endpoint` | string | No | Custom inference endpoint URL. Used by the `ollama` and `vllm` providers to override the default base URL per agent. |
| `systemPromptTemplate` | string | No | Handlebars-style template. Supports `{{variable}}` and `{{#if var}}...{{/if}}`. |
| `skills` | string[] | No | Skill IDs to activate. Merges each skill's tools into the allow list and injects its prompt context. |
| `tools.allow` | string[] | No | Explicit tool allow list. If empty, inherits from activated skills. |
| `tools.deny` | string[] | No | Deny list. Always wins over the allow list. |
| `spawn.enabled` | boolean | No | Whether this agent can spawn sub-agents. Defaults to false. |
| `spawn.allowedTargets` | string[] | No | Agent IDs this agent is permitted to spawn. |
| `spawn.maxDepth` | number | No | Max spawn nesting depth. Default 2, max 3. |
| `fileAccess.restricted` | boolean | No | If true, file tools are restricted to `allowedPaths`. |
| `fileAccess.allowedPaths` | string[] | No | Paths the agent can read/write when `restricted` is true. |
| `session.dailyResetHour` | number | No | Hour (0–23) at which the session auto-clears each day. |
| `session.idleTimeoutMinutes` | number | No | Minutes of inactivity before the session auto-expires. |
| `session.maxContextMessages` | number | No | Max messages kept in context before the oldest are dropped. |
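The interplay of `skills`, `tools.allow`, and `tools.deny` can be sketched as a small resolver. This is an illustrative sketch only: the `SKILL_TOOLS` mapping below is hypothetical, not Open Astra's actual skill registry.

```python
# Sketch of effective-tool resolution per the schema above:
# skills merge their tools into the allow list, and deny always wins.
# SKILL_TOOLS is an illustrative placeholder, not the real registry.
SKILL_TOOLS = {
    "git_ops": ["shell_exec", "file_read"],
    "codebase": ["file_read", "file_write"],
}

def resolve_tools(skills, allow, deny):
    allowed = set(allow)
    for skill in skills:
        allowed.update(SKILL_TOOLS.get(skill, []))
    return sorted(allowed - set(deny))

resolve_tools(["git_ops", "codebase"], ["web_search"], ["file_write"])
# -> ['file_read', 'shell_exec', 'web_search']
```

Note that `file_write` is granted by the `codebase` skill but still stripped, because the deny list overrides everything else.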
## Creating agents via REST API
You can create agents dynamically at runtime using the REST API. This is useful for programmatic agent provisioning.
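From Python, the same request can be built with the standard library. This is a sketch mirroring the curl call below; the endpoint and payload come from this page, while the token value is a placeholder.

```python
import json
import urllib.request

# Hypothetical sketch: build a create-agent request for the gateway's
# REST API. Assumes the gateway at localhost:3000 and a valid JWT.
def build_create_agent_request(base_url: str, token: str, agent: dict) -> urllib.request.Request:
    return urllib.request.Request(
        url=f"{base_url}/agents",
        data=json.dumps(agent).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_agent_request(
    "http://localhost:3000",
    "my-jwt-token",  # placeholder
    {
        "id": "my-new-agent",
        "displayName": "My Agent",
        "model": {"provider": "openai", "modelId": "gpt-4o"},
        "systemPromptTemplate": "You are a helpful assistant.",
    },
)
# urllib.request.urlopen(req) would send it; omitted so the sketch stays offline.
```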
```bash
curl -X POST http://localhost:3000/agents \
  -H "Authorization: Bearer ${JWT_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "my-new-agent",
    "displayName": "My Agent",
    "model": {
      "provider": "openai",
      "modelId": "gpt-4o"
    },
    "systemPromptTemplate": "You are a helpful assistant."
  }'
```

## System prompt templates
System prompts support Handlebars-style interpolation with the following available variables:
| Variable | Description |
|---|---|
| `{{date}}` | Current date in ISO format |
| `{{user.name}}` | Display name of the requesting user |
| `{{user.id}}` | User ID |
| `{{workspace.name}}` | Workspace display name |
| `{{memory}}` | Retrieved memory context (injected automatically) |
| `{{#if memory}}...{{/if}}` | Conditional block; renders only if `memory` is non-empty |
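The rendering behavior described above can be approximated in a few lines. This is a minimal sketch of Handlebars-style substitution, not Open Astra's actual template engine: `{{#if var}}...{{/if}}` blocks render only when the variable is truthy, then `{{variable}}` placeholders are substituted, with dotted names resolved against the context.

```python
import re

# Minimal sketch of the Handlebars-style rendering described above.
def render(template: str, ctx: dict) -> str:
    def lookup(path):
        val = ctx
        for part in path.split("."):
            val = val.get(part, "") if isinstance(val, dict) else ""
        return val

    # Resolve {{#if var}}...{{/if}} blocks first: keep body only if truthy.
    out = re.sub(
        r"\{\{#if (\S+?)\}\}(.*?)\{\{/if\}\}",
        lambda m: m.group(2) if lookup(m.group(1)) else "",
        template,
        flags=re.S,
    )
    # Then substitute plain {{variable}} placeholders, dotted names included.
    return re.sub(r"\{\{(\S+?)\}\}", lambda m: str(lookup(m.group(1))), out)

render("Hello {{user.name}}.{{#if memory}} Memory: {{memory}}{{/if}}",
       {"user": {"name": "Ada"}, "memory": ""})
# -> 'Hello Ada.'
```

With a non-empty `memory`, the conditional body is kept and `{{memory}}` is substituted as usual.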