Dream Mode
Dream mode runs an autonomous overnight session using only local models (Ollama or vLLM). It loads your recent memory, generates plausible future scenarios based on what's been happening, extracts insights, updates the knowledge graph, and produces a morning report — all without touching a cloud API.
Think of it as a planning session your agent runs while you sleep, using everything it knows about you to think ahead.
How it works
- Load context — pulls your user profile (interests, expertise, strategic notes), recent daily memory entries, and last 3 daily summaries into a dream context string
- Generate scenarios — uses that context to produce 3–5 plausible what-if scenarios with probability estimates (medium and deep only)
- Extract insights — flattens the implications from each scenario into a ranked insight list
- Update the graph — deep dreams persist new entities and relationships discovered during scenario generation to the knowledge graph
- Write the morning report — synthesises the session into a readable report available when you wake up
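As an illustrative sketch of one of these steps, here is how step 3 (flattening scenario implications into a ranked insight list) might look. The `Scenario` type and the ranking rule (by scenario probability) are assumptions for illustration, not the project's actual implementation:

```typescript
// Hypothetical shape of a generated scenario; field names mirror the
// morning-report example but are not guaranteed to match internals.
interface Scenario {
  title: string;
  insight: string;
  probability: number;
}

// Flatten each scenario's key implication into one ranked list,
// highest-probability scenarios first (ranking rule assumed).
function extractInsights(scenarios: Scenario[]): string[] {
  return [...scenarios] // copy so the input order is untouched
    .sort((a, b) => b.probability - a.probability)
    .map((s) => s.insight);
}
```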
Intensities
The intensity setting controls how much work is done and how long the session runs:
| Intensity | Duration | What happens | Report includes |
|---|---|---|---|
| light | ~5 min | Quick memory consolidation: deduplicates facts, merges similar entries, updates confidence scores | Consolidated memory only; no narrative report |
| medium | ~15 min | Light + generates 3–5 future scenarios from recent memory context, extracts insights, updates user profile and daily notes | Narrative summary, top 5 highlights, scenario list with probabilities |
| deep | ~30 min | Medium + persists new entities and edges from scenarios to the knowledge graph; runs mini research passes on high-probability scenarios | Full medium report + graph update counts (entities and edges created) |
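The durations above correspond to the maxDurationMs setting, which is auto-set by intensity. A sketch of that mapping (the constant name is assumed; the medium value matches the 900000 ms default shown in the configuration):

```typescript
type Intensity = "light" | "medium" | "deep";

// Durations from the table: light ~5 min, medium ~15 min, deep ~30 min.
const DURATION_MS: Record<Intensity, number> = {
  light: 5 * 60_000,   // quick memory consolidation only
  medium: 15 * 60_000, // + scenarios and insights
  deep: 30 * 60_000,   // + graph persistence and research passes
};
```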
Giving it instructions
When promptUser: true (the default), dream mode asks what to focus on before starting. You'll see a prompt like:
It's dream time! Based on your recent activity, here are some ideas:
— Deep dive into: [your top interests]
— Explore scenarios around: [recent topics]
What would you like me to dream about tonight? (or say "surprise me")
You can also direct dream mode at any time using the built-in dream tool:
# Trigger manually with a topic
dream({ action: "dream", topic: "Q3 product strategy", intensity: "deep" })
# Let the system choose based on recent activity
dream({ action: "dream", topic: "surprise me" })
# View the morning report from the last session
dream({ action: "export" })
# Check dream history (last 5 sessions)
dream({ action: "history" })
# Update config without restarting
dream({ action: "update", intensity: "deep", scheduleHour: 2 })
Morning report
After a medium or deep session, the report is available via GET /dream/last-report or dream({ action: "export" }). It contains:
- summary — a readable narrative of what the session explored (generated at temperature 0.7, ~800 tokens)
- highlights — top 5 insights ranked by relevance
- scenarios — each what-if with a title, key insight, and probability estimate
- graphUpdates — how many entities and edges were added to the knowledge graph (deep only)
{
"date": "2025-02-25",
"title": "Dream Report — Q3 product strategy",
"summary": "A narrative overview of the session...",
"highlights": [
"Revenue concentration risk in top 3 accounts",
"Competitor pricing shift may accelerate churn in SMB segment",
...
],
"scenarios": [
{ "title": "Aggressive expansion", "insight": "...", "probability": 0.35 },
{ "title": "Consolidation focus", "insight": "...", "probability": 0.45 },
...
],
"graphUpdates": { "entitiesCreated": 4, "edgesCreated": 7 }
}
Configuration
dream:
enabled: true
intensity: medium # light | medium | deep
scheduleHour: 23 # default 11 PM
scheduleMinute: 0
maxDurationMs: 900000 # 15 min for medium (auto-set by intensity)
useLocalOnly: true # Ollama/vLLM only — no cloud API costs
promptUser: true # Ask what to dream about before starting
Local-only mode
Dream mode defaults to useLocalOnly: true, routing all inference through Ollama or vLLM — no cloud API calls, no cost. If no local model is available, it falls back to your configured default provider.
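The provider choice described here can be sketched as a simple local-first selection; the function and config names below are illustrative assumptions, not the actual API:

```typescript
// Hypothetical view of the routing decision: prefer a local backend
// (Ollama or vLLM) when useLocalOnly is set and a local model is
// reachable; otherwise fall back to the configured default provider.
interface RoutingConfig {
  useLocalOnly: boolean;
  localAvailable: boolean; // e.g. the local server is up and the model is pulled
  localProvider: string;   // "ollama" | "vllm"
  defaultProvider: string; // whatever provider is configured as the default
}

function pickProvider(cfg: RoutingConfig): string {
  if (cfg.useLocalOnly && cfg.localAvailable) {
    return cfg.localProvider;
  }
  return cfg.defaultProvider; // fallback when no local model is available
}
```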
Inference parameters used internally:
| Task | Temperature | Max tokens |
|---|---|---|
| Scenario generation | 0.8 | 2000 |
| Quick insights (light) | 0.6 | 500 |
| Morning report narrative | 0.7 | 800 |
localModels:
enabled: true
localProvider: ollama
localModel: llama3.2 # Must be pulled first: ollama pull llama3.2
Scheduling
The scheduler checks every 30 minutes whether it's within the configured dream window. It will not start a new session if one completed within the last 20 hours. Default schedule is 11 PM (scheduleHour: 23).
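The window-plus-cooldown check can be sketched as follows; the function name and signature are assumptions, but the logic mirrors the rules above (hour must match scheduleHour, and no session completed in the last 20 hours):

```typescript
const COOLDOWN_MS = 20 * 60 * 60_000; // 20-hour cooldown between sessions

// Illustrative version of the check the scheduler runs every 30 minutes.
function shouldStartDream(
  now: Date,
  scheduleHour: number,
  lastCompleted: Date | null,
): boolean {
  const inWindow = now.getHours() === scheduleHour;
  const coolingDown =
    lastCompleted !== null &&
    now.getTime() - lastCompleted.getTime() < COOLDOWN_MS;
  return inWindow && !coolingDown;
}
```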
Monitoring
# Check if dream mode is currently running
GET /dream/status
# Get the morning report from the last completed dream
GET /dream/last-report
# Manually trigger dream mode
POST /dream/start
{ "intensity": "light", "topic": "infrastructure review" }
Use dream({ action: "history" }) to see the last 5 completed sessions, including duration, intensity, and how many graph updates were made.