Collaboration

Peer Learning

Peer learning automatically detects when one agent succeeds where another failed on a similar task, extracts the winning strategy, and makes it available for the failing agent to adopt.

Detecting learning opportunities

Submit the outputs from both agents. The system uses inference to compare them and extract the strategy that led to success.

```bash
# Detect a learning opportunity by comparing agent outputs
curl -X POST http://localhost:3000/peer-learning/detect \
  -H "Authorization: Bearer ${JWT_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "successAgentId": "senior-agent",
    "failedAgentId": "junior-agent",
    "successContent": "The rate limiter uses a sliding window with Redis sorted sets...",
    "failedContent": "The rate limiter uses a fixed counter that resets every minute...",
    "taskDescription": "Design a rate limiting strategy"
  }'
```

Response (`201 Created`, opportunity detected):

```json
{
  "detected": true,
  "opportunity": {
    "id": "opp_abc123",
    "taskType": "Design a rate limiting strategy",
    "successAgentId": "senior-agent",
    "failedAgentId": "junior-agent",
    "strategy": "Use sliding window counters instead of fixed windows to avoid burst problems at boundaries",
    "confidence": 0.87,
    "applied": false
  }
}
```

Applying strategies

Once the failing agent has adopted the strategy, mark the opportunity as applied so it is not surfaced again.

```bash
# Mark a learning opportunity as applied
curl -X POST http://localhost:3000/peer-learning/opp_abc123/apply \
  -H "Authorization: Bearer ${JWT_TOKEN}"
```
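Detection and apply can be chained in a script: parse the detect response, and apply only when the confidence is high enough. This is a minimal sketch assuming `jq` is installed; the 0.8 threshold is an arbitrary choice, not part of the API, and `RESPONSE` stands in for the JSON that `POST /peer-learning/detect` returns.

```bash
# Sample response from POST /peer-learning/detect (fields trimmed for brevity)
RESPONSE='{"detected":true,"opportunity":{"id":"opp_abc123","confidence":0.87,"applied":false}}'

# Pull out the opportunity id and confidence score
ID=$(echo "$RESPONSE" | jq -r '.opportunity.id')
CONFIDENCE=$(echo "$RESPONSE" | jq -r '.opportunity.confidence')

# Apply only high-confidence strategies (0.8 threshold is an assumption)
if awk -v c="$CONFIDENCE" 'BEGIN { exit !(c >= 0.8) }'; then
  echo "applying $ID"
  # curl -X POST "http://localhost:3000/peer-learning/${ID}/apply" \
  #   -H "Authorization: Bearer ${JWT_TOKEN}"
fi
```

Gating on `confidence` keeps low-quality inferred strategies from being pushed to agents automatically; below the threshold, a human can review the opportunity first.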

Endpoint reference

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/peer-learning` | List learning opportunities (filter by `agentId`) |
| POST | `/peer-learning/detect` | Detect an opportunity from an agent comparison |
| POST | `/peer-learning/:id/apply` | Mark an opportunity as applied |

Learning opportunities are stored in memory per workspace and are not persisted to the database. They reset on process restart.
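To review what has been detected so far, list opportunities with the `GET` endpoint. A sketch, using the `agentId` filter from the table above; the specific value `junior-agent` is carried over from the earlier example.

```bash
# List learning opportunities involving a single agent
curl -X GET "http://localhost:3000/peer-learning?agentId=junior-agent" \
  -H "Authorization: Bearer ${JWT_TOKEN}"
```

Because opportunities live only in memory, an empty list after a restart is expected rather than an error.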