Agents & Teams
Agents are AI personas with specific skills, access policies, and harness profiles. Teams group agents for coordinated work under a lead with a dispatch strategy. Both are defined in YAML files in your repository and synced to the platform via `eve agents sync`.
What are agents?
An agent is a named persona that combines:
- A skill — what the agent knows how to do
- A harness profile — which model(s) power the agent
- Access policies — which environments and services the agent can touch
- Gateway exposure — whether the agent is addressable from chat
Agents are not generic AI assistants. Each one has a narrow, well-defined role. A coder agent writes code. A reviewer agent reviews pull requests. A deploy-agent handles deployments. Specialization makes agents reliable and predictable.
agents.yaml structure
Agents are defined in a YAML file whose path is set via `x-eve.agents.config_path` in the manifest. The conventional location is `agents/agents.yaml`.
```yaml
version: 1
agents:
  mission-control:
    slug: mission-control
    alias: mc   # short name for chat: @eve mc deploy to staging
    description: "Primary orchestration agent for deploys and incident response"
    skill: eve-orchestration
    workflow: assistant
    harness_profile: primary-orchestrator
    access:
      envs: [staging, production]
      services: [api, web]
      api_specs: [openapi]
    policies:
      permission_policy: auto_edit
      git:
        commit: manual
        push: never
    schedule:
      heartbeat_cron: "*/15 * * * *"
    gateway:
      policy: routable
      clients: [slack]
```
Field reference
| Field | Required | Description |
|---|---|---|
| `slug` | No | Org-unique identifier for chat routing. Lowercase alphanumeric + dashes. |
| `alias` | No | Short vanity name for chat addressing (see Agent aliases) |
| `description` | No | Human-readable summary of the agent's purpose |
| `skill` | Yes | Name of the installed skill that defines this agent's capability |
| `workflow` | No | Named workflow to execute (from `workflows` in the manifest) |
| `harness_profile` | No | Named profile from `x-eve.agents.profiles` in the manifest |
| `access` | No | Scope restrictions: `envs`, `services`, `api_specs` |
| `policies` | No | Permission and git policies |
| `schedule` | No | Cron-based heartbeat for periodic agents |
| `gateway` | No | Chat gateway exposure settings |
Permission policies
The `permission_policy` field controls how much autonomy an agent has:
| Policy | Behavior |
|---|---|
| `default` | Interactive — requires human approval for risky actions |
| `auto_edit` | Autonomous — edits files and code without approval |
| `never` | Read-only — cannot modify anything |
| `yolo` | Fully autonomous in controlled environments (use carefully) |
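As a mental model, the table maps to a simple capability lookup. This is an illustrative sketch, not platform API — the flag names are hypothetical and enforcement happens server-side:

```python
# Illustrative mapping of permission_policy values to capabilities.
# Flag names are hypothetical; the platform enforces these server-side.
POLICIES = {
    "default":   {"can_edit": True,  "needs_approval": True},   # interactive
    "auto_edit": {"can_edit": True,  "needs_approval": False},  # autonomous edits
    "never":     {"can_edit": False, "needs_approval": False},  # read-only
    "yolo":      {"can_edit": True,  "needs_approval": False},  # fully autonomous
}

def may_edit_without_approval(policy: str) -> bool:
    caps = POLICIES[policy]
    return caps["can_edit"] and not caps["needs_approval"]
```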
Git policies
Git policies control how agents interact with version control:
```yaml
policies:
  git:
    commit: auto        # never | manual | auto | required
    push: on_success    # never | on_success | required
```
- `commit`: `auto` creates commits automatically. `manual` lets the agent decide when to commit. `required` mandates a commit before the job completes.
- `push`: `on_success` pushes when the job succeeds. `never` means the agent's changes stay local. `required` mandates a push before completion.
For coding agents, `auto` commit with `on_success` push is the common pattern. For read-only agents (auditors, reviewers), set both to `never`.
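The end-of-job decision implied by these policies can be sketched as follows (a hypothetical model, not the platform's actual implementation):

```python
# Hypothetical end-of-job decision for the commit/push policies above.
def git_actions(commit_policy: str, push_policy: str, job_succeeded: bool):
    # "manual" leaves committing to the agent, so only auto/required
    # force a commit at job end.
    do_commit = commit_policy in ("auto", "required")
    do_push = (push_policy == "required"
               or (push_policy == "on_success" and job_succeeded))
    return do_commit, do_push
```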
Hard guardrails from repo policy files
Policy files like `AGENTS.md` can define hard constraints that override general autonomy settings. Example: forbidding direct AWS infrastructure mutations and requiring Terraform-only changes in a separate infra repository. Treat these rules as mandatory runtime policy, even when `permission_policy` is permissive.
Agent slugs and gateway exposure
Slugs
An agent slug is an org-unique identifier used for direct chat routing. When a user sends `@eve mission-control deploy to staging` in Slack, Eve routes the message to the agent with slug `mission-control`.
Slug rules:
- Lowercase alphanumeric characters and dashes only
- Must be unique across the entire organization (not just the project)
- Sync fails if a slug already exists in another project
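A minimal validation sketch for the character rules (whether leading, trailing, or doubled dashes are rejected is an assumption of this sketch, not documented behavior):

```python
import re

# Slug rule from above: lowercase alphanumeric characters and dashes only.
# Rejecting leading/trailing/double dashes is an assumption of this sketch.
SLUG_RE = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")

def is_valid_slug(slug: str) -> bool:
    return SLUG_RE.fullmatch(slug) is not None
```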
Organizations can set a default agent that receives messages when no slug is specified:
```bash
eve org update org_xxx --default-agent mission-control
```
Gateway exposure policy
The `gateway` block controls whether an agent is visible and addressable from external chat providers. Internal dispatch (teams, pipelines, routes) is unaffected by this setting.
```yaml
gateway:
  policy: routable
  clients: [slack]
```
| Policy | Listed in @eve agents list | Responds to @eve <slug> msg | Internal dispatch |
|---|---|---|---|
| `none` | Hidden | Rejected | Works |
| `discoverable` | Visible | Rejected (with hint) | Works |
| `routable` | Visible | Works | Works |
Default to `none`. Make agents `routable` only when they should receive direct messages from chat. `discoverable` is useful for agents that should appear in listings but only respond when routed through a team.
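The table's routing rules can be sketched as a lookup (field names here are illustrative, not a platform schema):

```python
# Illustrative lookup for the gateway exposure table; internal dispatch
# (teams, pipelines, routes) works regardless of policy.
GATEWAY = {
    "none":         {"listed": False, "direct_msg": False},
    "discoverable": {"listed": True,  "direct_msg": False},
    "routable":     {"listed": True,  "direct_msg": True},
}

def accepts_direct_message(policy: str) -> bool:
    return GATEWAY[policy]["direct_msg"]
```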
Agent aliases
Agent slugs are always prefixed with the project slug to ensure org-wide uniqueness. An agent with slug `pm` in project `pmbot` becomes `pmbot-pm`. In Slack, users must type `@eve pmbot-pm hello` — clunky and hard to remember.
Aliases solve this. An alias is a short, human-chosen vanity name that bypasses the prefixed slug for chat addressing:
```yaml
agents:
  pm:
    slug: pm
    alias: pm   # users type: @eve pm hello
    skill: pm-coordinator
    gateway:
      policy: routable

  tech-lead:
    slug: tech-lead
    alias: tech   # users type: @eve tech review this
    skill: tech-lead
    gateway:
      policy: routable
```
After sync with project slug `pmbot`, the canonical slugs are `pmbot-pm` and `pmbot-tech-lead` (and still work), but users can address these agents as `@eve pm hello` and `@eve tech review this`.
Resolution order is backwards-compatible — existing slugs always resolve first:
1. Slug match — `@eve pmbot-pm hello` routes directly
2. Alias match — `@eve pm hello` resolves via alias
3. Org default — `@eve hello` falls back to the organization's default agent
4. Error — no match and no default configured
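A sketch of this resolution order, assuming simple lowercase-keyed lookup tables (the real router also handles reserved words and error replies):

```python
# Sketch of the documented precedence: slug, then alias, then org default.
# The namespace is case-insensitive, so targets are lowercased first.
def resolve(target, slugs, aliases, default=None):
    t = target.lower()
    if t in slugs:          # 1. canonical slug match
        return slugs[t]
    if t in aliases:        # 2. alias match
        return aliases[t]
    return default          # 3. org default, or None -> error
```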
Namespace rules:
- Aliases and slugs share the same routing namespace. If project A has slug `pm`, project B cannot claim alias `pm`.
- The namespace is org-scoped and case-insensitive.
- Platform-reserved words (`agents`, `help`, `status`, `eve`, `admin`, `system`, `health`) cannot be used as aliases — they conflict with gateway management commands.
- Aliases are optional. If omitted, the agent is reachable only by its canonical prefixed slug.
The `@eve agents list` command shows aliases alongside canonical slugs:
```
pmbot-pm (-> pm) -- pmbot (PM Coordinator)
devbot-code (-> code) -- devbot (Code Review Agent)
```
Agent runtime and warm pods
When a chat message arrives for an agent, Eve needs somewhere to execute it. The agent runtime provides pre-provisioned, org-scoped containers — warm pods — that are ready to handle requests immediately, eliminating cold-start latency for conversational flows.
How warm pods work
Warm pods are long-lived containers that report health and capacity to the platform via a heartbeat. When a chat request arrives, the platform places it on a warm pod within the same organization using a sticky routing strategy. This means your agents respond in seconds rather than waiting for a fresh container to spin up.
Each warm pod tracks:
- Health status — whether the pod is ready to accept work
- Capacity — how many concurrent requests the pod can handle
- Org binding — which organization the pod serves
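A hypothetical sketch of sticky, org-scoped placement using those three signals (field names are illustrative, not the platform's actual schema):

```python
# Hypothetical sticky placement: prefer the pod that last served the org,
# else any healthy same-org pod with spare capacity.
def place(pods, org, last_pod_for_org):
    candidates = [p for p in pods
                  if p["org"] == org and p["healthy"]
                  and p["in_flight"] < p["capacity"]]
    sticky = last_pod_for_org.get(org)
    for p in candidates:
        if p["name"] == sticky:
            return p["name"]
    return candidates[0]["name"] if candidates else None
```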
Execution modes
The `EVE_AGENT_RUNTIME_EXECUTION_MODE` environment variable controls how agent jobs run:
| Mode | Behavior | Best for |
|---|---|---|
| `inline` (default) | Execute directly in the warm pod | Chat, triage, lightweight tasks |
| `runner` | Spin up an ephemeral runner pod | Heavy computation, untrusted code, long-running tasks |
Inline mode is the default because it gives the fastest response times. Switch to runner mode when you need stronger isolation — for example, when agents execute user-provided code or perform resource-intensive operations that could affect other requests sharing the pod.
```bash
# Check runtime status for your agents
eve agents runtime-status
```
Start with inline mode. If you observe resource contention or need stricter isolation for specific agents, switch those agents to runner mode selectively via environment overrides rather than changing the global setting.
Per-job HOME isolation
Each job attempt runs with its own isolated HOME directory. The platform creates a dedicated home for every attempt, pre-populates it with the necessary directory structure, and sets HOME and EVE_JOB_USER_HOME in the harness environment. This prevents cross-job interference — credentials, shell history, and tool configuration from one job cannot leak into another, even when multiple jobs share the same warm pod via inline execution.
Both the agent runtime and the worker enforce this isolation. The job home is cleaned up after the attempt completes.
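The isolation scheme can be sketched roughly like this (directory names and the pre-populated structure are illustrative):

```python
import os
import shutil
import tempfile

# Sketch of per-attempt HOME isolation: a fresh directory per attempt,
# exported as HOME and EVE_JOB_USER_HOME, removed after the attempt.
def run_attempt(attempt_id, work):
    home = tempfile.mkdtemp(prefix=f"job-home-{attempt_id}-")
    os.makedirs(os.path.join(home, ".config"), exist_ok=True)  # pre-populate
    env = dict(os.environ, HOME=home, EVE_JOB_USER_HOME=home)
    try:
        return work(env)
    finally:
        shutil.rmtree(home, ignore_errors=True)  # cleanup after the attempt
```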
Teams and dispatch modes
Teams group agents under a lead for coordinated work. When work is dispatched to a team, the lead agent orchestrates the members according to the team's dispatch mode.
teams.yaml structure
Teams are defined in a separate YAML file whose path is set via `x-eve.agents.teams_path` in the manifest. The conventional location is `agents/teams.yaml`.
```yaml
version: 1
teams:
  review-council:
    lead: mission-control
    members: [code-reviewer, security-auditor]
    dispatch:
      mode: council
      max_parallel: 3
      lead_timeout: 300
      member_timeout: 300
      merge_strategy: majority

  expert-panel:
    lead: pm-coordinator
    members: [tech-lead, ux-advocate, biz-analyst, risk-assessor]
    dispatch:
      mode: council
      staged: true   # lead prepares before members start
      lead_timeout: 3600
      member_timeout: 300

  deploy-ops:
    lead: ops-lead
    members: [deploy-agent, monitor-agent]
    dispatch:
      mode: fanout
      max_parallel: 2

  pipeline-crew:
    lead: orchestrator
    members: [builder, tester, deployer]
    dispatch:
      mode: relay
```
Dispatch modes
Fanout is the most common mode. The lead creates a root job and dispatches parallel child jobs — one per member. Members work independently. Use fanout when work can be cleanly decomposed into independent tasks.
```yaml
dispatch:
  mode: fanout
  max_parallel: 3
```
Council sends the same prompt to all members and merges their responses using a merge strategy. Use council for collective judgment — code reviews, security audits, design decisions. Council supports an optional staged mode where the lead prepares work before members start.
```yaml
dispatch:
  mode: council
  merge_strategy: majority   # majority | unanimous | lead-decides
```
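The three merge strategies can be sketched as follows (an illustrative model; the platform's actual merge logic may weigh free-text responses differently):

```python
from collections import Counter

# Illustrative merge of member verdicts for the three documented strategies.
def merge(verdicts, strategy, lead_verdict=None):
    if strategy == "unanimous":
        return verdicts[0] if len(set(verdicts)) == 1 else None
    if strategy == "majority":
        winner, count = Counter(verdicts).most_common(1)[0]
        return winner if count > len(verdicts) / 2 else None
    if strategy == "lead-decides":
        return lead_verdict
    raise ValueError(f"unknown strategy: {strategy}")
```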
Relay is sequential delegation. The lead delegates to the first member, whose output passes to the next member, and so on. Use relay when each stage's output is the next stage's input — for example, a research-then-implement-then-test pipeline.
```yaml
dispatch:
  mode: relay
```
Choosing the right mode
| Scenario | Mode | Why |
|---|---|---|
| Implement multiple features in parallel | fanout | Independent work, no dependencies between members |
| Review a pull request from multiple perspectives | council | Multiple opinions merged into a single verdict |
| Transcribe a recording, then fan out to domain experts | council + staged | Lead prepares content before members start |
| Research, implement, then test | relay | Each stage depends on the previous stage's output |
Most work is fanout. Use council only when multiple perspectives genuinely improve the outcome. Use relay only when stages are strictly sequential.
Staged council dispatch
Standard council mode starts the lead and all members simultaneously. This breaks down when the lead needs to prepare material before members can work — for example, transcribing a meeting recording before domain experts analyze it, or triaging an incident before investigators fan out.
Staged dispatch solves this by splitting council execution into three phases: the lead prepares, members work in parallel, and the lead synthesizes the results.
Enable it with the `staged` flag on a council dispatch:
```yaml
teams:
  expert-panel:
    lead: pm-coordinator
    members:
      - tech-lead
      - ux-advocate
      - biz-analyst
      - risk-assessor
    dispatch:
      mode: council
      staged: true
      lead_timeout: 3600
      member_timeout: 300
```
How it works:
1. Dispatch — the platform creates the lead job in `ready` phase and member jobs in `backlog` phase. Members are visible immediately (`eve job list` shows the full roster) but will not be claimed by the orchestrator.
2. Prepare — the lead runs first. It processes attachments, transcribes audio, gathers context, and posts prepared material to the coordination thread. When ready, it returns `eve.status = "prepared"`.
3. Promote — the orchestrator sees the `prepared` signal, promotes all `backlog` members to `ready`, and requeues the lead with a `children.all_done` wake condition.
4. Parallel work — members are claimed and run in parallel. Each reads the coordination thread for the lead's prepared content and returns its analysis.
5. Synthesize — when all members complete, the lead wakes and reads their summaries from the coordination thread. It produces a final synthesis and returns `eve.status = "success"`.
If the lead completes without returning `prepared` (handles the request solo, or fails), any members still in `backlog` are automatically cancelled.
Staged dispatch is only valid with `mode: council`. The `staged` flag is rejected on `fanout` and `relay` modes. If you need sequential preparation followed by sequential processing, use `relay` with the lead as the first link in the chain.
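The member phase transitions driven by the lead's result can be sketched as follows (phase names match the steps above; the function shape is hypothetical):

```python
# Sketch of staged-council phase transitions on the lead's result.
def on_lead_result(lead_status, member_phases):
    if lead_status == "prepared":
        # Promote every backlog member to ready for parallel work.
        return {m: "ready" for m in member_phases}
    # Lead handled the request solo or failed: cancel backlog members.
    return {m: ("cancelled" if phase == "backlog" else phase)
            for m, phase in member_phases.items()}
```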
Syncing agent configuration
All agent and team configuration is repo-first. The repository is the source of truth, and `eve agents sync` pushes it to the platform.
```bash
# Sync from committed ref (production)
eve agents sync --project proj_xxx --ref 0123456789abcdef0123456789abcdef01234567

# Sync local state (development)
eve agents sync --project proj_xxx --local --allow-dirty

# Preview effective config without syncing
eve agents config --repo-dir ./my-app
```
Sync performs several operations:
- Reads `agents.yaml`, `teams.yaml`, and `chat.yaml` from the paths specified in the manifest
- Resolves AgentPacks from `x-eve.packs` and writes `.eve/packs.lock.yaml`
- Deep-merges pack agents, teams, and chat config with local overrides
- Validates org-wide slug and alias uniqueness (aliases cannot collide with slugs or reserved names)
- Pushes the merged configuration to the API
Pack overlay
When using AgentPacks, local YAML overlays pack defaults via deep merge. You can override specific fields or remove pack-provided agents entirely:
```yaml
agents:
  # Override a field from the pack
  pack-provided-agent:
    harness_profile: my-custom-profile

  # Remove a pack agent you don't need
  unwanted-pack-agent:
    _remove: true
```
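The overlay semantics can be sketched as a recursive deep merge that honors `_remove` (an illustrative model of the documented behavior):

```python
# Illustrative deep merge of local YAML over pack defaults, honoring the
# documented `_remove: true` marker for dropping pack-provided entries.
def overlay(pack: dict, local: dict) -> dict:
    merged = dict(pack)
    for key, value in local.items():
        if isinstance(value, dict) and value.get("_remove"):
            merged.pop(key, None)                      # drop the pack entry
        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)  # recurse into maps
        else:
            merged[key] = value                        # local override wins
    return merged
```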
Harness profiles
Harness profiles decouple agents from specific AI models. Instead of hardcoding a model in the agent definition, you define named profiles in the manifest and agents reference them by name.
```yaml
x-eve:
  agents:
    profiles:
      primary-orchestrator:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high

      primary-reviewer:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high
        - harness: codex
          model: gpt-5.2-codex
          reasoning_effort: x-high

      fast-triage:
        - harness: mclaude
          model: sonnet-4.5
          reasoning_effort: medium
```
Each profile is a fallback chain — if the first harness is unavailable, the next one is tried. This provides resilience against provider outages and lets you mix models by capability:
| Task type | Profile strategy |
|---|---|
| Complex coding, architecture | High-reasoning model (opus, gpt-5.2-codex) |
| Code review, documentation | Medium-reasoning model (sonnet, gemini) |
| Triage, routing, classification | Fast model (haiku-class, low reasoning) |
Availability policy
The manifest can configure what happens when a harness in a profile is unavailable:
```yaml
x-eve:
  agents:
    availability:
      drop_unavailable: true
```
When `drop_unavailable` is `true`, unavailable harnesses are silently skipped and the next entry in the fallback chain is tried.
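The fallback-chain behavior, including `drop_unavailable`, can be sketched as a simplified model (real availability checks happen on the platform side):

```python
# Simplified model of profile fallback: walk the chain in order, skipping
# unavailable harnesses when drop_unavailable is set.
def pick_harness(profile, available, drop_unavailable=True):
    for entry in profile:
        if entry["harness"] in available:
            return entry
        if not drop_unavailable:
            raise RuntimeError(f"harness unavailable: {entry['harness']}")
    return None
```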
Planning councils
Planning councils are a specialized use of harness profiles where multiple models collaborate on a planning task. Define a profile with multiple entries, and the orchestrator runs them in parallel to produce a merged plan.
```yaml
x-eve:
  agents:
    profiles:
      planning-council:
        - profile: primary-planner
        - harness: gemini
          model: gemini-3
```
A profile entry can reference another profile by name (using the `profile` key instead of `harness`), enabling composition of complex multi-model strategies.
Agent install targets
The `x-eve.install_agents` field controls which agent runtimes receive installed skills. By default, skills are installed for `claude-code` only.
```yaml
x-eve:
  install_agents: [claude-code, codex, gemini-cli]
```
This affects where skill files are placed during installation. Each agent runtime has its own skills directory convention, and the installer writes to all specified targets.
You can also override install targets per-pack:
```yaml
x-eve:
  packs:
    - source: ./skillpacks/claude-only
      install_agents: [claude-code]
    - source: ./skillpacks/universal
      install_agents: [claude-code, codex, gemini-cli]
```
Coordination threads
When a team dispatches work, a coordination thread links the parent job to all child agents. This enables real-time communication between the lead and members during execution.
- Thread key: `coord:job:{parent_job_id}`
- Child agents receive `EVE_PARENT_JOB_ID` as an environment variable and derive the thread key from it
- End-of-attempt summaries are automatically posted to the coordination thread
- Coordination inbox: `.eve/coordination-inbox.md` is regenerated at job start from recent thread messages
Message kinds
| Kind | Purpose |
|---|---|
| `status` | Automatic end-of-attempt summary |
| `directive` | Lead-to-member instruction |
| `question` | Member-to-lead question |
| `update` | Progress update from a member |
The lead agent can monitor the entire job tree:
```bash
eve supervise                         # supervise current job
eve supervise <job-id> --timeout 60   # supervise specific job
```
Putting it all together
A complete agent configuration ties together the manifest, agents, teams, and chat:
```yaml
# .eve/manifest.yaml
x-eve:
  agents:
    config_path: agents/agents.yaml
    teams_path: agents/teams.yaml
    profiles:
      primary-orchestrator:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high
  chat:
    config_path: agents/chat.yaml
  install_agents: [claude-code]
  packs:
    - source: incept5/eve-skillpacks
      ref: 0123456789abcdef0123456789abcdef01234567
```
Sync everything in one command:
```bash
eve agents sync --project proj_xxx --ref <sha>
```
This resolves packs, merges configuration, validates slugs, and pushes agents, teams, and chat routes to the platform in a single atomic operation.
What's next?
Set up chat integrations to talk to your agents: Chat & Conversations