Introduction
Everruns is a durable AI agent execution platform built on Rust and Temporal. It provides APIs for managing agents, sessions, and runs, with real-time event streaming over Server-Sent Events (SSE).
Overview
Everruns enables you to build reliable AI agents that can:
- Execute long-running tasks with durability guarantees
- Stream real-time events to clients
- Manage conversations through sessions
- Extend agent capabilities with modular tools
Key Concepts
Agents
Agents are AI assistants with configurable system prompts and capabilities. Each agent can be customized with:
- A system prompt that defines its behavior
- A set of capabilities that provide tools
- Model configuration for the underlying LLM
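As a rough mental model, an agent definition along these lines can be sketched in Rust. The struct and field names here are assumptions for illustration, not Everruns' actual API types:

```rust
// Hypothetical sketch of an agent definition. Struct and field names are
// assumptions for illustration, not Everruns' actual API types.

#[derive(Debug, Clone)]
struct ModelConfig {
    provider: String, // which LLM provider to route to
    model: String,    // model identifier
    temperature: f32, // sampling temperature
}

#[derive(Debug, Clone)]
struct AgentConfig {
    system_prompt: String,     // defines the agent's behavior
    capabilities: Vec<String>, // capability ids that provide tools
    model: ModelConfig,        // configuration for the underlying LLM
}

fn default_agent() -> AgentConfig {
    AgentConfig {
        system_prompt: "You are a helpful research assistant.".to_string(),
        capabilities: vec!["web_search".to_string()],
        model: ModelConfig {
            provider: "openai".to_string(),
            model: "gpt-4o".to_string(),
            temperature: 0.2,
        },
    }
}

fn main() {
    let agent = default_agent();
    println!("agent with {} capability(ies)", agent.capabilities.len());
}
```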
Sessions
Sessions represent conversations with an agent. Each session maintains:
- Conversation history
- Current execution state
- Configuration overrides
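The three pieces of session state listed above can be pictured as a small Rust struct. This is an illustrative sketch only; the type and field names are assumptions, not Everruns' actual data model:

```rust
// Hypothetical sketch of a session's state; names are illustrative
// assumptions, not Everruns' actual data model.

#[derive(Debug, Clone, PartialEq)]
enum RunState {
    Idle,
    Running,
    Completed,
}

#[derive(Debug, Clone)]
struct Message {
    role: String,
    content: String,
}

#[derive(Debug)]
struct Session {
    history: Vec<Message>,                  // conversation history
    state: RunState,                        // current execution state
    system_prompt_override: Option<String>, // per-session configuration override
}

impl Session {
    fn new() -> Self {
        Session {
            history: Vec::new(),
            state: RunState::Idle,
            system_prompt_override: None,
        }
    }

    fn append(&mut self, role: &str, content: &str) {
        self.history.push(Message {
            role: role.into(),
            content: content.into(),
        });
    }
}

fn main() {
    let mut session = Session::new();
    session.append("user", "Hello");
    println!("{} message(s), state {:?}", session.history.len(), session.state);
}
```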
Capabilities
Capabilities are modular units of functionality that extend agent behavior. They can:
- Add instructions to the system prompt
- Provide tools for the agent to use
- Modify execution behavior
See Capabilities for more details.
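One way a capability could plug into an agent is via a trait that contributes prompt instructions and tool names. This is a minimal sketch under assumed names; the trait, its methods, and the `web_search` tool are illustrative, not Everruns' actual interfaces:

```rust
// Minimal sketch of a capability interface. The trait, method names, and
// the web_search tool are assumptions for illustration.
trait Capability {
    /// Extra instructions appended to the agent's system prompt.
    fn instructions(&self) -> Option<String> {
        None
    }
    /// Names of tools this capability exposes to the agent.
    fn tools(&self) -> Vec<String> {
        Vec::new()
    }
}

struct WebSearch;

impl Capability for WebSearch {
    fn instructions(&self) -> Option<String> {
        Some("Use web_search when the answer may be time-sensitive.".into())
    }
    fn tools(&self) -> Vec<String> {
        vec!["web_search".into()]
    }
}

/// Compose the final system prompt from a base prompt plus each
/// capability's extra instructions.
fn compose_prompt(base: &str, caps: &[&dyn Capability]) -> String {
    let mut prompt = base.to_string();
    for cap in caps {
        if let Some(extra) = cap.instructions() {
            prompt.push_str("\n\n");
            prompt.push_str(&extra);
        }
    }
    prompt
}

fn main() {
    let prompt = compose_prompt("You are a helpful assistant.", &[&WebSearch]);
    println!("{prompt}");
}
```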
Getting Started
Quick Start
1. Deploy Everruns using the provided Docker images
2. Configure your LLM providers via the Settings UI
3. Create an agent with your desired configuration
4. Start sessions and interact through the API or UI
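A deployment along the lines of step 1 might look like the compose file below. This is a hypothetical sketch: the image names, ports, and environment variables are placeholders, not the official Everruns images; check the repository for the real values.

```yaml
# Hypothetical docker-compose sketch. Image names, ports, and environment
# variables are placeholders, not the official Everruns configuration.
services:
  everruns:
    image: everruns/server:latest   # placeholder image name
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://everruns:everruns@postgres:5432/everruns
      TEMPORAL_ADDRESS: temporal:7233
  postgres:
    image: postgres:16
  temporal:
    image: temporalio/auto-setup:latest
```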
API Access
The API is available at your deployment URL with full OpenAPI documentation:
- API Base: https://your-domain.com/v1/
- Swagger UI: https://your-domain.com/swagger-ui/
- OpenAPI Spec: https://your-domain.com/api-doc/openapi.json
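Run output arrives over SSE, where each event is one or more `data:` lines terminated by a blank line. As a rough sketch of the client side (the payloads shown are illustrative, not Everruns' documented event schema), a stream can be split into events like this:

```rust
// Sketch of parsing a Server-Sent Events (SSE) stream on the client side.
// The event payloads used here are illustrative, not Everruns' actual schema.
fn parse_sse(stream: &str) -> Vec<String> {
    let mut events = Vec::new();
    let mut data_lines: Vec<&str> = Vec::new();
    for line in stream.lines() {
        if let Some(rest) = line.strip_prefix("data:") {
            data_lines.push(rest.trim_start());
        } else if line.is_empty() && !data_lines.is_empty() {
            // A blank line terminates one event; multi-line data joins with '\n'.
            events.push(data_lines.join("\n"));
            data_lines.clear();
        }
    }
    events
}

fn main() {
    let raw = "data: {\"type\":\"token\",\"text\":\"Hel\"}\n\n\
               data: {\"type\":\"token\",\"text\":\"lo\"}\n\n";
    for event in parse_sse(raw) {
        println!("{event}");
    }
}
```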
Architecture
Everruns uses a layered architecture:
- API Layer: HTTP endpoints (axum), SSE streaming, Swagger UI
- Core Layer: Agent abstractions, capabilities, tools
- Worker Layer: Temporal workflows for durable execution
- Storage Layer: PostgreSQL with encrypted secrets
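One way to picture the layer boundaries: the core layer records events through a storage trait, so the PostgreSQL backend can be swapped for an in-memory one in tests. The trait and function names below are illustrative assumptions, not Everruns' internal interfaces:

```rust
// Illustrative sketch of the core/storage layer boundary. Trait and
// function names are assumptions, not Everruns' internal interfaces.
use std::collections::HashMap;

/// Storage-layer boundary: the core layer only sees this trait.
trait SessionStore {
    fn save(&mut self, session_id: &str, event: &str);
    fn load(&self, session_id: &str) -> Vec<String>;
}

/// In-memory stand-in for the PostgreSQL-backed store.
struct MemoryStore {
    events: HashMap<String, Vec<String>>,
}

impl SessionStore for MemoryStore {
    fn save(&mut self, session_id: &str, event: &str) {
        self.events
            .entry(session_id.to_string())
            .or_default()
            .push(event.to_string());
    }

    fn load(&self, session_id: &str) -> Vec<String> {
        self.events.get(session_id).cloned().unwrap_or_default()
    }
}

/// Core-layer logic: records a run event without knowing the backend.
fn record_event(store: &mut dyn SessionStore, session_id: &str, event: &str) {
    store.save(session_id, event);
}

fn main() {
    let mut store = MemoryStore { events: HashMap::new() };
    record_event(&mut store, "s1", "run.started");
    println!("{:?}", store.load("s1"));
}
```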
For detailed architecture information, see the GitHub repository.