Deploy your agents to production with Celesto Cloud or self-host on your own infrastructure.

Deploy to Celesto Cloud

Create your agent with Agentor, then deploy it to Celesto Cloud with a single CLI command.
1. Create Your Agent

First, create your agent with the tools you need:
from agentor import Agentor

agent = Agentor(
    name="Weather Agent",
    model="gpt-4",
    tools=["get_weather"]
)
2. Deploy Your Agent

Deploy your agent with a single command:
celesto deploy
✓ Building agent...
✓ Pushing to Celesto Cloud...
✓ Deploying agent...

🚀 Agent deployed successfully!

Endpoint: https://api.celesto.ai/v1/apps/your-agent-id
Dashboard: https://celesto.ai/apps/your-agent-id
Your agent is now live and accessible via the API endpoint. Visit the dashboard to monitor performance and manage settings.
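
You can call the endpoint from any HTTP client. The sketch below uses Python's requests library; the bearer-token header and the request/response payload shape are assumptions for illustration, so check your agent's API reference for the exact schema.

import requests

# NOTE: the auth header and payload schema are assumptions for illustration.
CELESTO_API_KEY = "sk-..."  # your Celesto API key
ENDPOINT = "https://api.celesto.ai/v1/apps/your-agent-id"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {CELESTO_API_KEY}"},
    json={"input": "What's the weather in Berlin today?"},
    timeout=30,
)
response.raise_for_status()
print(response.json())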

Self-Hosted Deployment

Deploy agents on your own infrastructure using Docker or Kubernetes.
Create a Dockerfile in your project:
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["python", "agent.py"]
Build and run your container:
docker build -t my-agent .
docker run -p 8000:8000 my-agent
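
The Dockerfile's CMD assumes an agent.py that starts an HTTP server on the exposed port. Your actual entry point will differ; the sketch below is only a standard-library illustration of the general shape, reading PORT from the environment and exposing the /health route described under Monitoring & Logs (agent.run() is an assumed method, not a documented Agentor API).

import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

from agentor import Agentor  # same agent as in the example above

agent = Agentor(name="Weather Agent", model="gpt-4", tools=["get_weather"])

class AgentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Health endpoint for monitoring and load balancers.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # Hypothetical query route; agent.run() is an assumption, not a documented API.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = agent.run(payload.get("input", ""))
        body = json.dumps({"output": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8000"))
    HTTPServer(("0.0.0.0", port), AgentHandler).serve_forever()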

Environment Variables

Set required environment variables for your deployment:
OPENAI_API_KEY (string, required): Your LLM provider API key (OpenAI, Anthropic, etc.)
CELESTO_API_KEY (string): Your Celesto API key for accessing managed tools and services
PORT (integer, default: 8000): Port to run the agent server on
LOG_LEVEL (string, default: INFO): Logging level: DEBUG, INFO, WARNING, or ERROR
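
At startup, the agent process can read these variables with the defaults listed above. A minimal sketch:

import logging
import os

# Required: fail fast if the LLM provider key is missing.
openai_api_key = os.environ["OPENAI_API_KEY"]

# Optional: only needed when the agent uses Celesto-managed tools and services.
celesto_api_key = os.environ.get("CELESTO_API_KEY")

# Optional, with the documented defaults.
port = int(os.environ.get("PORT", "8000"))
log_level = os.environ.get("LOG_LEVEL", "INFO")

logging.basicConfig(level=getattr(logging, log_level, logging.INFO))
logging.getLogger(__name__).info("Agent starting on port %d", port)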

Monitoring & Logs

Celesto Dashboard

Monitor agent performance, view logs, and track usage metrics in real time.

Health Checks

Built-in health endpoint at /health for monitoring and load balancers.
For production deployments, we recommend setting up monitoring, logging, and auto-scaling based on your traffic patterns.
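
A deploy script or external monitor can poll /health to confirm the agent is up. A minimal probe, assuming the endpoint returns HTTP 200 when healthy and using the local address from the docker run example above:

import sys

import requests

HEALTH_URL = "http://localhost:8000/health"  # adjust for your deployment

try:
    healthy = requests.get(HEALTH_URL, timeout=5).status_code == 200
except requests.RequestException:
    healthy = False

print("healthy" if healthy else "unhealthy")
sys.exit(0 if healthy else 1)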