## Installation

The Celesto CLI is installed automatically with Agentor.

## Quick Start

Deploy an agent in three steps. First, write your agent:

```python
# main.py
from agentor import Agentor

agent = Agentor(
    name="Weather Agent",
    model="gpt-5-mini",
    tools=["get_weather"],
    instructions="You are a helpful weather assistant."
)

agent.serve(port=8000)
```
Second, describe the deployment in `celesto.yaml`:

```yaml
name: weather-agent
runtime: python3.11
entry_point: main.py
env_vars:
  OPENAI_API_KEY: ${OPENAI_API_KEY}
```

Third, deploy with the Celesto CLI.
## Authentication

Get your API key from the Celesto Dashboard and set it as an environment variable.
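For example (assuming the CLI reads the `CELESTO_API_KEY` variable, the name used by the tracing example later in this guide; replace the placeholder with your real key):

```shell
# Make the Celesto API key available to the CLI in this shell session
export CELESTO_API_KEY="your-api-key-here"
```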
## Deployment Options

### Deploy a Specific Directory

### Deploy with a Custom Name

### Deploy with Environment Variables

### Deploy an MCP Server
## Configuration File

Create a `celesto.yaml` file for advanced configuration:
## Environment Variables

### From a .env File

Create a `.env` file:
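For example — a sketch using the variable names that appear elsewhere in this guide; your agent may need different ones:

```
# .env
OPENAI_API_KEY=your-openai-key
CELESTO_API_KEY=your-celesto-key
```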
### From the Command Line

### From YAML
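As in the Quick Start config, variables go under the `env_vars` key of `celesto.yaml`; `${...}` references are resolved from the deploying environment:

```yaml
env_vars:
  OPENAI_API_KEY: ${OPENAI_API_KEY}
```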
## Managing Deployments

### List Deployments

### Get Deployment Info

### View Logs

### Update Deployment

Make changes to your code, then redeploy.

### Delete Deployment
## Using Deployed Agents

### HTTP Requests

### Streaming Requests

### A2A Protocol

Access the agent card.

### MCP Protocol

Connect to the deployed MCP server.

## Production Best Practices
Keep API keys out of source code:

```python
import os

from agentor import Agentor

# Good: read the key from the environment
agent = Agentor(
    name="Agent",
    model="gpt-5-mini",
    api_key=os.environ.get("OPENAI_API_KEY")
)

# Bad: hardcoded secrets leak through version control
agent = Agentor(
    name="Agent",
    model="gpt-5-mini",
    api_key="sk-hardcoded-key-bad"  # Don't do this!
)
```
Enable tracing for observability:

```python
from agentor import Agentor

# Tracing is auto-enabled when CELESTO_API_KEY is set
agent = Agentor(
    name="Production Agent",
    model="gpt-5-mini"
)

# Or enable it explicitly
agent = Agentor(
    name="Production Agent",
    model="gpt-5-mini",
    enable_tracing=True
)
```

View traces at https://celesto.ai/observe.
Set explicit timeouts for MCP connections:

```python
from agentor.mcp import MCPServerStreamableHttp

async with MCPServerStreamableHttp(
    name="Server",
    params={
        "url": mcp_url,
        "timeout": 30  # 30-second timeout
    }
) as server:
    ...  # Use server
```
Handle agent errors gracefully:

```python
import logging

logger = logging.getLogger(__name__)

try:
    result = await agent.arun(user_input)
except Exception as e:
    logger.error(f"Agent error: {e}")
    # Fallback logic
```
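For transient failures, retry with exponential backoff can be layered on top of this error handling (a sketch; `run_with_retries`, the attempt count, and the delays are illustrative, not Agentor APIs):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def run_with_retries(run, user_input, attempts=3, base_delay=1.0):
    """Retry an async agent call, doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return await run(user_input)
        except Exception as e:
            logger.warning(f"Attempt {attempt + 1} failed: {e}")
            if attempt == attempts - 1:
                raise  # Out of retries: surface the last error
            await asyncio.sleep(base_delay * 2 ** attempt)
```

Call it as `await run_with_retries(agent.arun, user_input)`.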
Expose a health check endpoint alongside your agent:

```python
from fastapi import FastAPI

from agentor import Agentor

app = FastAPI()
agent = Agentor(name="Agent", model="gpt-5-mini")

@app.get("/health")
def health():
    return {"status": "healthy"}

@app.post("/chat")
async def chat(message: str):
    result = await agent.arun(message)
    return {"response": result.final_output}
```
## Troubleshooting

### Deployment Fails

Check the logs for common causes:

- Missing dependencies in `requirements.txt`
- Invalid environment variables
- Syntax errors in code
### Application Crashes

View the error logs.

### Slow Performance
- Check model selection (lighter models are faster)
- Enable caching for MCP servers
- Review trace data for bottlenecks
### Connection Timeouts

Increase the timeout in your client.

### Cost Optimization
- Use lighter models for simple tasks (`gpt-5-mini` vs `gpt-4o`)
- Implement caching for repeated queries
- Set appropriate `max_tokens` limits
- Monitor token usage in the dashboard
## Support

Get help:

## Next Steps
- Enable observability to monitor production agents
- Learn about streaming responses for better UX
- Explore agent communication patterns