You can connect an agent to other applications and services by exposing an API endpoint. Agentor makes it easy to create a production-ready server.

Serve an Agent as API

Agents can be deployed as a REST API server so you can query them from your applications or integrate them into your existing infrastructure. Agentor makes this easy by providing a simple serve method:

```python
from agentor import Agentor
from agentor.tools import WeatherAPI

agent = Agentor(name="Weather Agent", model="gpt-5-mini", tools=[WeatherAPI()])
agent.serve(port=8000)
```
To query your agent server:

```python
import requests

URL = "http://localhost:8000/chat"

response = requests.post(
    URL,
    json={"input": "how are you?"},
    headers={"Content-Type": "application/json"},
)
print(response.content)
```

Run Agent in Managed Cloud

Agentor comes with a built-in CLI to deploy agents to the cloud with a single command: celesto deploy. Celesto provides a serverless platform for running agents, which means you only pay while your agent is running.
```shell
$ celesto deploy
Building agent...
Pushing to Celesto Cloud...
Deploying agent...

🚀 Agent deployed successfully!

Endpoint: https://api.celesto.ai/v1/apps/your-agent-name
Dashboard: https://celesto.ai/apps/your-agent-name
```
Your agent is now live and accessible via the API endpoint. Visit the dashboard to monitor performance and manage settings.

Self-Hosted Deployment

Deploy agents on your own infrastructure using Docker or Kubernetes.
Create a Dockerfile in your project:
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["python", "agent.py"]
```
Build and run your container:
```shell
docker build -t my-agent .
docker run -p 8000:8000 my-agent
```
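For Kubernetes, a minimal Deployment and Service might look like the sketch below. The image name, replica count, and secret name are placeholders; push the image built above to your own registry and adjust the values to your cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-agent
  template:
    metadata:
      labels:
        app: my-agent
    spec:
      containers:
        - name: my-agent
          image: my-registry/my-agent:latest  # placeholder: your pushed image
          ports:
            - containerPort: 8000
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: agent-secrets        # placeholder: a Secret you create
                  key: openai-api-key
---
apiVersion: v1
kind: Service
metadata:
  name: my-agent
spec:
  selector:
    app: my-agent
  ports:
    - port: 80
      targetPort: 8000
```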

Environment Variables

Set the following environment variables for your deployment:

| Variable | Type | Required / Default | Description |
|---|---|---|---|
| OPENAI_API_KEY | string | required | Your LLM provider API key (OpenAI, Anthropic, etc.) |
| CELESTO_API_KEY | string | optional | Your Celesto API key for accessing managed tools and services |
| PORT | integer | default: 8000 | Port to run the agent server on |
| LOG_LEVEL | string | default: INFO | Logging level: DEBUG, INFO, WARNING, or ERROR |
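As a sketch of how your entry point might consume these variables, the helper below reads each one with the documented defaults. The load_config function is illustrative, not part of Agentor.

```python
import os


def load_config(env=None):
    """Read deployment settings from environment variables,
    falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "openai_api_key": env.get("OPENAI_API_KEY"),    # required at runtime
        "celesto_api_key": env.get("CELESTO_API_KEY"),  # optional
        "port": int(env.get("PORT", "8000")),           # default: 8000
        "log_level": env.get("LOG_LEVEL", "INFO"),      # default: INFO
    }
```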

Monitoring & Logs

For production deployments, we recommend setting up monitoring, logging, and auto-scaling based on your traffic patterns.