For Developers

Build with curate-me.ai

5 SDKs, one platform. Route LLM calls through governance with a base URL swap, build custom agents, orchestrate pipelines, embed chat widgets, and instrument everything.

5-Minute Quickstart

Route all your LLM calls through the governance gateway with a one-line change:

# Install
pip install curate-me

# Swap the base URL; that's it
from openai import OpenAI

client = OpenAI(
    base_url="https://api.curate-me.ai/v1/openrouter",
    api_key="cm_org_...",
)

# All calls now get cost tracking, PII scanning, rate limits, and audit trails

SDKs & Libraries

Python SDK

curate-me
pip install curate-me
Docs

Full gateway integration and agent development. Build agents with the BaseAgent pattern, create orchestrated pipelines, and mount as FastAPI endpoints.

Features

  • Gateway proxy with base URL swap
  • BaseAgent pattern for custom agents
  • Pipeline orchestration with dependencies
  • FastAPI endpoint mounting with SSE
  • Memory system integration
  • Cost tracking middleware

Example

from curate_me import CurateMe, BaseAgent

client = CurateMe(api_key="cm_org_...")

# Route any LLM call through the governance gateway
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello"}],
    # Cost tracking, PII scanning, and rate limits
    # applied automatically by the gateway
)

# Build a custom agent
class ResearchAgent(BaseAgent):
    model = "anthropic/claude-sonnet-4"
    tools = ["web_search", "knowledge_write"]
    budget_per_task = 0.50

    async def execute(self, task: str) -> str:
        return await self.chat(task)

# Create and run a pipeline (WriterAgent is defined the same way as ResearchAgent)
pipeline = client.pipeline.create(
    agents=[ResearchAgent(), WriterAgent()],
    flow="sequential",
)
result = await pipeline.run("Research AI governance trends")

TypeScript SDK

@curate-me/sdk
npm install @curate-me/sdk
Docs

Gateway integration for Node.js and browser environments. OpenAI-compatible interface with automatic governance.

Features

  • OpenAI-compatible chat interface
  • Streaming with SSE support
  • Cost headers on every response
  • Type-safe with full TypeScript types
  • Browser and Node.js compatible
  • Middleware pipeline support

Example

import { CurateMe } from "@curate-me/sdk";

const client = new CurateMe({
  apiKey: "cm_org_...",
  orgId: "my-org",
});

// Drop-in replacement for OpenAI — governance applied automatically
const stream = await client.chat.completions.create({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "Analyze this data..." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

// Access cost data from response headers
console.log("Cost:", stream.costUsd);
console.log("Tokens:", stream.totalTokens);

CLI

@curate-me/cli
npm install -g @curate-me/cli
Docs

Manage agents, runners, and configuration from the command line. Deploy fleets, check costs, and monitor activity.

Features

  • Fleet deployment and management
  • Cost and usage reporting
  • Agent configuration and testing
  • Runner lifecycle management
  • Knowledge base operations
  • CI/CD pipeline integration

Example

# Login and configure
cm auth login
cm config set org my-org

# Deploy a fleet from template
cm fleet deploy --template development-team \
  --budget 40/day \
  --models "opus,sonnet,gpt-4o"

# Check live costs
cm costs today
# ┌──────────────────┬─────────┬──────────┐
# │ Agent            │ Calls   │ Cost     │
# ├──────────────────┼─────────┼──────────┤
# │ code-review      │ 23      │ $1.47    │
# │ blog-dev         │ 15      │ $0.89    │
# │ docs-agent       │ 8       │ $0.12    │
# └──────────────────┴─────────┴──────────┘

# Monitor agent activity in real-time
cm activity watch --agent blog-dev

# Run guardrails test
cm guardrails test --input "My SSN is 123-45-6789"
# ⚠ PII detected: Social Security Number

Embed Widget

@curate-me/embed
npm install @curate-me/embed
Docs

Drop-in chat widget for any web application. Connects to your agent fleet with full governance; add AI chat with a script tag and a short init call.

Features

  • One-line integration for any website
  • Connects to your fleet agents
  • Customizable theme and branding
  • Memory and personalization built in
  • Cost tracking per user session
  • White-label support

Example

<!-- Add to any HTML page -->
<script src="https://cdn.curate-me.ai/embed.js"></script>
<script>
  CurateMe.init({
    orgId: "my-org",
    apiKey: "cm_pub_...",    // Public key (safe for browsers)
    agent: "support-agent",   // Which fleet agent to connect
    theme: {
      accent: "#B45309",
      position: "bottom-right",
    },
    memory: true,             // Enable cross-session memory
  });
</script>

<!-- Or with React -->
import { CurateMeChat } from "@curate-me/embed/react";

export function App() {
  return (
    <CurateMeChat
      orgId="my-org"
      agent="support-agent"
      theme={{ accent: "#B45309" }}
    />
  );
}

Observer SDK

@curate-me/observer-sdk
npm install @curate-me/observer-sdk
Docs

Observability and tracing for AI applications. Automatic instrumentation of LLM calls with cost, latency, and quality metrics.

Features

  • Auto-instrument LLM calls
  • Distributed trace correlation
  • Cost and latency metrics
  • Custom span annotations
  • OpenTelemetry export
  • Quality scoring integration

Example

import { Observer } from "@curate-me/observer-sdk";

const observer = new Observer({
  apiKey: "cm_org_...",
  serviceName: "my-app",
});

// Auto-instrument all OpenAI/Anthropic calls
observer.instrument();

// Or manually trace specific operations
const span = observer.startSpan("research-pipeline");
try {
  const result = await researchAgent.run(query);
  span.setAttributes({
    "agent.name": "researcher",
    "agent.cost": result.cost,
    "agent.tokens": result.tokens,
  });
} finally {
  span.end();
}

// View traces in dashboard or export to Jaeger/Datadog
// dashboard.curate-me.ai/traces

Integration Patterns

Gateway Proxy (Simplest)

5 minutes

Swap your base URL. All governance is applied automatically, with no other code changes.

  1. Install the SDK
  2. Set the base URL to the gateway
  3. All LLM calls are now governed

Agent Builder

30 minutes

Build custom agents with the BaseAgent pattern. Automatic tool calling, memory, and cost tracking.

  1. Define the agent class
  2. Register tools
  3. Deploy to the fleet
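The per-task budget shown on `BaseAgent` (e.g. `budget_per_task = 0.50`) is easiest to understand as a spend cap checked before each model call. Here is a minimal, self-contained sketch of that idea; it is not the SDK's internals, and the `BudgetTracker` name is made up for illustration:

```python
class BudgetExceeded(Exception):
    """Raised when an agent would spend past its per-task cap."""

class BudgetTracker:
    """Accumulates spend and refuses any charge that would breach the cap."""

    def __init__(self, budget_per_task: float):
        self.budget_per_task = budget_per_task
        self.spent = 0.0

    def charge(self, cost_usd: float) -> None:
        if self.spent + cost_usd > self.budget_per_task:
            raise BudgetExceeded(
                f"cap ${self.budget_per_task:.2f}, already spent ${self.spent:.2f}"
            )
        self.spent += cost_usd

tracker = BudgetTracker(budget_per_task=0.50)
tracker.charge(0.30)      # first call fits under the cap
try:
    tracker.charge(0.30)  # would total $0.60, so it is refused
    refused = False
except BudgetExceeded:
    refused = True
```

Enforcing the cap before the call (rather than after) is what keeps a runaway agent from overshooting its budget by one expensive request.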

Pipeline Orchestration

1-2 hours

Chain agents into workflows with dependencies, parallel execution, and conditional routing.

  1. Define pipeline stages
  2. Configure agent routing
  3. Add HITL gates
  4. Deploy
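Conceptually, a pipeline with dependencies resolves into an execution order where each stage runs only after the stages it depends on have finished. A self-contained sketch of that resolution using Python's standard library (the stage names are invented, and the SDK handles this ordering internally):

```python
from graphlib import TopologicalSorter

# Hypothetical stages: "research" feeds both "draft" and "fact_check",
# and "review" waits on both of those.
stages = {
    "research": set(),
    "draft": {"research"},
    "fact_check": {"research"},
    "review": {"draft", "fact_check"},
}

order = list(TopologicalSorter(stages).static_order())
# Stages with no unmet dependencies come first; "draft" and
# "fact_check" have no edge between them, so a runner could
# execute them in parallel once "research" completes.
```

The same graph also tells the orchestrator where conditional routing or HITL gates can sit: anywhere an edge crosses between stages.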

Full Fleet Deployment

Under 1 hour

Deploy a multi-agent team from templates with governance, budgets, and monitoring.

  1. Choose a template
  2. Configure per-agent settings
  3. Set governance rules
  4. Deploy the fleet

Full API Reference

REST API docs for agents, runners, costs, workflows, knowledge bases, and webhooks.

API Reference
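As a sketch of what a raw REST call looks like without an SDK: the `/agents` path and payload below are assumptions for illustration, so check the API reference for the real endpoints. The request is built but deliberately not sent:

```python
import json
import urllib.request

API_BASE = "https://api.curate-me.ai/v1"

def build_request(path: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) an authenticated JSON POST request."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/agents", "cm_org_...", {"name": "docs-agent"})
# urllib.request.urlopen(req) would actually send it; omitted here.
```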

How This Blog Uses the SDK

This blog is a reference application for curate-me.ai. Here's how we integrate:

Gateway Proxy

All LLM calls route through the governance gateway for cost tracking and PII scanning

src/lib/chat/gateway-client.ts

Fleet Orchestrator

Sonnet 4.6 orchestrator delegates to 5 specialist agents with tool calling

src/lib/chat/fleet-orchestrator.ts

Knowledge Base

Dual-write to local PostgreSQL and cloud KB for cross-agent search

src/lib/curate-me.ts

Webhook Handler

Receives agent results, processes refinement loops, sends Slack notifications

src/app/api/webhooks/agent/route.ts
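A webhook endpoint like this should verify that incoming payloads actually came from the platform. Whether curate-me.ai signs webhooks with an HMAC is an assumption here; the sketch below shows the common pattern in Python, with a made-up secret:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature: str) -> bool:
    """Compare the received signature against a locally computed HMAC-SHA256."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, which avoids timing attacks
    return hmac.compare_digest(expected, signature)

secret = "whsec_example"  # hypothetical webhook signing secret
body = b'{"agent": "blog-dev", "status": "completed"}'
good_sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

ok = verify_signature(secret, body, good_sig)
bad = verify_signature(secret, body, "deadbeef")
```

Rejecting unverified payloads before parsing them keeps the refinement loop and Slack notifications from being triggered by forged requests.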

Start Building

Add governance to your AI application in 5 minutes with a base URL swap. Or build a full multi-agent fleet with the SDK.