
Building a Blog With AI Agents

How I rebuilt my blog from scratch using OpenClaw agents governed by curate-me.ai — and why the blog itself is the best reference app for the platform.

March 10, 2026 · 3 min read
AI Collaboration

Claude (Opus 4.6): architecture design, code scaffolding, and pair programming

Total AI cost: $0.19

Governed by curate-me.ai

Why start over?

I had a blog. React frontend, Contentful CMS, Azure Cosmos DB backend, 300+ unit tests, deployment scripts for Azure — the whole enterprise stack. It sat untouched for 7 months.

The problem wasn't the code. It was the friction. Writing a post meant logging into Contentful, fighting with their rich text editor, and hoping the frontend (which was never actually built — the template zip was still unextracted) would eventually render it.

So I scrapped it. All of it.

The new stack

Here's what replaced it:

  • Next.js 15 with App Router and MDX for content
  • PostgreSQL for comments, ratings, and feedback
  • Tailwind CSS for styling
  • Docker Compose for deployment
  • Hetzner VPS — the same box running curate-me.ai

Total infrastructure cost: included in the $5/month I'm already paying.
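To make the stack concrete, here's a minimal sketch of what a Compose file for this setup could look like. Service names, ports, and credentials are illustrative assumptions, not the blog's actual config:

```yaml
# Hypothetical docker-compose.yml sketch -- not the real deployment config.
services:
  blog:
    build: .                 # Next.js app built from the repo's Dockerfile
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://blog:blog@db:5432/blog
    depends_on:
      - db
  db:
    image: postgres:16       # comments, ratings, and feedback live here
    environment:
      POSTGRES_USER: blog
      POSTGRES_PASSWORD: blog
      POSTGRES_DB: blog
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Running both containers on the same VPS that already hosts curate-me.ai is what keeps the marginal infrastructure cost at zero.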

The real play: agents

This blog isn't just a blog. It's a reference application for curate-me.ai, the AI agent governance platform I'm building.

The content pipeline runs on OpenClaw agents managed through the platform:

  1. blog-researcher — a web-profile runner that scans HN, Reddit, and arXiv daily for AI news
  2. blog-writer — a base-profile runner that turns research briefs into MDX drafts
  3. blog-moderator — a locked-profile runner that watches comments for spam
  4. blog-promoter — a web-profile runner that creates social posts when I publish
  5. blog-analyst — a locked-profile runner that tracks what readers care about

Every agent runs through the curate-me.ai gateway. Every LLM call is cost-tracked, PII-scanned, and logged. Drafts go through a human-in-the-loop approval queue before they publish.
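The gateway pattern described above can be sketched in a few lines. This is a toy in-memory version with hypothetical names, assuming only what the paragraph says (per-call cost tracking, logging, and a budget cap); it is not the actual curate-me.ai API:

```typescript
// Sketch of a governance gateway: every LLM call is wrapped so the
// platform can record its cost and refuse calls past a budget cap.
type CallRecord = { agent: string; costUsd: number; timestamp: number };

class GatewaySketch {
  private log: CallRecord[] = [];

  constructor(private capUsd: number) {}

  // Total spend recorded for one agent so far.
  spent(agent: string): number {
    return this.log
      .filter((r) => r.agent === agent)
      .reduce((sum, r) => sum + r.costUsd, 0);
  }

  // Execute an LLM call only while the agent is under its budget cap,
  // then append the call to the audit log.
  run<T>(agent: string, costUsd: number, call: () => T): T {
    if (this.spent(agent) + costUsd > this.capUsd) {
      throw new Error(`budget cap exceeded for ${agent}`);
    }
    const result = call();
    this.log.push({ agent, costUsd, timestamp: Date.now() });
    return result;
  }
}
```

The real platform adds PII scanning and the approval queue on top, but the core idea is the same: agents never talk to the model provider directly.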

Why this matters

Most AI agent demos are toy examples. "Look, my agent can search the web and write a summary!" Cool. Now run five of them in production with cost caps, audit trails, and approval workflows.

That's what curate-me.ai does. And this blog proves it works by using it every day.

Every post you read here will show:

  • Which agents contributed
  • What it cost in AI
  • How the human-AI collaboration worked

Radical transparency. Because if you're going to build trust in AI agents, you have to show the work.
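In practice, that per-post transparency can live right in each post's MDX frontmatter. Field names here are an illustrative sketch, not the blog's actual schema; the values are this post's own numbers:

```mdx
---
title: "Building a Blog With AI Agents"
agents: [blog-writer]
aiCostUsd: 0.19
collaboration: "Claude (Opus 4.6): architecture design, scaffolding, pair programming"
---
```

Keeping the metadata next to the content means the transparency footer renders from the same file the agents and I edit.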

What's next

This is post #1. The agents are warming up. Next, I'll write about:

  • Setting up the OpenClaw research agent and its SOUL.md configuration
  • How the curate-me.ai gateway tracks costs across a fleet of agents
  • The comment moderation pipeline — from spam detection to HITL approval
  • Time-travel debugging: replaying what an agent did step by step

If you're interested in running AI agents in production with proper governance, check out curate-me.ai. Or just keep reading — every post here is a case study.

See it in action: Take the demos tour to try the blog's AI features live, or explore the developer SDKs to build your own.
