The Goal and the Bet

The thesis was simple: PostHog is growing fast, their user base is hitting implementation problems, and nobody is writing good content that helps with those problems. Search "PostHog events not showing" and you get generic documentation or year-old forum posts. There's a gap between what PostHog users need and what exists online.

My consulting business (Adasight) does analytics implementation and experimentation for B2B SaaS companies. We're an Amplitude partner, but we work with PostHog clients too. The bet: if we publish authoritative content targeting PostHog frustration keywords — the searches people make when something is broken — we can capture them at peak motivation and convert them into $1,500 fixed-price audit bookings.

The competitive landscape confirmed the opportunity. Only 3 direct competitors exist for PostHog consulting content (Vision Labs, DataTools Pro, and one solo consultant). None have fixed-price audits. None rank strongly for frustration keywords. The window is open.

The constraint: I can't spend 20 hours per week writing content. I have clients to serve and pipeline to build. So the content engine needs to run primarily on AI agents, with me providing strategic direction and quality calibration.

Target: 45 articles, published over 12 weeks, targeting 8 keyword clusters. Each article drives toward one CTA: "Stop debugging. Get a $1,500 PostHog audit."

The Architecture: Boring on Purpose

The tech stack for growthanalyticsengine.com is deliberately unsexy:
  - Static HTML pages, no framework, no build pipeline
  - Article data stored as JSON (data/blog.json)
  - Python scripts that generate the pages (generate_blog.py)
  - Git for version control, with free auto-deploy hosting on Cloudflare Pages

Why this stack? Because AI agents can maintain it. An agent can write a JSON entry, run a Python script, commit to Git, and push. It cannot reliably manage a complex framework build pipeline, resolve npm dependency conflicts, or debug a Next.js hydration error.

The design system is a single CSS file: brand green (#2F7A62), Inter Tight for headings, Inter for body, Source Serif 4 for article content (gives it a publication feel). No images required — text-only content works with this design. The site isn't beautiful. It's functional and fast. That's the trade-off when your "designer" is an AI agent.

I learned this the hard way. We tried an Astro/Vercel migration for the main Adasight website. It was cancelled. Too many framework-specific decisions that agents handle poorly. Static HTML with generators is the sweet spot for AI-operated content sites.

The Agent Team and Their Roles

Three agents from my 10-agent team are involved in the content engine:

Prax (Website & SEO Agent)

Prax is the workhorse. His responsibilities:
  - Drafting articles as JSON entries in data/blog.json
  - Running the generators and updating sitemap.xml
  - Committing and pushing to trigger the Cloudflare Pages deploy
  - Submitting new URLs to IndexNow

Prax works from a detailed operational brief that includes: the exact JSON format for entries, the voice and style guide (calibrated from existing posts), the CTA template, and the keyword clusters with priorities.

Naomi (Lead Developer)

Naomi built the infrastructure and handles technical issues:

Naomi runs on the Mac Mini — always-on, available for deployments at any hour.

Anna (Chief of Staff)

Anna coordinates the work:

Between the three of them, the content engine runs with minimal input from me.

What Got Produced: The Numbers

As of April 2026, growthanalyticsengine.com has:

Content Type         Count   Location
Blog posts           19      /blog/{slug}/
Guides               21      /guides/{slug}/
Interactive tools    3       /tools/ (maturity assessment, sample size calculator, experimentation ROI)
Total                43 pages

Of these, only 1 page was PostHog-specific at launch (a PostHog vs Amplitude comparison). The remaining PostHog content is being produced now — the first 10 articles targeting frustration and audit keywords.

Production Cadence

The target is 2-3 articles per week during the ramp-up phase, scaling to 5 per week after voice calibration. Each batch of 3-5 articles takes one Prax session to draft and a second session to generate the pages, update the sitemap, and push.

Per article, here's the time breakdown:

Total Gregor time per article after calibration: approximately zero. Total Gregor time per week for the content engine: 30 minutes for strategic direction and occasional reviews.

The Publishing Workflow End-to-End

  1. Prax adds JSON entry to data/blog.json
  2. Prax runs python3 generate_blog.py
  3. Prax updates sitemap.xml with new URL
  4. Git add, commit, push to main
  5. Cloudflare Pages auto-deploys in ~30 seconds
  6. Optionally: submit new URLs to IndexNow for fast Bing/Perplexity indexing
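The steps above can be sketched as a small Python driver. The file names come from the workflow (generate_blog.py, push to main); the function names and the dry-run flag are illustrative assumptions, not the team's actual tooling:

```python
import subprocess

# Hypothetical publish driver mirroring the workflow above. It covers
# steps 2, 4, and 5 (generation, commit, push); step 1 (writing the JSON
# entry) and step 3 (sitemap update) happen before this runs.

def publish_commands(slug: str) -> list[list[str]]:
    """Return the shell commands for one publish cycle, in order."""
    return [
        ["python3", "generate_blog.py"],            # regenerate the HTML pages
        ["git", "add", "-A"],                       # stage JSON, HTML, sitemap
        ["git", "commit", "-m", f"publish: {slug}"],
        ["git", "push", "origin", "main"],          # Cloudflare Pages deploys on push
    ]

def publish(slug: str, dry_run: bool = True) -> None:
    for cmd in publish_commands(slug):
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

publish("posthog-events-not-showing")  # dry run: just print the commands
```

The dry-run default makes the script safe for an agent to rehearse before a real push.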

What Broke and How We Fixed It

The Content Quality Calibration Problem

The first 3 articles Prax produced were over-written. Not bad, just over-written: academic tone, too many qualifiers, hedge words everywhere ("typically," "generally," "in many cases"). They read like documentation, not practitioner content.

The fix wasn't adding a review layer. It was fixing the input. I added specific voice calibration to Prax's brief: "Direct, practitioner-to-practitioner. No fluff. No marketing-speak. Short intro (2-3 sentences), then get to the point. Specific tool features, real scenarios, honest trade-offs." I included 3 examples of sections I'd edited to the right voice.

After that calibration, output quality jumped. The principle from John Rush applies here: fix inputs, not outputs.

The Internal Linking Gap

This took 2 weeks to identify. The generators automatically added "related posts" to each article — but the logic was broken. It picked the first 2 posts in the JSON array for every article. So every new PostHog article showed the same 2 unrelated articles (an Amplitude vs GA4 comparison and a Mixpanel piece) as "related."

The fix: modify the generator to support an optional related_posts field in the JSON. If present, use those slugs. If absent, fall back to the default behavior. Small code change, big impact on internal linking quality.
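The fix can be sketched as a small helper inside the generator. The related_posts field name comes from the article; the fallback choice (the first two other posts in the array) is an assumption about the default behavior:

```python
# Sketch of the generator fix: honor an optional "related_posts" field
# per entry, falling back to a default pick when it is absent.

def related_posts(entry: dict, all_posts: list[dict], limit: int = 2) -> list[str]:
    """Return the slugs to show as 'related' for one article."""
    explicit = entry.get("related_posts")
    if explicit:                       # curated links win when present
        return explicit[:limit]
    # Fallback: the old first-N behavior, but skip the article itself
    return [p["slug"] for p in all_posts if p["slug"] != entry["slug"]][:limit]
```

A quick usage check: an entry with `"related_posts": ["a"]` gets exactly that list; an entry without the field falls back to the first two other posts in the array.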

Beyond the generator fix, there's a broader internal linking weakness. Blog posts don't cross-link to guides or tools in the body text. Each article is an island. This is a significant SEO gap that requires embedding <a href="..."> tags directly in the section body text. Prax can do this, but it requires explicit planning for each article — which links go where.

Agent Memory Degradation

After about 2 weeks of continuous work on the content engine, Prax's output quality started dropping. Articles became more generic, less specific to PostHog, more likely to repeat the same structural patterns. The reason: context window filling up with prior session history, old article drafts, and accumulated noise.

The fix: regular memory maintenance. Pruning the agent's context every 1-2 weeks, refreshing the operational brief, and starting fresh sessions for new content batches. This is an ongoing maintenance cost — roughly 30 minutes per week — but it's essential for quality.

The Google Sandbox Reality

The site launched in March 2026. As of April, Google has indexed only the guides hub page. Individual blog posts and guides aren't showing in search results yet. This is the Google sandbox period — new domains take 2-4 months to get meaningful indexing, regardless of content quality.

This isn't a failure. It's expected. But it means the content engine is currently producing articles that nobody can find through search. The strategy accounts for this: publish during the sandbox period so that when Google does index the site, there's a full content library waiting. But the emotional reality of publishing good content into the void for 2-3 months is something nobody warns you about.

The Amplitude Analytics Gap

The site has Amplitude SDK loaded for tracking, but the API key is still a placeholder. We can't currently measure CTA clicks, page engagement, or conversion funnels. This is a known gap — not blocking for content production, but critical for measuring whether the $1,500 audit CTA is actually converting. It's on the fix list.

The Economics: What 45 Articles Actually Cost

Let me compare the AI agent content engine to the traditional alternative.

The Agent Way

The Content Agency Way

The Freelance Writer Way

The AI agent approach costs roughly 5-10% of the agency approach, with comparable quality for structured, factual content (guides, tutorials, comparisons). The quality gap shows up in original analysis, emotional storytelling, and contrarian takes — areas where human writers still win. For PostHog debugging guides and setup checklists, the agent output is strong enough.

The 80% cost reduction estimate I keep citing is conservative. For this type of content (structured, technical, SEO-targeted), it's closer to 90-95%.

Lessons for Anyone Building a Content Engine With AI

1. Pick the Boring Stack

Static HTML + JSON data + Python generators + free hosting. No framework. No build pipeline. No dependencies that agents can't manage. Every hour you spend debugging a build tool is an hour not spent on content. The site doesn't need to be beautiful. It needs to load fast, render clean HTML, and have proper schema markup.

2. Structure Your Content as Data

JSON is the secret weapon. Each article is a structured data entry: slug, title, sections with H2 headings, body text, FAQs, related articles. This structure means the generator handles all formatting, schema markup, navigation, and layout. The agent only writes content — it doesn't touch HTML templates or CSS.
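As a concrete illustration, one entry might look like the following. The field list mirrors the article's description (slug, title, H2 sections, FAQs, related articles), but the exact key names and values are assumptions, not the site's actual schema:

```python
import json

# One article as a structured data entry. The agent only writes entries
# like this; the generator turns them into HTML, schema markup, and nav.
entry = {
    "slug": "posthog-events-not-showing",
    "title": "PostHog Events Not Showing: A Debugging Checklist",
    "sections": [
        {"h2": "Check the SDK initialization",
         "body": "Most missing-event reports trace back to ..."},
    ],
    "faqs": [
        {"q": "Why are my PostHog events delayed?",
         "a": "Events are batched client-side before sending ..."},
    ],
    "related_posts": ["posthog-vs-amplitude"],
}

# Serialize for data/blog.json, which holds an array of such entries
blog_json = json.dumps([entry], indent=2)
```

Because the entry is pure data, the agent never touches templates or CSS; a malformed entry fails loudly at generation time instead of shipping broken HTML.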

3. Invest in Voice Calibration

The first 3 articles in any new topic cluster should be reviewed by a human. Mark up what's wrong. Show examples of what's right. Feed that back into the agent's brief. This calibration step takes 30-60 minutes total and saves dozens of hours of editing later.

4. Accept the Sandbox Period

New domains take 2-4 months to get meaningful Google indexing. Publish anyway. When indexing happens, you want a full content library — not 3 articles. Use the sandbox period to build volume. Submit to IndexNow for faster Bing/Perplexity presence in the meantime.
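IndexNow submission is a simple POST of a URL list. Here is a hedged sketch following the public IndexNow protocol; the domain and key are placeholders, and the protocol expects the key file to be served from the site root:

```python
import json
import urllib.request

# Sketch of an IndexNow submission per the public protocol: POST a JSON
# url list to api.indexnow.org. Host and key below are placeholders.

ENDPOINT = "https://api.indexnow.org/indexnow"

def indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file at the site root
        "urlList": urls,
    }

def submit(host: str, key: str, urls: list[str]) -> int:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(indexnow_payload(host, key, urls)).encode(),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:  # 200/202 means accepted
        return resp.status
```

This slots naturally into the publishing workflow as the optional final step after each deploy.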

5. Plan Internal Links From Day One

The biggest SEO mistake we made was treating each article as standalone. Internal linking — contextual links between related articles, hub pages that organize topic clusters — should be planned from the first article. It's much harder to retrofit than to build in.

6. Separate the Frustration Keywords

The highest-ROI content targets users at peak frustration: "PostHog events not showing," "PostHog funnels not loading." These searches have zero competition because nobody writes answers to them. The search volume is low per keyword but the conversion intent is extreme — someone searching this at 11pm is one good article away from booking an audit. Start with these before chasing high-volume comparison keywords.

7. Measure or Don't Bother

We launched without analytics tracking (Amplitude API key still a placeholder). That means we have no data on CTA clicks, engagement, or conversion. Publishing content without measurement is shooting in the dark. Get tracking live before you publish, or you're building an engine you can't steer.

The content engine isn't a finished product — it's a system that improves with each batch. The first articles weren't great. The internal linking was broken. The analytics don't exist yet. But the architecture is right, the economics work, and the production pipeline is running. Three agents, one human directing them, 45 articles. That's the model.

Frequently Asked Questions

How long does it take for AI-generated content to rank on Google?

The same as any content: 2-4 months for a new domain, faster for established domains. Google doesn't penalize AI-generated content if it's helpful and well-structured. The sandbox period for new domains is the real bottleneck, not the content source. Focus on proper technical SEO (schema markup, sitemap, internal linking) and target low-competition keywords first.

Can AI agents write content that actually converts?

For structured, factual content (guides, tutorials, comparisons, troubleshooting articles) — yes. AI agents produce consistently good output after voice calibration. For opinion pieces, emotional storytelling, and content that requires original insight — the human still needs to provide the core ideas. The agent structures and scales them. The conversion comes from targeting the right keywords at the right intent, not from the prose quality.

What's the minimum viable stack for an AI content engine?

A Claude Code session (or equivalent AI coding assistant), a JSON file to store article data, a Python script to generate HTML pages, and a free Cloudflare Pages account for hosting. Total infrastructure cost: effectively zero. You can produce and publish your first article in under 2 hours with this setup. Scale by adding more JSON entries and re-running the generator.
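Under those assumptions, a minimal generator fits in a few lines. The file path matches the article (data/blog.json); the template and field names are illustrative, and a real generator would add schema markup, navigation, and sitemap entries:

```python
import json
from pathlib import Path

# Minimal static-site generator in the spirit described above: read
# article entries from a JSON file, write one HTML page per slug.

TEMPLATE = """<!doctype html>
<html lang="en"><head><meta charset="utf-8"><title>{title}</title></head>
<body><article><h1>{title}</h1>{sections}</article></body></html>"""

def render(entry: dict) -> str:
    """Render one article entry to a standalone HTML page."""
    sections = "".join(
        f"<h2>{s['h2']}</h2><p>{s['body']}</p>" for s in entry.get("sections", [])
    )
    return TEMPLATE.format(title=entry["title"], sections=sections)

def build(data_file: str = "data/blog.json", out_dir: str = "blog") -> None:
    """Regenerate every page from the JSON data file."""
    for entry in json.loads(Path(data_file).read_text()):
        page = Path(out_dir) / entry["slug"] / "index.html"
        page.parent.mkdir(parents=True, exist_ok=True)
        page.write_text(render(entry))
```

Scaling really is just appending another JSON entry and re-running build().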

How do you maintain content quality at scale with AI agents?

Three mechanisms: (1) voice calibration — review the first 3 articles in any new topic cluster and feed corrections back into the agent's brief; (2) structured output — JSON format forces consistent structure, which the generator turns into consistent HTML; (3) regular memory maintenance — prune agent context every 1-2 weeks to prevent quality degradation from context window noise. After calibration, batch-review monthly rather than reviewing every article.

This article was drafted by an AI agent and reviewed by Gregor Spielmann. The source material, frameworks, and experiences are real. The writing is AI-assisted. Learn how this site works.