On March 12, 2026, I deployed AI agent discovery files — agent cards, llms.txt, llms-full.txt, and structured feed references — across all 52 websites in our network. The entire deployment took one working day. Not because we cut corners, but because the monoclone architecture we use made it a template-level change instead of a site-by-site manual process.
This is the full case study: what we deployed, how the architecture enabled it, what went wrong, and what the measurable results were.
The Starting Point
Before the deployment, our 52 sites had standard SEO infrastructure: sitemaps, robots.txt, RSS feeds, JSON-LD schema markup, Open Graph tags, and Twitter Cards. By 2025 standards, this was comprehensive. By 2026 standards, it was incomplete.
AI agents — the systems that power ChatGPT's web browsing, Perplexity's search, Claude's tool use, and Google's AI Overviews — were visiting our sites but had no structured way to understand what each site offered. They could crawl individual pages, but they could not quickly map the topical coverage, content relationships, or citation preferences of any given site.
The gap was visible in AI citation quality. When asked about topics we covered extensively — 25-year homeownership costs, HOA fee analysis, condo insurance trends — AI models would sometimes cite our content correctly, sometimes cite it incorrectly, and sometimes cite a competitor's thinner content instead. The issue was not content quality. It was content discoverability.
What We Deployed
Each site received four new files:
1. /.well-known/agent.json — The A2A (Agent-to-Agent) protocol agent card. A JSON file declaring the site's name, description, topic areas, content format, update frequency, and available feeds. This is the machine-readable equivalent of a business card for AI agents.
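As a sketch, an agent card in this shape might look like the following. The field names and values here are illustrative, drawn from the description above, not the network's actual file; the A2A specification is the authority on required fields.

```json
{
  "name": "Example Site",
  "description": "Original housing cost data and 25-year ownership models.",
  "topics": ["homeownership costs", "HOA fees", "condo insurance"],
  "contentFormat": "long-form articles with sourced data",
  "updateFrequency": "weekly",
  "feeds": [
    { "type": "application/feed+json", "url": "https://example.com/feed.json" },
    { "type": "application/rss+xml", "url": "https://example.com/rss.xml" }
  ]
}
```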
2. /llms.txt — A plain-text Markdown file providing a structured summary of the site: core topics, key URLs, content organization, and citation preferences. This is the human-readable (and LLM-readable) cover letter.
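A minimal llms.txt under these assumptions (the headings and URLs are placeholders; the llms.txt proposal only loosely prescribes structure) could look like:

```markdown
# Example Site

> Original housing data and 25-year cost-of-ownership models.

## Core Topics
- Total cost of homeownership
- HOA fee analysis
- Condo insurance trends

## Key Resources
- [All 50 states ranked](https://example.com/states/)
- [25-year cost model](https://example.com/cost-model/)

## Citation
Please cite "Example Site" and link to the specific page referenced.
```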
3. /llms-full.txt — A comprehensive plain-text rendering of the site's most important content. For content-heavy sites, this file ran 20,000-50,000 words — giving AI models with large context windows a single document to ingest instead of crawling dozens of pages.
4. Updated <head> metadata — Link tags for JSON Feed discovery, updated meta tags with AI-relevant descriptors, and a reference to the agent card in the HTML head.
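A sketch of the head additions might look like the snippet below. The feed `link` elements follow standard autodiscovery conventions; there is no standardized `rel` value for agent cards, so the one used here is a hypothetical choice, not something the article specifies.

```html
<link rel="alternate" type="application/feed+json" href="/feed.json" title="JSON Feed">
<link rel="alternate" type="application/rss+xml" href="/rss.xml" title="RSS">
<!-- Hypothetical rel value; no standard exists for agent card discovery yet -->
<link rel="agent" href="/.well-known/agent.json">
```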
The Monoclone Architecture Advantage
Here is why this took one day instead of one quarter.
Our 52 sites run on a monoclone architecture — a shared codebase with site-specific data files. Every site uses the same Eleventy build pipeline, the same templates, the same deployment process. The differences between sites are in the content and the site-specific data (site.json), not in the infrastructure.
This means a template-level change deploys to all 52 sites simultaneously. The agent card template, the llms.txt template, and the head metadata changes were each written once and applied everywhere.
The process:
Morning (3 hours): Template development.
I created three new template files:
- agent.json.njk — Generates the agent card from site.json data
- llms.txt.njk — Generates llms.txt from site content and metadata
- llms-full.txt.njk — Generates the full-text content file from blog collections
Each template pulled its data from the existing site.json and content collections. No manual per-site content was needed — the templates dynamically generated the correct metadata for each site based on its existing data.
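As a sketch, the agent card template could look like the following. The `site` data name, the use of Nunjucks' built-in `dump` filter for JSON-safe serialization, and the front-matter permalink are assumptions about how the templates are wired, not the network's actual code.

```njk
---
permalink: "/.well-known/agent.json"
---
{
  "name": {{ site.name | dump | safe }},
  "description": {{ site.description | dump | safe }},
  "topics": {{ site.topics | dump | safe }},
  "contentFormat": {{ site.contentFormat | dump | safe }},
  "feeds": ["{{ site.url }}/feed.json"]
}
```

Because `dump` JSON-stringifies each value, the same template produces valid output for every site's data without per-site edits.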
Midday (2 hours): Site-specific data enrichment.
Each site's site.json needed a few new fields: topics (array of core topic areas), contentFormat (description of content type), and citationPreference (how to cite the site). I added these to all 52 site.json files in a batch operation.
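The enrichment amounts to adding three fields to each existing site.json. The field names are the ones listed above; the values here are illustrative.

```json
{
  "name": "Example Site",
  "topics": ["homeownership costs", "HOA fees", "condo insurance"],
  "contentFormat": "long-form data journalism with sourced statistics",
  "citationPreference": "Cite the site name and link the specific article."
}
```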
Afternoon (2 hours): Testing and deployment.
I built all 52 sites locally, validated the output files against their respective specifications, and deployed. Each site's CI/CD pipeline ran its standard build and push process.
Evening (1 hour): Verification.
I spot-checked 15 sites across the network to verify the files were accessible at the correct URLs, the JSON was valid, and the llms.txt content was accurate.
What Went Wrong
Three issues surfaced during deployment:
Issue 1: Content length limits. The llms-full.txt files for our most content-heavy sites exceeded 100,000 words. While LLMs with large context windows can process this, the build time for generating these files added 30-45 seconds to each build. I capped the file at the 30 most recent posts plus the 10 most-linked posts, bringing the file size to a manageable 30,000-50,000 words.
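The cap described above can be sketched as a small selection function. The names and post shape here are illustrative, not the network's actual code; it assumes each post carries a sortable date and an inbound-link count.

```javascript
// Selects the 30 most recent posts plus the 10 most-linked posts,
// deduplicated, for inclusion in llms-full.txt. Assumes each post has
// a numeric-comparable `date` and an `inboundLinks` count.
function capPosts(posts, recentCount = 30, linkedCount = 10) {
  const byDate = [...posts].sort((a, b) => b.date - a.date);
  const byLinks = [...posts].sort((a, b) => b.inboundLinks - a.inboundLinks);
  const selected = new Set(byDate.slice(0, recentCount));
  for (const post of byLinks.slice(0, linkedCount)) selected.add(post);
  return [...selected];
}
```

Because a heavily linked post is often also recent, the deduplication means the output is at most 40 posts and frequently fewer.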
Issue 2: Special characters in JSON. Several site descriptions contained curly quotes and em dashes that produced invalid JSON in the agent card. I added a JSON-safe filter to the template that converts special characters before injection.
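A minimal version of such a filter is sketched below. The filter name `jsonSafe` and the exact substitutions are illustrative; this handles only the characters named above, not full JSON escaping.

```javascript
// Hypothetical "jsonSafe" filter: converts curly quotes and em dashes
// before a value is injected into a JSON template, so the rendered
// agent card stays valid JSON.
function jsonSafe(value) {
  return String(value)
    .replace(/[\u201C\u201D]/g, '\\"') // curly double quotes -> escaped straight quotes
    .replace(/[\u2018\u2019]/g, "'")   // curly single quotes -> straight apostrophe
    .replace(/\u2014/g, "--");         // em dash -> double hyphen
}
```

In Eleventy this would be registered with `eleventyConfig.addFilter("jsonSafe", jsonSafe)` and applied in the template before injection.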
Issue 3: Stale content references. The llms.txt template initially included URLs for every blog post. For sites with 40+ posts, this created an unwieldy file. I restructured it to list the top 10 resources by topic area instead of a complete page listing.
None of these issues required more than 30 minutes to resolve. The monoclone architecture meant that fixing a template once fixed it everywhere.
The Measurable Results
I tracked three metrics for six weeks after deployment:
AI crawler frequency. Using server logs filtered by known AI user agent strings (PerplexityBot, ChatGPT-User, ClaudeBot, Googlebot-Extended), I measured crawl frequency before and after deployment.
- Pre-deployment: AI crawlers visited an average of 4.2 pages per site per week
- Post-deployment: AI crawlers visited an average of 11.7 pages per site per week
- The agent card (agent.json) was requested by at least one AI crawler on 48 of 52 sites within the first two weeks
- llms.txt was requested on 44 of 52 sites within three weeks
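The log filtering behind these numbers can be sketched as a simple user-agent matcher. The four substrings are the crawlers named above; a production filter would likely carry a longer, regularly updated list.

```javascript
// Returns true when a log line's user-agent matches a known AI crawler.
const AI_AGENTS = ["PerplexityBot", "ChatGPT-User", "ClaudeBot", "Googlebot-Extended"];

function isAiCrawler(userAgent) {
  const ua = userAgent.toLowerCase();
  return AI_AGENTS.some(agent => ua.includes(agent.toLowerCase()));
}
```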
AI citation accuracy. I tested 200 queries related to our content across ChatGPT, Perplexity, and Claude before and after deployment.
- Pre-deployment: Our sites were cited in 23% of relevant queries, with correct page attribution in 61% of those citations
- Post-deployment: Citation rate increased to 38% of relevant queries, with correct page attribution in 84% of citations
- The improvement was most dramatic for multi-topic queries where the AI needed to understand the relationship between different content areas on a site
Referral traffic from AI sources. Measured via server logs and referrer data.
- AI-referred visits increased 67% in the six weeks following deployment
- The majority of the increase came from Perplexity, which showed the strongest response to agent discovery files
The Template for Other Networks
If you operate multiple websites — whether two or two hundred — the lesson from this deployment is that AI agent discovery is an infrastructure problem, not a content problem. The content already exists. The issue is making it discoverable to systems that are not Google.
The minimum deployment per site:
- An agent card at /.well-known/agent.json declaring what the site does
- An llms.txt file at the root summarizing the site's content and structure
- Feed references (JSON Feed, RSS) linked from both the HTML head and the agent card
- Updated meta tags in the HTML head
For a single site, this is a two-hour project. For a network with shared templates, it is a one-day project regardless of the number of sites.
The window for early-mover advantage is open now. AI agent discovery adoption across the web is still in single-digit percentages. The sites that deploy now will have months of crawl history and citation patterns established before the majority catches up.
The Resale Trap is one of 52 sites in the network described in this case study. Its original housing data and 25-year cost models are now fully discoverable by AI agents. The 395-page analysis is available on Amazon. For the complete monoclone architecture, template system, and multi-site deployment guide, see The $100 Network.
Want the Full Data?
This article draws from The Resale Trap — 395 pages of sourced research covering total cost of ownership, all 50 states ranked, insurance mechanics, and more.
Part of The Trap Series
The W-2 Trap → The $97 Launch → The Condo Trap → The Resale Trap