Claude Code for Content: The Config File, Skills, and Rules I Use to Avoid AI Slop
The banned phrase list, source verification rules, and content workflow that produced 224 ChatGPT citations on a brand-new domain. With the config file I use in production.
52% of new articles online are AI-generated (Futurism, 2025). Most of them sound like the same person wrote them.
I use Claude Code to produce 2-3 long-form articles per week across multiple sites. AI handles research, structure, source verification, and first drafts. I handle voice, data, and editorial judgment. Readers don’t flag the output as AI. Detection tools don’t either.
This is the full breakdown. The config file, the banned phrase list, the skills and agents that handle research, and the line I draw between what AI does and what stays human.
I’ll also show you two real articles this system produced, with the data to back up why it works.
Why All AI Content Sounds Like the Same Person
AI writing blends together because models learn from the internet, and the internet is now full of AI writing. Each new model trains on the last model’s output. The voice gets flatter with every generation.
You can probably spot it by now. The same patterns show up in roughly every other AI-generated article, and the banned list below catalogs the worst of them.
You can’t fix this with a better prompt. A single instruction can’t override patterns learned from billions of words. You need rules that apply to every output, every time, without you remembering to paste them.
The Config File That Makes the Difference
Claude Code reads a CLAUDE.md file at the root of every project. Think of it as a permanent instruction manual. It’s not a prompt you paste into a chat. It’s a file the tool reads automatically, every time, on every task.
You write your content rules once. Claude follows them on every article after that.
Here’s what the content section of my production CLAUDE.md looks like:
## Banned Language
- Em-dashes
- Filler: just, very, actually, basically
- Corporate: leverage, robust, scalable, streamline, delve
- Hedging: "It's worth noting", "You may want to consider"
- AI cliché structures:
  - "No X. No Y. Just Z."
  - "It's not just about X. It's about Y."
  - "game-changer" / "supercharge"
  - "Enter: [thing]"
  - "And here's the kicker"
  - "X changed everything"
  - Arrow formatting for lists
  - "The best part?" / "Want access?"
  - "If you're serious about X, [CTA]"
  - "To your success" sign-offs
  - "Not because of X. But because of Y."
## Writing Style
- Lead with data, not opinions. Every claim needs a source.
- No filler intros ("In today's digital landscape..."). Start with the point.
- 2-3 sentences per paragraph maximum.
- No sentences over 30 words.
- Active voice by default.
- Target Flesch-Kincaid grade 8-10.

The trick is being specific, not vague. “Write in a natural tone” does nothing. “Never use the phrase ‘It’s worth noting’” is a rule Claude Code actually follows.
When you ban 20+ specific patterns, the model has to find other ways to say things. Those alternatives sound less like AI because they don’t match the patterns readers have learned to spot.
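A list like this can also be enforced outside Claude as a pre-publish gate. Here's a minimal sketch of a draft linter in Python, using a small hypothetical subset of the patterns above (extend it to match your own list):

```python
import re

# Hypothetical subset of the banned list above; extend to match your own.
BANNED = [
    r"—",                                   # em-dashes
    r"\bdelve\b", r"\bleverage\b", r"\brobust\b",
    r"\bgame-changer\b", r"\bsupercharge\b",
    r"It's worth noting",
    r"It's not just about \w+\. It's about",
]

def lint(draft: str) -> list[str]:
    """Return every banned pattern found in the draft."""
    return [p for p in BANNED if re.search(p, draft, re.IGNORECASE)]

draft = "It's worth noting that this robust tool is a game-changer."
print(lint(draft))  # flags three patterns
```

Run it on the final text before publishing; anything it flags either slipped past the config or came from your own editing pass.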
The Rule That Stops AI From Making Things Up
AI fabricates statistics. Not on purpose, but because models generate things that sound true even when they aren’t. “73% of marketers say...” with no study, no URL, no source. Confident nonsense.
One CLAUDE.md rule fixes most of this:
## Source Verification Protocol
- Every external claim requires a source with URL and date
- Citation format: "81% of SEOs prioritize AI (Source, Month Year)"
- If a claim cannot be verified:
1. Remove the claim, OR
2. Reframe: "Many SEOs report...", OR
3. Mark [NEEDS VERIFICATION] for manual research
- Never guess. Never fabricate statistics.

Here’s the difference in practice:
Without the rule:
Studies show that 73% of content marketers are now using AI tools to accelerate their workflow, making it more important than ever to stand out.
With the rule:
52% of new online articles are AI-generated as of mid-2025 (Futurism, 2025). On YouTube, 21-33% of recommended content qualifies as AI slop, generating $117 million in annual ad revenue across 278 synthetic content channels (Search Engine Journal, 2025).
The first version sounds confident but says nothing verifiable. The second has paper trails. Readers trust it because they can check it. AI answer engines cite it for the same reason.
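The protocol can also be spot-checked mechanically after drafting. A rough sketch, assuming the "(Source, Year)" citation format from the rule above and a naive sentence split:

```python
import re

# Matches the "(Source, Month Year)" / "(Source, Year)" citation format.
CITATION = re.compile(r"\([A-Z][A-Za-z &]+, (?:[A-Z][a-z]+ )?\d{4}\)")
STAT = re.compile(r"\b\d+(?:\.\d+)?%")

def unsourced_stats(text: str) -> list[str]:
    """Flag sentences that cite a percentage without a source."""
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive, but fine for drafts
    return [s for s in sentences if STAT.search(s) and not CITATION.search(s)]

ok = "52% of new articles are AI-generated (Futurism, 2025)."
bad = "Studies show that 73% of marketers now use AI tools."
print(unsourced_stats(ok + " " + bad))  # only the unsourced claim comes back
```

It won't catch every fabricated claim, but any sentence it returns is exactly the kind of confident nonsense the rule exists to stop.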
How an Article Actually Gets Made
Let me walk through the real process. Not a theoretical workflow. The steps I follow every time.
Step 1: Research With a Content Brief
I start by running a /content-brief skill. This analyzes the top 10 Google results for my target keyword. It pulls their heading structures, word counts, and topic coverage. Then it builds a brief: what to write, how to structure it, and what gaps exist in the content that already ranks.
claude /content-brief "instagram dm automation"

Behind the scenes, Claude Code launches a deep-web-researcher subagent. This agent runs multiple web searches in parallel, cross-references what it finds, and returns structured data with URLs and publication dates. It’s doing in 2 minutes what would take me 45 minutes of tab-switching.
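The heading-analysis piece of that research is simple to sketch. This isn't the skill's actual code, just an illustration of pulling H2/H3 structure from a competitor page's HTML with the standard library:

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collect h2/h3 text from a competitor page's HTML."""

    def __init__(self):
        super().__init__()
        self.headings = []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag and data.strip():
            self.headings.append((self._tag, data.strip()))

parser = HeadingExtractor()
parser.feed("<h2>Pricing</h2><p>text</p><h3>Free tier</h3>")
print(parser.headings)  # [('h2', 'Pricing'), ('h3', 'Free tier')]
```

Do this for the top 10 results and you have the raw material for a brief: which subtopics everyone covers, and which nobody does.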
Step 2: Check for Cannibalization
Before writing, I verify the target keyword isn’t already covered on my site. The /cannibalization skill checks Google Search Console data and flags conflicts.
claude /cannibalization --gsc-data ./gsc-export.csv

This step has saved me from writing articles that would have competed with my own pages. Sounds obvious, but it’s the kind of check most people skip because it takes time. With a skill, it takes 30 seconds.
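If you want the same check without the skill, the core logic fits in a few lines. A sketch, assuming a GSC performance export with query and page columns (column names vary by export):

```python
import csv
import tempfile

def find_conflicts(gsc_csv_path: str, target_keyword: str) -> list[str]:
    """List pages already getting impressions for the target keyword."""
    pages = set()
    with open(gsc_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if target_keyword.lower() in row["query"].lower():
                pages.add(row["page"])
    return sorted(pages)

# Demo with a tiny synthetic export; a real file comes from Search Console.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False,
                                 newline="") as f:
    f.write("query,page\n"
            "instagram dm automation tools,https://example.com/dm-guide\n"
            "email tips,https://example.com/email\n")
    demo_path = f.name

print(find_conflicts(demo_path, "dm automation"))
```

If the list comes back non-empty, the keyword belongs in an update to an existing page, not a new article.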
Step 3: Draft With All Constraints Active
The /copywriting skill generates the draft. Because the CLAUDE.md file is always active, the draft automatically follows the banned language list, the citation protocol, the paragraph limits, and the readability targets.
The draft comes out structured like this:
- Every H2 section opens with a 50-70 word “citation block” written in factual, third-person tone. AI search engines love pulling these as source material.
- Every stat has a source in parentheses.
- Paragraphs are 2-3 sentences. No exceptions.
- No filler intros. Sections start with the point.
Step 4: Fact-Check With a Dedicated Agent
After the draft is ready, I launch a fact-checker subagent. It takes every claim in the article that references external data and independently verifies it. If it can’t find the source, it gives me three options: remove the claim, reframe it (“Many marketers report...”), or mark it for manual research.
This runs in the background while I start the human editorial pass.
Step 5: The Human Pass (This Part Can’t Be Skipped)
This is where the article stops being a good AI draft and becomes something worth publishing.
I read every paragraph. I add:
- My own data. Screenshots from Google Search Console, Ahrefs dashboards, or terminal outputs from real Claude Code sessions. Nobody else has my data.
- Opinions that require judgment. “Are these numbers good? Here’s what I think and why.” AI can provide data. It can’t evaluate data.
- Things that went wrong. What didn’t work. What surprised me. What I’d do differently. This is the part readers remember.
- Voice. The way I phrase things. Short. Direct. Sometimes blunt. The CLAUDE.md constrains AI’s bad habits, but voice comes from the human.
Step 6: SEO and Schema Checks
After the human pass, I run two more skills:
/schema-gen creates Article and FAQ schema (JSON-LD structured data). Every article gets this. It helps both Google and AI search engines understand what the page covers.
/seo-audit crawls the page and checks title tag length, meta description, heading hierarchy, canonical URLs, and structured data validation. It’s the final gate.
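For context, here's roughly what Article and FAQ JSON-LD looks like. The values below are illustrative placeholders, not actual /schema-gen output:

```python
import json

# Illustrative Article schema; every value here is a placeholder.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Instagram DM Automation: Setup in 10 Min",
    "datePublished": "2025-01-15",
    "author": {"@type": "Person", "name": "Author Name"},
}

# Illustrative FAQ schema; one question shown, real pages carry several.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is Instagram DM automation?",
        "acceptedAnswer": {"@type": "Answer", "text": "Placeholder answer."},
    }],
}

# Both blocks get embedded in a <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```

Google's Rich Results Test will tell you whether the markup validates; the /seo-audit step above covers the same check.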
Step 7: Track AI Citations After Publishing
A week or two after publishing, I run /ai-visibility to check whether the article shows up in AI-generated answers across ChatGPT, Perplexity, Google AI Overviews, and Grok.
This step closes the loop. If the article isn’t getting cited, I can see what’s missing and adjust the content.
Two Real Articles This System Produced
Theory is nice. Results are better. Here are two articles made with this exact workflow.
Example 1: Instagram DM Automation Guide
Article: Instagram DM Automation: Setup in 10 Min
This is a comprehensive guide for CreatorFlow, an Instagram DM automation tool. It covers setup steps, pricing comparisons, tool breakdowns, and use cases.
What the workflow handled:
- /content-brief identified gaps in existing DM automation guides (most were outdated or focused on a single tool)
- The draft followed all CLAUDE.md constraints: no filler, sourced claims, short paragraphs, citation blocks under every H2
- /schema-gen added FAQ and Article schema
- Human pass added: real pricing data, tool-specific screenshots, the author’s take on which tool fits which use case
The CreatorFlow domain earned 224 ChatGPT citations across 61 pages within 90 days, and this article was one of the highest-performing pieces in the set.
Example 2: Content Strategy Case Study (With Real Numbers)
Article: How Claude Code Helped Us Get 1,000+ Waitlist Signups in 2 Months
This one shows the workflow’s results on creatorflow.so, a brand-new domain with zero backlinks.
The majority of waitlist signups came from AI search referrals, not traditional Google organic. ChatGPT and Perplexity cited the content, users asked follow-up questions about the tool, and those conversations converted to waitlist visits.
What made this work wasn’t the volume (though 40 articles in 12 weeks matters). It was the structure. Every article opened sections with citation-ready blocks. Every claim had a source. The CLAUDE.md rules prevented slop from creeping in, even at that publishing pace.
Without the workflow, that volume would have required either a team of writers or a pile of generic AI content nobody would cite. Claude Code let a solo founder produce structured, citation-ready articles at a pace that landed 224 ChatGPT mentions in 90 days on a domain nobody had heard of before.
What Stays Human (Non-Negotiable)
Some content should never be delegated, no matter how good the CLAUDE.md rules are: your own data, your own experience, and the opinions that require judgment.
Google calls this “Experience” in their E-E-A-T framework. It’s the hardest quality signal to fake because it requires proof that the author has done the thing they’re writing about.
The CreatorFlow case study is a good example. Anyone can write about content strategy. Only someone who built CreatorFlow can share its actual Ahrefs dashboard, GSC data, and waitlist numbers. That data is the article’s defensibility. No competitor can replicate it.
How to Start (25 Minutes)
If you want to try this, you need four things in your CLAUDE.md:
Banned language list (10 min). Copy the one from this article. Add or remove phrases to match your voice. 15-20 patterns minimum.
Source verification rule (5 min). The protocol I showed above. This single rule kills most hallucinated statistics.
Structural constraints (5 min). Max 2-3 sentences per paragraph. 30-word sentence cap. Active voice. Sections lead with facts.
Human-only boundaries (5 min). Write down what you always add yourself: your data, your opinions, your experience. This prevents you from gradually outsourcing the parts that make your content unique.
That’s a working setup. The banned list and source verification handle the 80% that matters most.
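The structural constraints are also easy to verify mechanically before publishing. A rough sketch of a paragraph and sentence-length check, using a naive sentence split:

```python
import re

MAX_WORDS = 30      # sentence cap from the constraints above
MAX_SENTENCES = 3   # per-paragraph cap

def structure_report(draft: str) -> list[str]:
    """Flag paragraphs that break the structural constraints."""
    problems = []
    for i, para in enumerate(draft.split("\n\n"), 1):
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", para.strip()) if s]
        if len(sentences) > MAX_SENTENCES:
            problems.append(f"paragraph {i}: {len(sentences)} sentences")
        for s in sentences:
            if len(s.split()) > MAX_WORDS:
                problems.append(f"paragraph {i}: {len(s.split())}-word sentence")
    return problems

draft = "One short paragraph.\n\nFirst. Second. Third. Fourth."
print(structure_report(draft))
```

A clean report doesn't mean the draft is good. It means the draft cleared the floor, and the human pass can focus on substance.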
The Results So Far
Since putting this system into production:
- Brief-to-published time dropped from 6 hours to 90 minutes per article
- Every article carries 3-5 verified external sources with URLs
- 224 ChatGPT citations on a brand-new domain in 90 days
- The AI draft is good enough to publish as-is. The human pass adds value on top instead of fixing problems underneath
The system isn’t magic. Every article still gets a full human read. I still add data, commentary, and judgment by hand. The AI handles the parts that benefit from speed and consistency: structure, research scaffolding, source verification, schema markup. The human handles the parts that benefit from experience: voice, data, and the “does this pass the smell test” filter.
The internet doesn’t need more content. It needs more content that was worth writing. A well-configured CLAUDE.md is how you make AI help with that instead of adding to the noise.
I’m Vytas Dargis, founder of CC for SEO and AI for Marketing Ops. Subscribe for weekly breakdowns of SEO workflows automated with Claude Code.




