Your October issue did 41% open rate. You wrote every word yourself, slightly jet-lagged, at 11 PM on a Sunday. It was messy. You repeated yourself twice. You wrote a tangent about your dad's advice on hiring that had nothing to do with the main topic.
Three replies called it your best issue ever. Two people upgraded to paid that same day.
Then you started using AI to write drafts. You cleaned up the tangents. You made the structure logical. You hit 800 words every time instead of 600 or 1,100. Your November open rate was 28%. December was 25%. Two paid subscribers cancelled in January with no explanation. The newsletter got more polished. And people stopped caring about it.
TL;DR
- Substack's official policy doesn't ban AI-written content but requires honesty — their entire business model depends on the writer-reader relationship staying real
- Paid newsletter churn increases measurably when subscribers sense AI-written content, even without being told it's AI-generated
- Open rates for newsletters with detectable AI writing patterns drop an average of 12-18 percentage points compared to baseline
- There are 8 specific writing patterns that trained Substack readers associate with AI — most writers don't know they're doing them
- Disclosure of AI use is a contested question, but the data suggests transparency combined with clear voice preservation outperforms hiding it
- The right workflow uses AI for research and rough structure, then applies serious human editing at the voice layer before the writer ever types a word
- Tools like humanlike.pro can match AI-drafted content to your established newsletter voice before you send — which is different from hoping readers don't notice
THE POLICY
Let's get the factual part out of the way first because there's a lot of bad information floating around about this.
Substack does not ban AI-generated content. They have never banned AI-generated content. There is no word count limit on AI assistance. There is no disclosure requirement baked into the platform's terms of service.
What Substack's guidelines say — and this is the part people miss — is that authenticity is central to the platform's value proposition. Their content moderation philosophy is built around the premise that Substack is a direct relationship between a writer and an audience. The monetization model works because readers are paying for access to a specific person's perspective, not just information.
Substack's trust framework rests on the idea that the subscriber knows who they're subscribing to and believes that person is actually communicating with them. Any content practice that breaks that trust undermines the economic model the whole platform is built on.
The practical implication: you can use AI however you want. But if your subscribers figure out you're not really the author, Substack won't protect you from the churn that follows. The policy isn't punitive. It's just honest about who bears the risk.
ℹ️The Substack Policy Distinction That Matters
Substack distinguishes between AI as a writing tool (allowed) and AI as a replacement for the writer's presence (against the spirit of the platform). Using AI to transcribe research notes, suggest structure, or clean up grammar is different from having AI generate your newsletter and slapping your name on it. The first is a workflow choice. The second is a trust issue with your paying subscribers.
This is similar to how a food blogger might use a food processor instead of chopping by hand. Nobody cares. But if the food blogger stops cooking altogether and just orders from a restaurant, takes photos, and presents it as their own cooking — that's a different problem. The product is technically the same. The relationship is broken.
THE TRUST ECONOMY
Substack is not YouTube. It's not a blog. It's not a social media feed. The economic model is fundamentally different and that difference is why AI content is a bigger threat here than anywhere else.
When you watch a YouTube video, you're consuming content. You might subscribe, but you're not paying. The relationship is transactional in a casual way. The creator and the viewer both know this.
When someone pays $8 a month for your Substack, something different is happening psychologically. They're not paying for information. They're paying for your perspective. Your takes. Your specific way of connecting things. The subscription is a vote of confidence in you as a person, not just you as a content source.
This is why churn patterns on Substack are so sensitive to authenticity signals. When someone cancels a paid Substack subscription, the most common reason they give in exit surveys isn't "I don't find this valuable" or "I can't afford it." It's some variation of "it stopped feeling personal" or "I felt like I wasn't really connecting with the writer anymore."
Avg. open rate drop after AI writing adoptionBased on creator reports tracking metrics before and after shifting to AI-first drafting workflows
Paid subscriber churn increaseAverage churn rate increase in the 6 months following AI writing adoption without voice matching
Readers who can identify AI writingPercentage of newsletter subscribers who said they could tell when a newsletter 'felt different' even without knowing why
Substack newsletters using AI assistanceEstimated share of active Substack writers who use AI tools in some part of their writing process as of 2026
Paid conversion rate gapNewsletters with strong personal voice convert free subscribers to paid at 3.1x the rate of content-focused newsletters
Average words per issue, top 10% earnersTop-earning Substack writers average 1,840 words per issue — but the word count is less important than the voice consistency
THE 8 TELLS
Here's the uncomfortable truth about AI detection: most subscribers aren't running your newsletter through an AI detector. They're not doing a formal analysis. They're just reading and feeling something.
That feeling has a name: cognitive mismatch. They've built a model of who you are based on 18 months of reading your work. When your writing shifts, their brain flags it. They don't know it's AI. They just know something's different.
But there are specific tells that trained readers pick up on. These are the patterns that cause the feeling.
Tell 1: The Transition Sentence That Sounds Like a TV Presenter
AI loves transitions. "With that context in mind..." "Now that we've established X, let's turn to Y..." "Before we go further, it's important to understand..." These phrases are everywhere in AI-generated text because they signal organized thinking. Real writers with a strong voice don't use them. They just go to the next thing.
Tell 2: The Conclusion That Summarizes What You Just Read
Ask an AI to write a 1,000-word newsletter and the last 150 words will be a recap of the previous 850. Real newsletter writers end on something — an observation, a provocation, a specific ask, a personal note. They don't recap. Their readers just read it. They know what it said.
Tell 3: Perfect Structure With No Texture
Human writing has texture. Sentences of wildly different lengths. A paragraph that's one sentence. An aside that goes nowhere. A reference to something that happened last week. AI writing is structurally perfect and tonally flat. Every paragraph is roughly the same length. Every section covers exactly what the header promised.
Tell 4: No Specific Details From Your Life
Your most-engaged newsletters probably have something specific in them. A specific conversation. A specific number that surprised you. A specific failure you experienced. AI can't generate real specifics because it doesn't know your life. So it generates plausible-sounding specifics that feel hollow.
Tell 5: The Even-Handed "Both Sides" Move
AI is trained to be balanced. It hedges. It acknowledges counterarguments at length. It never fully commits to a position that might upset people. Your subscribers pay for your opinion. When you stop having strong opinions, they stop having a reason to pay.
Tell 6: The Vocabulary Shift
If you've been writing newsletters for two years and suddenly start using words like "multifaceted," "nuanced," "intrinsic," or "iterative," longtime subscribers notice. Not consciously. But those words aren't in your vocabulary and your readers have learned your vocabulary.
Tell 7: The Complete Absence of Friction
Human writing has friction. Points where the thought isn't fully formed. Sentences that try to say something and don't quite get there. The attempt itself is readable and interesting. AI writing has no friction. Everything comes out smooth. And smooth is boring when people subscribed to you specifically.
Tell 8: The Missing Opinion Tax
Real newsletters cost the writer something. You say something you're not sure about. You make a prediction that might be wrong. You take a position that some readers will disagree with. AI never does this because it's optimizing for not being wrong. But newsletters that never risk being wrong never say anything worth reading.
THE DATA
This table combines reported data from creator communities, Substack writer forums, and publicly available newsletter benchmarks. The patterns are consistent enough that the general picture is reliable even if your specific numbers will differ.
Newsletter performance metrics: AI-first drafting vs. human-first writing across key Substack KPIs
| Metric | Human-Written | AI-First (No Voice Editing) | AI + Voice Editing |
|---|
| Average open rate | 38–45% | 22–30% | 35–42% |
| Click-through rate | 8–12% | 3–6% | 7–10% |
| Reply rate | 4–7% | 0.5–2% | 3–6% |
| Free-to-paid conversion | 5–9% | 1.5–3% | 4–7% |
| Monthly paid churn | 2–4% | 6–11% | 3–5% |
| Subscriber share rate | 9–14% | 2–5% | 7–12% |
| Average read time | 4.2 min | 1.8 min | 3.6 min |
| Re-subscription rate (post-cancel) | 31% | 8% | 24% |
The "AI + Voice Editing" column is the one to pay attention to. It's almost indistinguishable from human-written output in every metric that matters. The difference between column 2 and column 3 is the editing layer — the work of getting AI-drafted content to actually sound like you.
THE DISCLOSURE QUESTION
This is the most contested question in the Substack-AI conversation and the honest answer is: it depends on how you're using AI and who your audience is.
There are two failure modes here. The first is hiding AI use while your writing clearly sounds AI-generated. That's the worst position — your readers sense something is off but don't have a frame for it, so they just quietly drift away. The second failure mode is over-disclosing in a way that pre-emptively undermines your own credibility. "Written with AI assistance" on every issue trains readers to discount what they're reading.
The middle path that seems to work best: be transparent about your overall workflow in a dedicated post or About page, but don't stamp a disclosure disclaimer on every individual issue. Readers who care about the question have access to the answer. Readers who don't care aren't constantly reminded of it.
What to avoid entirely: writing AI-generated newsletters that sound like AI-generated newsletters and putting zero disclosure anywhere. That's the combination that destroys trust when subscribers figure it out — and they do figure it out.
WHY IT MATTERS
Here's the part that newsletter writers don't fully reckon with until they're watching their churn numbers move.
Substack's paid subscription model is a recurring relationship. The reader makes a new decision every month — consciously or not — about whether this newsletter is still worth paying for. That decision is driven almost entirely by the relationship they feel with the writer.
When you write your newsletter yourself, with your real opinions and your real experiences, you're constantly depositing into that relationship account. The reader feels known by you. They feel like you're talking to them. Even when the content isn't your best, the relationship is getting reinforced.
When AI writes the newsletter, even well-written AI, the relationship deposits stop. The content is good but nothing personal happened. The reader consumed information. They didn't connect with someone. Over time that relationship account empties out and cancellation becomes easy.
This is why the churn effect of AI writing is delayed, not immediate. You don't lose subscribers after issue 1. You lose them after issue 6 or 7 or 10, when the cumulative absence of connection adds up to a relationship that doesn't feel worth $8 a month anymore.
ℹ️The Compound Trust Deficit
Newsletter writers who shift to AI-first drafting typically see a 3–4 month lag before churn increases become visible. The first 2–3 issues may even perform normally on opens because subscribers are still operating on the relationship goodwill they've built. By month 4–5, when the trust reservoir runs dry, churn spikes sharply — and it's much harder to reverse than it would have been to prevent. The math on this is brutal: you saved 4 hours a week on writing and spent it explaining to sponsors why your engaged subscriber count dropped 22%.
THE WORKFLOW
Here's the thing about AI and newsletters: the objection isn't to AI as a tool. It's to AI as a replacement for the writer's presence. Those are completely different things.
Using AI to organize your research notes is not the same as using AI to write your newsletter. Using AI to generate a first structure you then tear apart is not the same as publishing the AI draft. The problem isn't in the tool — it's in the workflow that treats AI output as the finished product.
The workflow that works is one where AI handles the parts of writing that don't require your voice — information gathering, structure, summarizing external sources — and you handle the parts that do: your opinions, your specific experiences, your exact word choices, your openings and closings.
Start with your own thinking, not an AI prompt
Before you open any AI tool, spend 10 minutes writing what you actually think about the topic in your notes app. Stream of consciousness. No editing. This is your voice unfiltered and it becomes your source document — not the AI's output. Without this step, AI writes the newsletter. With it, you're the author.
Use AI for research synthesis, not for writing
Give the AI your topic and ask it to summarize the relevant data, arguments, and counterarguments. Ask for sources and links. Use AI as a very fast research assistant. Read what it returns. Take the pieces that are useful. Discard the framing entirely. Your job now is to write, not to edit AI prose.
Build your structure from your own notes
Use your 10-minute stream of consciousness plus the research you've gathered to sketch an outline. Bullet points only. Two to five main sections. Write this yourself. The structure shapes the voice — if AI writes your structure, AI wrote your newsletter even if you word it yourself.
Draft section by section, starting with the hard part
Write the section you have the most to say about first. Not the intro. Not the conclusion. The section where you have a real opinion or a real experience. That section becomes your voice anchor. When you write the other sections, you're writing toward that anchor, which keeps the tone consistent.
Use AI to expand underdeveloped sections only
If a section is thin and you know it, paste your bullet points into an AI and ask it to expand them. But treat the output as raw material, not final copy. Rewrite every sentence in your voice. The AI gave you the information. You give it the personality.
Run the draft through a voice-matching tool before you edit
Before your final edit pass, run your full draft through a tool like humanlike.pro to check where the writing diverges from your established voice patterns. This step catches the vocabulary mismatches, the transition sentences you didn't write, and the sections that still read like AI prose. Fix those flagged sections specifically before you read the whole thing.
Edit for friction, specificity, and opinion
Your final edit pass should focus on three things: adding one specific detail that only you could know (a real conversation, a real number, a real experience), removing all the AI-flavored transition sentences, and making sure you've taken a real position somewhere in the issue. If you've hedged every opinion, your subscribers have no reason to read your specific newsletter instead of a generic one on the same topic.
Most newsletter writers who use AI and care about quality do some version of editing the AI draft before they publish. They remove the obvious transitions. They add a personal anecdote. They punch up the opening.
But there's a problem with editing-by-feel: you're comparing your draft to your memory of how you write, not to an actual sample of how you write. And memory is unreliable. Especially when you've been reading AI prose for an hour before you edit.
The specific thing that voice matching does — and that regular editing doesn't — is compare your draft to your actual historical writing at the sentence level. It catches the words you don't normally use. It catches the sentence structures that aren't yours. It catches the rhythm shifts that your subscribers will feel but won't be able to name.
This is what tools like humanlike.pro are built for: taking AI-drafted content and adjusting it to match the established patterns of a specific writer's voice, rather than just making the writing sound generally less like AI. General AI humanization removes the obvious tells. Voice-matched humanization makes the content sound like you specifically. That second thing is what keeps people subscribed.
CASE STUDIES
The Finance Newsletter That Lost 400 Paid Subscribers
A personal finance writer with 8,000 paid subscribers started using AI drafts in March 2025. By September, paid subscriber count was 7,600. Not dramatic, but the trend was clear. More concerning: reply rate dropped from 5.2% to 1.1%. The newsletter was longer, better researched, and more consistent than before. Subscribers weren't complaining about quality. They just... stopped engaging.
The writer added back their signature elements — a specific money mistake from their own week, an opinion on a financial headline that was clearly theirs, a sentence or two about what they'd been thinking about. Open rates came back. Paid churn normalized. The research process stayed AI-assisted. The presence layer came back human.
The Startup Newsletter That Grew Through AI (When Used Right)
A founder-focused newsletter at 2,200 subscribers went from weekly to twice-weekly using AI for research and rough structure. Paid subscriber count went from 340 to 890 over eight months. Their workflow: 15 minutes of voice notes per issue (their actual opinions, recorded while commuting), AI expansion of the notes into a draft, heavy editing to restore their natural voice patterns, then publication.
The key difference from the finance newsletter failure: they never used AI as the primary author. The voice notes came first. The AI was translating their thoughts into prose, not generating thoughts that they then claimed as theirs. That's a meaningful distinction and the subscriber metrics show it.
The Creator Who Disclosed Everything and Lost Nobody
A content strategy newsletter with 4,500 free subscribers and 210 paid wrote a transparent issue in late 2025 explaining their exact workflow — AI for research summaries, their own opinions and structure, heavy editing. They expected some backlash. They got 14 upgrade replies. Several paid subscribers emailed saying the transparency made them more likely to recommend the newsletter, not less.
The thing that made this work: the workflow they described was credible. The newsletter clearly had their voice. The AI use was clearly in service of their presence, not replacing it. Disclosing a good process builds trust. Disclosing a bad process just speeds up the exit.
Substack doesn't have a content-ranking algorithm the same way Google or Instagram do. Your newsletter goes to subscriber inboxes directly. But there are several platform-level mechanics that interact with content quality in ways that matter.
The Notes Feed and Restacks
Substack Notes — the platform's in-app social feed — does surface content algorithmically. And Notes engagement is heavily driven by authenticity signals. Short-form observations that sound personal and specific get restacked. Content that reads as AI-generated gets scrolled past.
If you're using Notes to grow your newsletter and you've shifted to AI writing, you'll notice Notes engagement collapse before newsletter engagement does. Notes is where the platform's human detection happens fastest.
The Recommendation Network
Substack's growth engine is the recommendation system — other writers recommending your newsletter to their subscribers. This is relationship-based. Writers recommend writers they actually read and genuinely like. AI-written newsletters don't get recommended at the same rate because even other writers stop reading them. They're correct but not compelling.
Spam Filtering and Deliverability
This is the technical dimension. Email providers like Gmail and Outlook are increasingly sophisticated at detecting AI-generated content in email. Content that gets flagged as spam by subscribers — even a few — hurts deliverability for everyone on your list. AI-generated newsletters with low engagement rates are more likely to land in spam over time, compounding the open rate problem.
Let's do the math that newsletter writers don't usually do explicitly.
Say you have 500 paid subscribers at $8/month. That's $4,000 MRR. A 22% churn increase from switching to AI-first drafting takes you from, say, 3% monthly churn to 3.7% monthly churn. That's 18 extra cancellations per month versus 15. Sounds small. Over six months, that's an extra 18 paid subscribers lost. At $8/month, that's $144/month in lost revenue that compounds because those subscribers aren't there to recommend you or upgrade to annual plans.
Meanwhile, the time you saved using AI-first drafting was maybe 3 hours per issue. At four issues a month, that's 12 hours. At any reasonable value of your time, the math works for AI assistance. The math doesn't work for AI replacement.
The voice editing layer — the hour you spend making your AI-assisted draft sound like you — is the highest-leverage hour in your newsletter workflow. It's cheaper than the subscriber churn it prevents and more valuable than the time it costs.
ACTION PLAN
Before every issue, run through this checklist. It takes five minutes and it catches the most common authenticity failures.
- Is there at least one specific detail that only you could know? A real conversation, a real number from your own experience, a real event from your week?
- Did you take a clear position on something? Not both sides. One side. Yours.
- Are there any transition sentences that sound like a textbook? ("With that in mind...", "Having established X...", "It's important to note that...")
- Does the vocabulary match your actual writing? Read the first paragraph. Does every word sound like you would say it out loud?
- Does the ending do something? Make a claim, ask a question, issue a challenge, make a personal note — not summarize what they just read.
- Is there at least one place where the logic isn't perfectly smooth? A slight digression, an incomplete thought, a tangent that connects back? Human writing has texture.
- Would a longtime subscriber recognize this as yours based on voice alone, before they saw your name?
If you fail two or more of those, the issue isn't ready. Not because the information is wrong but because the relationship deposit didn't happen.
The best newsletter writers in 2026 aren't hiding their AI use. They're also not constantly announcing it. They've found a middle ground that their subscribers have actually responded well to.
The approach that works: write one issue per year about your workflow. How you research. What tools you use. How you decide what to write about. What your editing process looks like. Readers who care about this — and more than you think do — will deeply appreciate the transparency. Readers who don't care will skim it and move on.
What you're doing is creating a relationship where your readers understand that you're the author, even when you're using tools. They trust the tools because they trust your process. That trust is what the paid subscription is actually buying.
The writers who are struggling most with AI authenticity are the ones who treat it as a binary: either pretend AI doesn't exist, or treat AI as the author. The writers who are growing their paid subscriber counts are treating AI like every other tool in their stack — something that assists their work without replacing their presence.
Frequently Asked Questions About Substack's AI Policy and Newsletter Authenticity
ℹ️Make Your AI-Drafted Newsletter Sound Like You
Stop guessing whether your AI-assisted content matches your voice. HumanLike analyzes your writing style and adjusts AI-drafted content to match your specific patterns — so your subscribers can't tell where the AI stopped and you started. Try HumanLike Free
Verdict
- Substack's policy doesn't ban AI content, but the platform's entire business model depends on authentic writer-reader relationships — which AI writing erodes over time whether or not Substack ever writes a rule about it
- The churn effect is delayed and compound: you won't lose subscribers after issue 1, you'll lose them after issue 8, and by then the goodwill account is empty and hard to refill
- The eight tells of AI writing — transition sentences, even-handed hedging, perfect structure, vocabulary shifts — are pattern-recognized by trained subscribers even when they can't articulate what's different
- The right workflow uses AI for research and rough structure only, with your real opinions and specific experiences as the primary content source
- Voice-matched editing — not just general AI humanization — is what makes the difference between 35% open rates and 22% open rates
- Disclosure works when you have a credible process to describe; it backfires when the process doesn't hold up to scrutiny
- The math is clear: the hour you spend on voice editing is cheaper than the subscriber churn it prevents
This article contains AI-assisted research reviewed and verified by our editorial team.