
ESL Detection Defense

Your writing was not the problem.

Why AI detectors wrongly flag ESL student writing and the complete defense strategy: evidence, appeal template, and institutional rights.

Riley Quinn, Head of Content at HumanLike
Updated April 5, 2026 · 24 min read
[Image: Students studying together on a university campus lawn]


Picture this. You are a graduate student from Chengdu, three years into your PhD at a Canadian university. Your English is not perfect and you know it. But you have worked harder on this language than on almost anything else in your life. Every assignment, you read it over four times. You keep a vocabulary notebook. You know what a subordinate clause is. You know when to use "which" versus "that." You check your articles because in Mandarin there are none and the rules still trip you up sometimes.

You spend six days on a 2,500-word literature review. You are not using AI. The thought doesn't even cross your mind. You write a draft, you revise it, you read it aloud to check the flow, you visit the writing center once, and you submit it feeling reasonably good about the work.

Three days later, your professor sends an email asking you to come in and discuss your submission. When you sit down, she shows you a screenshot. A detection tool has flagged your paper at 87% AI probability. She is not accusing you outright. But the implication is sitting there in the room between you.

⚠️ This Is a Systemic Problem, Not a Personal Failure

If an AI detector flags your writing as AI-generated and you are a non-native English speaker, the most likely explanation is not that you cheated. It is that the detector is poorly calibrated for how ESL writers actually write.

This experience is not unique. It is happening to students from India, South Korea, Saudi Arabia, Brazil, Poland, Nigeria, and dozens of other countries at universities across the United States, Canada, the UK, and Australia. The pattern is consistent: ESL students who have worked hard to produce correct, formal, structured academic English are being flagged by tools that mistake their competence for artificial generation.

This guide is specifically for you. You will come out of it knowing exactly what to say, what evidence to collect, what the research shows, and how to build a case that is hard for any reasonable institution to ignore.

17-61%: False positive rate on authentic TOEFL essays written pre-ChatGPT (Liang et al., Stanford HAI, Patterns, Cell Press, 2023)
<2%: False positive rate for the same tools on native English essays (same study, native English college student baseline)

Why ESL Writing Looks Like AI Output to a Machine

To understand the problem, you need to understand how AI detectors actually work. Most of the major tools — Turnitin's AI detection, GPTZero, Copyleaks — are doing the same thing: they analyze statistical patterns in your text and compare them to patterns they associate with AI-generated content. The two main signals they use are called perplexity and burstiness.

Perplexity is roughly a measure of how predictable each word choice is. A language model tends to pick high-probability word sequences because that is what it is trained to do. Low perplexity means the words follow predictable patterns. High perplexity means the text has surprising word choices.
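To make "predictable" concrete, here is a deliberately crude sketch of the idea. This is a toy unigram frequency model of my own construction — nothing like the transformer-based scoring a commercial detector uses — but it shows why text built from the highest-frequency academic words scores as more predictable:

```python
import math
from collections import Counter

def unigram_perplexity(text: str, reference_corpus: str) -> float:
    """Average 'surprise' per word against a reference corpus (Laplace-smoothed)."""
    counts = Counter(reference_corpus.lower().split())
    vocab = len(counts) + 1          # +1 slot for unseen words
    total = sum(counts.values())
    words = text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_prob / len(words))

corpus = ("the results demonstrate a significant effect "
          "the findings indicate a significant trend "
          "we suggest this approach can establish a pattern")

predictable = "the results indicate a significant approach"
surprising = "my stubborn kettle whistled like bureaucracy"

# Standard academic phrasing scores LOWER perplexity than quirky phrasing
print(unigram_perplexity(predictable, corpus) < unigram_perplexity(surprising, corpus))  # → True
```

The corpus, example sentences, and smoothing scheme are all invented for illustration; the point is only the ordering, which is exactly the ordering detectors exploit.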

Burstiness refers to variation in sentence length and complexity. Human writers tend to write in bursts: very short sentences, then long and winding ones, then medium ones. AI tends to write in a more consistent, metronomic rhythm.
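Burstiness is even easier to approximate. A minimal sketch — my own toy proxy, not any vendor's actual metric — is simply the spread of sentence lengths:

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split on sentence-ending punctuation and count words per sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths: a crude stand-in for the variation detectors measure."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The method works well. The data shows a trend. The result is clear."
varied = "It failed. After months of careful calibration across three separate lab visits, nothing matched. Why?"

print(burstiness(uniform) < burstiness(varied))  # → True
```

Notice that the "varied" example demonstrates the pattern it measures: a two-word sentence, a long one, then a one-word question. Metronomic prose like the "uniform" example scores near zero.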

Here is the problem for ESL students specifically. Everything you have been trained to do in formal academic English writing makes your text look like AI output to these systems.

You Were Taught to Use Standard Vocabulary

In ESL instruction, teachers emphasize clear, standard, widely understood vocabulary. You learned words like "demonstrate," "indicate," "suggest," "significant," "approach," "establish." These are exactly the words that appear constantly in AI output, because they are the highest-probability choices in academic contexts. Your vocabulary is low-perplexity by design, not because a model generated it.

You Avoid Idioms and Colloquial Expressions

Native academic writers sometimes slip in a phrase like "cut to the chase" or an unexpected metaphor from their cultural context. ESL writers, especially those still building confidence, tend to stick to language they know is safe and standard. Idioms are risky because you might get them slightly wrong. So you play it safe. The result is text with fewer of the unexpected deviations that detectors use to identify authentic writing.

Your Grammar Is Often More Correct Than a Native Speaker's

This is perhaps the cruelest irony. Many ESL students produce grammatically cleaner text than native speakers because they have explicitly studied grammar rules rather than absorbing them through years of informal exposure. A native English speaker might write a perfectly natural sentence that bends a rule, uses a comma splice for stylistic effect, or employs a fragment for emphasis. These deviations are signals of human writing. ESL students who have drilled grammar since secondary school avoid them precisely because they are trying to get the language right.

Formal Training Produces Consistent Sentence Structures

Academic English courses teach patterns: topic sentence, supporting evidence, analysis, transition, repeat. This is good writing instruction. But it also produces a structural consistency that can look like the pattern-following behavior of a language model.

ℹ️ The Core Technical Problem

AI detectors were trained primarily on native English text. The features they treat as signals of human writing — informal phrasing, idiomatic expressions, structural variation — are features that ESL writers are specifically taught to minimize. This creates a systematic bias against non-native writers.

Everything you did right, by the standards of academic English instruction, is being used as evidence against you. This is not a fringe theory. It is documented in peer-reviewed research, and understanding that research is the foundation of your defense.


The Research

What the Research Actually Shows

You are not going to walk into a professor's office and say "I think this tool is biased." That is an assertion. What you need is evidence. And the evidence exists, published in respected venues, and it is damning.

The Stanford HAI Study: The Numbers That Matter

In 2023, researchers at Stanford's Human-Centered Artificial Intelligence institute published a paper by Weixin Liang and colleagues that has become the central piece of evidence in this entire debate. The study is called "GPT Detectors Are Biased Against Non-Native English Writers" and it was published in Patterns, a peer-reviewed Cell Press journal.

The methodology makes the findings hard to dismiss. The researchers used TOEFL essays written in 2009 — real essays, by real non-native English speakers, written four years before ChatGPT existed. There is no universe in which these essays could have been AI-generated.

They ran these pre-ChatGPT TOEFL essays through seven of the major AI detection tools available. Between 17% and 61% of authentic, human-written TOEFL essays were flagged as AI-generated. In one test with a particularly well-known commercial tool, more than half of the non-native essays were misclassified.

Stanford HAI 2023: False positive rates by writer population

  • Native English college students: <2% false positive rate. Near-perfect classification — the baseline the tools were tuned against.
  • ESL writers (TOEFL essays): 17-61% false positive rate. Across 7 major detectors on 2009 pre-ChatGPT essays.
  • Mandarin-background writers: highest observed rates. Formal instruction patterns overlap most with the AI signal.
  • AI text styled to mimic ESL: often passes detection. The tools flag "not native" rather than "not human."

Our findings raise questions about the appropriateness of deploying these tools in academic settings, particularly given the potential for disproportionate harm to non-native English writers.

Liang et al. · Patterns (Cell Press), 2023
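One way to see why a flag is a data point rather than proof is a quick Bayes' rule calculation. The numbers below are hypothetical classroom assumptions of mine — the prior and detection rate are invented; only the 30% false positive rate is drawn from inside the study's 17-61% range:

```python
def posterior_ai(prior_ai: float, tpr: float, fpr: float) -> float:
    """P(paper is AI-written | detector flagged it), by Bayes' rule."""
    p_flag = prior_ai * tpr + (1 - prior_ai) * fpr
    return prior_ai * tpr / p_flag

# Hypothetical: 10% of papers actually use AI, the detector catches 90% of those,
# and it falsely flags 30% of authentic ESL papers (within the 17-61% range).
print(round(posterior_ai(0.10, 0.90, 0.30), 2))  # → 0.25
```

Under these assumptions, three out of four flagged ESL papers are authentic. The same arithmetic with the <2% native-writer false positive rate yields a much higher posterior, which is precisely the disparity at issue.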

The Demographic Breakdown

The false positive rates were not uniform across all ESL writers. Students whose first language was Chinese showed particularly high rates, likely because features of formal academic Mandarin-influenced English — precise vocabulary, structured argumentation, limited idiomatic variation — overlap substantially with the features detectors associate with AI output. Students from Arabic-speaking, Korean, and other East Asian backgrounds similarly showed elevated rates.

If your first language is Chinese, Korean, Arabic, or another language where the transfer patterns into formal academic English produce the specific features that detectors flag, you can invoke this specific research dimension in your defense.

University of Maryland and Other Published Work

The Stanford study is not alone. Researchers at the University of Maryland have published on the limitations of AI detection in diverse student populations, noting that the corpora used to train detection models are predominantly composed of native English text. A 2023 paper in the Journal of Academic Ethics by Perkins and colleagues found consistent evidence of disparate impact on international students across multiple countries.

A particularly important finding: in several studies, AI text designed to look like ESL writing passes detection more easily than authentic ESL writing. The tools are not reliably distinguishing AI from human writing in non-native contexts. They are essentially flagging "this does not look like what native writers do" as a proxy for AI generation, which is not the same thing at all.

ℹ️ Key Citation for Your Appeal

Liang, W., et al. (2023). "GPT Detectors Are Biased Against Non-Native English Writers." Patterns, 4(7). Peer-reviewed, Cell Press journal, pre-ChatGPT essays. Print it. Bring it to every meeting.


This Bias Is Not Neutral — and It Is Not Victimless

AI detection tools that disproportionately flag ESL student writing are not just a technical inconvenience. They are creating a discriminatory pattern in academic enforcement, and that pattern falls along lines of national origin, language background, and in many cases, race.

Who Bears the Burden

The students most likely to be falsely flagged are overwhelmingly from China, India, South Korea, Saudi Arabia, the Gulf States, Japan, and various European countries where English is taught as a formal academic language rather than absorbed through cultural immersion. They are disproportionately part of already-marginalized populations navigating immigration systems, visa restrictions, language barriers, and cultural adjustment alongside their academic work.

When a false positive is mishandled, the consequences for an ESL student can be more severe than for a domestic student. Visa status can be affected by academic integrity violations. Scholarships can be revoked. Opportunities to transition from student visa to work visa can be jeopardized. The stakes are not symmetric.

The Intersection with Race and National Origin

The demographic pattern of who gets flagged is not random. It correlates strongly with race and national origin. In the United States, Title VI of the Civil Rights Act of 1964 prohibits discrimination based on national origin in programs receiving federal funding, which includes virtually all universities. A policy that disproportionately harms students of particular national origins without adequate procedural safeguards is potentially a Title VI issue.

You do not need to file a Title VI complaint to invoke this framework in your appeal, but being aware of it changes the institutional calculus for administrators who hear your case.

Compounding Existing Challenges

International students already navigate a more difficult academic environment. They are writing in their second or third language under the same time pressure as native speakers. They may be managing jet lag from time zones twelve hours away, food adjustment, social isolation, and financial stress from tuition that is often two to four times what domestic students pay.

Adding a false AI accusation on top of this load, and then requiring the student to navigate an appeal process conducted entirely in a language they are still developing, is genuinely unjust.

💡 Know Your Rights Before You Walk In

In most academic integrity proceedings, you have the right to respond in writing, to see the evidence against you, and to present counter-evidence. Ask your international student office about specific institutional procedures before any formal meeting.


Your Defense

Building Your Defense Case as an ESL Student

This is where things get practical. You need to think of yourself as building a legal-style case file, not just explaining yourself verbally. The goal is to produce a written record that makes it very difficult for a reasonable adjudicator to maintain the accusation.

Evidence ESL Students Have That Native Speakers Often Don't

Here is something counterintuitive: your history as a language learner is actually a major evidentiary asset. You have documentation of your English language journey that native speakers simply do not have, because they never had to formally track their language development.

  • Official English proficiency test scores (TOEFL, IELTS, CELPIP, PTE) showing your level at the time of application, and potentially multiple scores over time
  • Transcripts from ESL or EAP courses you took, either before entering your degree program or as part of it
  • Records of tutoring sessions with an English language center or private tutor
  • Graded papers from earlier in your academic program showing your writing style at earlier stages
  • Professor feedback on previous assignments that describes your writing voice, characteristic strengths, and areas you were working on
  • Email exchanges with professors, advisors, and classmates in English that demonstrate a consistent writing voice across informal and formal contexts
  • Language learning apps or tools with usage records showing active English language practice
  • Any certificates from English language courses taken at any institution

Using Your Language Learning Background as Your Strongest Evidence

In your appeal, you can specifically connect the dots: these are the writing patterns the detector flagged, this is the instruction I received that trained me to write this way, and here is published research showing that exactly these patterns appear in authentic ESL writing and are systematically misidentified by current detection tools. That chain of logic is not refutable with a single detection score.

Building a Paper Trail of Your Writing Voice

Collect email chains from your time at the institution. Emails to professors, advisors, classmates. These establish a consistent writing voice over time. Adjudicators who see emails written months apart, with the same person's characteristic phrasings and even characteristic error patterns, find it much harder to sustain a claim that the paper in question was not written by that person.

The Writing Center as a Resource and a Witness

If you visited your university's writing center for the paper in question, this is potentially very significant. Writing centers often keep appointment records. Some can provide a letter from the consultant who worked with you, confirming that you came in with a draft and they worked through it with you. That kind of corroborating testimony from a university staff member is highly credible in an appeal.

💡 Start Collecting Evidence Before the Meeting

Do not wait until a formal hearing is scheduled. As soon as you receive any indication that your writing has been flagged, start compiling your evidence file. The more documentation you have organized before any formal proceeding, the stronger your position in every subsequent conversation.


The ESL-Specific Appeal Strategy

A generic "I didn't use AI" defense is weak. Anyone can say that. Your appeal needs to be built specifically around who you are, what your language background is, and what the research shows about how that background correlates with false positive rates.

How to Frame Your Appeal Around the Bias Research

The opening frame of your appeal should establish two facts before you ever address the specific accusation: first, that AI detection tools have documented and peer-reviewed false positive rates for non-native English speakers, and second, that you are a non-native English speaker. Once those two facts are on the table, the detection score is no longer proof of anything. It is a data point with a known reliability problem.

Language to Use When You Are a Non-Native Speaker

In your written appeal, be clear and direct about your language background early in the document:

"I am a first-language Mandarin speaker who has been studying academic English for eight years. My formal English language training specifically emphasized the vocabulary range, grammatical precision, and sentence structure patterns that peer-reviewed research has identified as commonly misclassified by AI detection tools."

This immediately contextualizes everything that follows. Avoid hedged, apologetic language. Statements like "I am sorry to cause any trouble" position you as someone making a request rather than someone asserting a right. You are asserting a right to fair treatment under the institution's own academic integrity procedures.

Requesting Comparison to Known ESL Writing Samples

One of the most powerful moves in your appeal is to request that the institution compare your writing to known authentic ESL writing samples from your language background:

"I request that my writing be evaluated against authentic TOEFL writing samples from Mandarin-speaking test-takers, or against a corpus of verified non-AI academic writing from international students, to determine whether my text is more consistent with ESL writing patterns or with AI-generated text."

Most institutions cannot actually do this comparison, and being unable to perform the requested analysis is itself evidence that their detection process is inadequate.

Asking About Policy for International Students

Formally ask the institution, in writing, whether they have a specific policy for applying AI detection evidence in cases involving non-native English speakers. Ask whether the academic integrity office has received training on the differential false positive rates documented in peer-reviewed research. Keep records of every response, including non-responses.


Institutional Resources That Exist to Help You

One of the biggest mistakes ESL students make when facing a false positive accusation is trying to handle it completely alone. You do not have to.

The International Student Office

Your first stop. They have likely seen academic integrity cases involving international students before. They can sometimes serve as advocates or at minimum provide written documentation of your language background that carries institutional credibility. Contact them the same day you receive any notification about a potential academic integrity issue.

The Writing Center

Both a resource and a potential witness. Ask a consultant to provide a written assessment of whether your writing style is consistent with ESL patterns from your language background. A writing center consultant who can say in writing that your submission is consistent with advanced Mandarin-speaking academic writers is genuinely valuable evidence.

ESL and EAP Program Directors

If you have taken any ESL or English for Academic Purposes courses, the faculty who taught those courses can be excellent advocates. A letter from an ESL faculty member drawing on their expertise in second language acquisition is a much more authoritative document than your own self-report.

The Student Ombudsperson

Most universities have a student ombudsperson office that is independent of academic departments and administration. Their job is specifically to help students navigate institutional processes fairly. They can advise you on your rights, review your appeal before you submit it, and sometimes intervene if they believe the process is being applied unfairly. Visit them early, not as a last resort.

Student Legal Services

Many universities provide free or low-cost legal advice. This is particularly relevant if the case is serious, or if you believe your visa status could be affected. Even if you never pursue formal legal action, having a lawyer review your case strengthens your negotiating position at every stage.

💡 Build Your Support Team Early

Contact the international student office, the student ombudsperson, and your writing center on the same day you receive any formal notification. Do not wait to see how the informal conversation goes first.


The Process

Step-by-Step: Your Complete Defense Process

Step 1: Pause and Document Immediately

The moment you receive any communication suggesting your writing has been flagged, do not respond immediately. Screenshot or save every piece of communication. Note the date and time. Write down everything you remember about how you wrote the paper: when you started, what sources you used, what the process looked like. This contemporaneous record is your baseline.

Step 2: Read Your Institution's Academic Integrity Policy in Full

Before any meeting, any reply, any formal action, read the actual policy document. Know whether the current stage is formal or informal. Know your rights to see evidence, respond in writing, and appeal decisions. Academic integrity offices sometimes make procedural errors that can be grounds for appeal later — but only if you knew the procedures well enough to notice the error.

Step 3: Contact the International Student Office the Same Day

Email or call the international student office immediately. Ask what support they offer for international students in academic integrity proceedings. Ask whether they are aware of the differential false positive rates for ESL students. Record their response. If they offer to accompany you to meetings or provide written support, accept immediately.

Step 4: Visit or Contact the Student Ombudsperson

Book an appointment before any formal meeting with the academic integrity office. Ask them to review any written appeal you draft before you submit it. Their involvement signals to the institution that you are taking your procedural rights seriously.

Step 5: Gather All Evidence in One Place

Create a physical or digital folder with everything relevant: TOEFL or IELTS scores, ESL/EAP course transcripts, writing center appointment records, previous graded papers, email chains in English, draft versions of the paper if you saved them, browser history or research notes, and citations connected to the sources you accessed.

Step 6: Obtain the Full Detection Report

Request the complete detection report. Not just the headline percentage. Ask for the specific tool used, the specific sections flagged, the confidence range, and the methodology. If the institution is using Turnitin, print Turnitin's own acknowledgments that the tool is not definitive evidence and recommends human review.

Step 7: Print and Annotate the Stanford HAI Study

Download the Liang et al. 2023 paper from Patterns. Print the abstract, methods, and results. Highlight the 17-61% false positive rate and the statement about deployment appropriateness. Write a one-paragraph summary that connects the research specifically to your situation.

Step 8: Draft Your Written Appeal

Five sections: (1) your language background and academic history in English, (2) the published research on ESL false positive rates with citation, (3) specific evidence that you wrote the paper, (4) the specific features that likely triggered the detector and why those features are consistent with authenticated ESL writing from your language background, (5) what you are requesting — a finding of no violation, or at minimum a secondary review by a qualified linguistics specialist.

Step 9: Request Secondary Review by a Qualified Reviewer

Request in writing that your writing be reviewed by someone with expertise in second language acquisition or ESL writing assessment, not just by the academic integrity officer. If the request is denied, the denial itself becomes part of a further appeal.

Step 10: Attend Any Formal Meeting With Documentation in Hand

Bring a printed copy of your evidence file. Bring a copy of the Stanford study. Bring printed copies of any written statement for everyone in the room. Bring a trusted person if procedures allow it — an international student advisor, ombudsperson, or student representative. Take notes. Do not speak from memory alone.

Step 11: Follow Every Meeting with a Written Summary

Within 24 hours of any meeting, send an email summarizing what was discussed and what next steps were agreed. "Thank you for meeting with me today. To confirm my understanding: [summary]. Please let me know if I have misunderstood anything." This creates a paper trail.

Step 12: Escalate Formally If the Initial Process Does Not Resolve It

If the initial process does not produce a finding of no violation and you believe the process was flawed, escalate. Most universities have an appeal level above the initial academic integrity committee. At this stage, having a student legal services advisor review your case is worth the time.


Real Scenarios: Different Nationalities, Different Disciplines

Scenario 1: Wei, Chinese PhD Student in Engineering

Wei is a third-year PhD student from Chengdu in a mechanical engineering program at a Canadian university. His written English is strong but formal. He does not use idioms. His sentences are well-constructed and consistent. For a graduate seminar, he submits a literature review on thermal management systems. The professor runs it through Turnitin AI detection and gets a 79% AI probability score.

Wei has IELTS records showing seven years of formal English testing. Transcripts from an EAP course in his first semester. Writing center appointment records for this paper. Research papers saved in his reference manager with access timestamps. Email chains with his supervisor spanning two years that show his consistent writing voice.

His appeal strategy: lead with the Stanford HAI study, present his language learning history, provide the writing center records, and request that the specific linguistic features the tool flagged be compared to the known patterns of formal academic English produced by advanced Mandarin-speaking writers. His supervisor, who knows his writing well, provides a supporting statement. The case is resolved in Wei's favor at the informal stage.

Scenario 2: Priya, Indian Master's Student in Public Health

Priya is from Mumbai and is doing a master's degree in public health in the United States. Her first language is Marathi, but she has been educated in English since primary school. Her written English is clean, academic, and well-structured. She scores 82% AI probability on GPTZero.

The complicating factor: because she has been educated in English since childhood, her professor questions whether the ESL defense applies. Priya's response is precise: formal English instruction in India emphasizes exactly the patterns that US-trained detectors misclassify. Her English was not absorbed through cultural immersion. It was taught through explicit grammar instruction, standardized test preparation, and formal academic writing courses.

She obtains a letter from an ESL faculty member who specializes in South Asian English varieties, explaining the specific writing patterns characteristic of Indian academic English and their overlap with AI detection signals. The case proceeds to the academic integrity committee. The committee finds in her favor and recommends the department review its policy.

Scenario 3: Jae-won, Korean Undergraduate in Economics

Jae-won is a second-year economics student from Seoul studying in the UK. His English has improved substantially over his two years, and his writing reflects that: precise, well-organized, grammatically correct, but it lacks the informal variation and idiomatic flavor of his British classmates' writing. A term paper on monetary policy gets flagged at 71% AI probability.

Jae-won's specific challenge: he does not have strong documentation of his writing process for this particular paper. He wrote it on his personal laptop without cloud saves, did not visit the writing center, and his only evidence is his own account. His strategy focuses on research evidence and his writing history across other assignments. He gathers every piece of graded work from two years, showing a clear development arc.

The key move: he requests a viva voce examination — a verbal discussion of the paper's content, which his institution allows as an optional component. Jae-won performs well, demonstrating detailed knowledge of the arguments, the sources, and the reasoning. Writing history + research evidence + verbal examination = no-violation finding.

Scenario 4: Fatima, Saudi Arabian Doctoral Student in Education

Fatima is completing a doctorate in educational policy at an Australian university. She has a strong academic background, has published one paper in a peer-reviewed journal, and has been working with her supervisor for three years. A chapter of her dissertation is flagged.

Fatima has the most documentation of any of these cases: three years of email correspondence with her supervisor, multiple draft versions of the chapter saved over time, supervisor feedback on drafts, conference papers she has presented based on earlier versions, and her own audio recordings of herself reading drafts aloud as a self-correction tool.

Her supervisor takes an active role in her defense, providing a detailed statement that he has read multiple drafts over six months and that the chapter's development has been entirely visible to him. The case is closed without formal proceedings. Fatima uses the experience to write a memo to her department's graduate committee recommending specific procedural safeguards, which the committee adopts as interim guidance.


Common Mistakes

Common Mistakes ESL Students Make When Appealing

⚠️ Mistake #1: Trying to Make the Case Verbally Without Documentation

Verbal explanations in a meeting are forgotten, misremembered, and carry no procedural weight. Always follow up every conversation with a written summary email. This creates a record and gives you control over the framing.

⚠️ Mistake #2: Apologizing for Things That Are Not Your Fault

In a Western institutional context, excessive apology is often misread as implicit admission of guilt. You can be polite and respectful without being apologetic. "I understand this process is important for academic integrity" is not an apology. "I am sorry to be causing this trouble" suggests you are the source of the problem — which you are not.

⚠️ Mistake #3: Not Bringing the Research

The Stanford study is publicly available and directly addresses your situation. Print it. Bring it. Quote from it. Do not assume your professor has read it. Many have not. The moment you put a peer-reviewed paper on the table that says "AI detectors have false positive rates of 17-61% for non-native English writers," the evidentiary landscape shifts.

⚠️ Mistake #4: Waiting Too Long to Escalate

Many students try to resolve everything informally first. Sometimes this works. But if your professor is not receptive, waiting creates problems: evidence becomes harder to gather, formal appeal timelines may pass, informal positions harden.

⚠️ Mistake #5: Not Asking for the Detection Report Directly

You are entitled to see the specific evidence. The detection score alone is not enough. Ask for the full report: flagged sections, specific features, confidence intervals. A 78% AI probability with a stated margin of error of ±25% is not reliable evidence.

⚠️Mistake #6: Accepting Informal Resolution That Doesn't Clear Your Record

Some professors, faced with a credible appeal, will offer to give you a grade and drop the matter informally. Ask explicitly: "If I accept this outcome, will there be any record in my file of an academic integrity question related to this assignment?" A formal finding of no violation is cleaner than informal resolution that leaves ambiguity.

⚠️Mistake #7: Treating This as a Relationship Problem, Not an Evidence Problem

Character arguments are weak in academic integrity proceedings. Evidence arguments are strong. You are not trying to make your professor like you. You are demonstrating, with documentable evidence, that the detection tool's output is unreliable in your specific case.


Writing Strategies That Reduce Your False Positive Risk

Once the immediate accusation is handled, it is worth thinking about writing practices that preserve your academic quality while reducing the statistical patterns that trigger false positives.

Vary Your Sentence Length Intentionally

The single most effective change. If most of your sentences fall in the 18-25 word range, that is a burstiness problem. Deliberately mix in very short sentences — two to five words — after complex ones. And occasionally write a sentence that runs longer than you normally would, one that wanders through a full chain of connected ideas before it reaches a period. This variation is natural in authentic human writing and almost absent in AI output.
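If you want a rough self-check of your own sentence-length variation, a few lines of Python can show the spread. This is an illustrative sketch only: the naive punctuation-based splitter and the idea that a low standard deviation signals uniform sentence lengths are simplifying assumptions, not a description of how any particular detector scores text.

```python
import re
import statistics

def sentence_length_stats(text: str) -> dict:
    """Rough burstiness check: split text into sentences and
    report the spread of sentence lengths in words."""
    # Naive splitter on sentence-ending punctuation -- good enough
    # for a quick self-check, not for abbreviations like "e.g." or "Dr."
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentences": len(lengths),
        "mean_words": statistics.mean(lengths),
        "stdev_words": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "shortest": min(lengths),
        "longest": max(lengths),
    }

stats = sentence_length_stats(
    "Detectors reward variation. A short sentence helps. "
    "Then a longer one that wanders through several connected ideas "
    "before finally reaching its period restores the rhythm."
)
print(stats)
```

A passage where most sentences cluster near the mean (small standard deviation) is the uniform pattern described above; deliberately mixing very short and very long sentences widens the spread.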

Use First-Person Voice Where Your Discipline Allows

Many academic disciplines now permit or even encourage first-person writing. Phrases like "I argue that," "in my reading of this data," and "when I examined the secondary literature, I noticed" are both good scholarly practice and strong signals of human writing. First-person perspective with genuine intellectual personality is one of the hardest things to fake with AI.

Include Specific Personal and Contextual Details

Where your assignment allows, include specific details from your own experience or research context. A reference to a conversation with a professor about a source, the specific database you used, a detail about why a particular theoretical framework resonates with your own research question. These contextual anchors are signals of genuine engagement that are very difficult to produce without actually doing the work.

Let Yourself Have an Opinion

Many ESL students write in an overly hedged, neutral style. In many fields, genuine intellectual engagement means taking a position and arguing for it. Writing that takes a clear stance, defends it, acknowledges counterarguments, and responds to them shows intellectual personality. It also produces naturally varied sentence structures because you are actually thinking through an argument.

💡Run Your Text Before You Submit

Tools like HumanLike.pro include a built-in detector that shows you exactly what scores you are getting and which parts of your text are flagging. Running your paper before submission lets you identify problem passages and revise them naturally, rather than being surprised after the fact. Think of it as a pre-flight check for your writing.


What to Do Right Now

If you are currently facing a false positive accusation, here is your immediate action list. Today, not tomorrow.

  • Save every piece of communication you have received about this issue, with dates
  • Read your institution's academic integrity policy from start to finish tonight
  • Email the international student office explaining the situation and asking for support
  • Book an appointment with the student ombudsperson
  • Search for and save the Liang et al. 2023 Stanford HAI study on AI detection bias
  • Start compiling your evidence file with everything that establishes your writing history
  • Do not respond to any formal communication without having read the full policy first
  • Do not accept any informal resolution that leaves an unresolved mark on your record without understanding what you are accepting

The Core Message

You have every right to be in the academic institution you are in. You earned it. You are doing the work. The writing you are producing in your second language is a real intellectual achievement, not a liability. The tools being used to evaluate your work were not built with you in mind — and that is a failure of those tools, not of you. The research is on your side. The institutional support systems exist for exactly this situation. Use them.

The goal is not just to get through this one accusation. It is to come out the other side with a clear record, with a documented case that contributes to the institutional conversation about detection tool fairness, and with the confidence to know that you understand your rights and you know how to defend them.

You can do this.

Frequently Asked Questions

Can an AI detector tell the difference between ESL writing and AI-generated writing?
No, current AI detectors cannot reliably make this distinction. The peer-reviewed research, most notably the 2023 Stanford HAI study by Liang et al., shows that these tools produce false positive rates of 17-61% for authentic non-native English writing. The statistical features the tools use to identify AI output overlap substantially with the features produced by ESL writers trained in formal academic English.
What is the strongest single piece of evidence I can present in an appeal?
For most ESL students, the combination of the Stanford HAI study and your own language learning documentation is the strongest possible pair of evidence. The study establishes that the tool has a documented high false positive rate for people with your linguistic background. Your TOEFL or IELTS records, ESL course transcripts, and writing center records establish that you belong to exactly that demographic. Together, they shift the burden of proof.
My professor seems convinced I used AI. What do I do if they won't listen?
Stop trying to convince them through conversation and move to the formal process. Request the formal academic integrity review in writing. The informal professor conversation is not where this will be resolved fairly if they are already convinced. The formal process has procedural protections, an opportunity to present written evidence, and typically involves reviewers other than your professor.
I did not save any drafts of my paper. Can I still defend myself?
Yes. Drafts are ideal evidence but they are not the only evidence. You can still build a strong case using your language learning history, previous graded work, email records that establish your writing voice, any research notes from when you were writing, and the broader research evidence about ESL false positive rates. A verbal examination of the paper's content can also be very powerful.
Does the ESL defense work if I was born abroad but have been in the US for many years?
It depends on your actual writing development history. The relevant question is not where you were born but how you learned to write academic English. If you learned through formal ESL instruction, standardized test preparation, or academic programs where English was a subject rather than the ambient medium, your writing patterns may still be shaped by that formal training even after many years.
Can I use the Title VI argument at a US university?
You can reference it as context without filing a formal Title VI complaint. Noting that AI detection tools have a documented disparate impact on students of specific national origins, and that applying such a tool without adequate procedural safeguards may implicate the institution's obligations under Title VI, is not the same as filing a complaint. It is raising a concern that reasonable administrators take seriously.
What if my academic integrity office refuses to consider the bias research?
Escalate. Most universities have an appeal level above the initial academic integrity committee, often an academic appeals board or a dean's review. The refusal to consider relevant peer-reviewed evidence about the reliability of the tool being used against you is itself a procedural ground for appeal. Document every instance where you presented the research and it was not engaged with.
How do I prevent this from happening in future assignments?
Enable version history on all documents so you have a time-stamped record of your writing process. Vary your sentence lengths intentionally. Use first-person voice where your discipline allows. Include specific contextual details that anchor your writing in genuine engagement. Visit your writing center and keep appointment records. Use a pre-submission detection check like HumanLike.pro to see your score before you submit.

Related Tools

See How Your Writing Scores Before You Submit

Run your text through HumanLike's built-in detector to understand exactly what flags. Prepare your defense before anyone else raises it.

Riley Quinn
Head of Content at HumanLike

Writing about AI humanization, detection accuracy, content strategy, and the future of human-AI collaboration at HumanLike.
