A complete guide for PhD students on AI policy at the candidacy exam stage: written quals, oral defenses, dissertation proposals, and how to protect yourself when institutional policy is unclear or nonexistent.
Steve Vance, Head of Content at HumanLike | Updated March 15, 2026 · 25 min read
PhD AI Policy
You're three years into your PhD program. You've survived coursework, passed your comprehensive reading list, and finally submitted your written qualifying exams. Two weeks later, your advisor emails you. Not to congratulate you. To ask you to come in and talk about your "writing process." You find out that someone on your committee ran your take-home written qual through an AI detector. The score came back at 74% AI-generated. You didn't use AI. You wrote every word over four days in the campus library. You have the drafts, the notes, the coffee receipts. None of that matters yet, because right now you're sitting in your advisor's office trying not to let your voice shake.
This scenario is playing out at research universities across North America, Europe, and Australia in 2026. And the PhD candidacy stage, specifically, is where the risk is highest and the protection is weakest. Because most doctoral programs don't have a coherent AI policy for candidacy exams. The university's blanket AI policy covers coursework. The dissertation has its own guidelines. But the written qual? The prospectus? The literature review you submitted as part of your candidacy portfolio? Those documents often exist in a policy vacuum.
That vacuum is not neutral territory. It's a trap. This guide is going to show you exactly where the traps are, what the consequences look like at the doctoral level, how to protect yourself before any accusation happens, and what to do if it already has.
TL;DR
Most PhD programs have no specific AI policy for candidacy exams, written quals, or dissertation proposals, creating serious ambiguity.
Faculty and committee members are increasingly running candidacy materials through AI detectors without telling students.
Academic misconduct at the doctoral level is career-ending and can cost international students their visa status.
International and ESL PhD students face disproportionate false positive risk because their careful, formal academic prose reads as "AI-like" to detection tools.
You can protect yourself by documenting your writing process before any accusation happens, not after.
Raising the AI policy question with your advisor proactively is almost always safer than waiting.
The Policy Gap at the Candidacy Stage
When ChatGPT became a household name in late 2022, universities scrambled to update their academic integrity policies. Most of them did a reasonable job of addressing coursework, undergraduate papers, and course-based assessments. What almost none of them did was think carefully about the candidacy exam stage in PhD programs.
The candidacy stage is structurally unusual. It's not coursework. It's not the dissertation. It exists in a hybrid space where you're demonstrating comprehensive mastery of your field while simultaneously beginning to develop an original research program. The written qualifying exam, the oral defense, and the dissertation proposal are all evaluated partly as knowledge tests and partly as indicators of scholarly potential. The rules that govern them are written by departments and committees, not central administrations.
That decentralization creates massive variation. In the same graduate school, two PhD students in different departments can face completely different AI expectations with zero official guidance about what applies to them.
What the Policy Landscape Actually Looks Like
How Different PhD Fields Typically Handle AI Policy at the Candidacy Stage (2026)

| Field / Program Type | Typical Written Qual Format | AI Policy Clarity | Detection Use by Committees | Risk Level |
| --- | --- | --- | --- | --- |
| Humanities (English, History, Philosophy) | Long-form take-home essays, 8-72 hours | Vague or absent | Moderate and increasing | High |
| Social Sciences (Sociology, Political Science) | Take-home essays + portfolio submissions | Partial (often borrowed from undergrad policy) | Low to moderate | Medium-High |
| STEM (Biology, Chemistry, Engineering) | Mostly oral exams; written component is often a research proposal | Low concern historically | Low but growing | Medium |
| Computational / Data Science | Technical problem sets + written literature review | Absent or conflicting | Moderate | High for literature review component |
| Education / Psychology | Portfolio-based with written synthesis papers | Often explicit but inconsistently applied | High | High |
| Business / Management (DBA, PhD) | Case analysis + conceptual essays | Explicit at many institutions | High | Very High |
| Law / SJD Programs | Research memoranda + written proposals | Typically strict, but poorly defined for AI | Moderate and growing | High |
The column that matters most is "AI Policy Clarity." For most PhD students, the honest answer is that no one in their department has thought carefully about this. The graduate handbook has a sentence or two. The qualifying exam committee has unspoken norms that no one has articulated to you. Your advisor has personal views that may or may not align with your co-advisor or your external committee member.
That's the environment you're writing in. And if you don't take steps to protect yourself inside that environment, you're depending entirely on institutional goodwill, which is not a reliable safety net when your career is what's at stake.
The Five High-Risk Documents at the Candidacy Stage
Not all your candidacy-related writing carries equal risk. These are the documents where AI detection is most likely to happen, and where an accusation would do the most damage.
1. Take-Home Written Qualifying Exams
This is the single highest-risk document type. You write them under time pressure, away from the classroom, with access to all your materials. From a committee member's perspective, take-home written quals are also the easiest to "suspect" because they allow AI use with no technical barrier.
Humanities departments in particular have moved toward longer take-home formats (24-72 hours) because they test synthesis, not recall. That same format is also where AI tools could, in theory, do the most work. Some committee members are scanning these documents specifically because the format creates opportunity. They're not necessarily accusing you of anything. They're running a check. And if that check returns a high number, you might not find out until the result is already influencing their evaluation.
2. Literature Reviews Submitted as Candidacy Portfolio Material
Many programs require you to submit a substantial literature review as part of your candidacy packet. This document is supposed to demonstrate your mastery of the field. It also happens to be the genre where AI-generated text most closely resembles human-written academic prose, because AI tools have been trained on millions of academic literature reviews.
The problem is compounded by the fact that well-written literature reviews sound similar to each other. Clear topic sentences, consistent citation integration, organized by theme or chronology, synthesizing multiple sources into coherent claims. If you've worked hard to write a clean, professional literature review, you may have inadvertently produced text that scores high on detection tools precisely because you've succeeded at the genre conventions.
3. Dissertation Proposal / Prospectus
Your prospectus is evaluated by your full committee, which means it's seen by more eyes than almost anything else you write during your PhD. It's also a document you typically labor over for months, revising multiple drafts in response to advisor feedback. That process can sometimes produce prose that, ironically, reads more "polished" and less idiosyncratic than your earlier, rougher writing.
Some committee members, particularly those unfamiliar with how much revision a good prospectus goes through, may flag well-edited prose as suspicious. The multi-draft revision process can actually increase your detection risk because it removes the rough edges that mark human-written text.
4. Grant and Fellowship Application Essays (if submitted as part of candidacy portfolios)
A growing number of programs require PhD students to submit a sample grant application or fellowship proposal as part of their qualifying portfolio. These documents are written in a particularly dense, structured format. The specificity of genre conventions, combined with the formal register, makes them strong candidates for false positives.
5. Written Components of Oral Exams
If your oral candidacy exam includes a written component, such as prepared responses, written outlines, or a written statement of research vision, those documents may also be scanned. The risk here is lower than take-home exams, but it's not zero. And because these documents are produced in close conjunction with your oral defense, any suspicion that arises can color how committee members evaluate your oral performance.
What "Academic Misconduct" Actually Means at the Doctoral Level
You've heard the phrase academic misconduct your whole academic life. In undergrad it meant a failed assignment or a zero on the exam. In grad school it meant academic probation. At the doctoral level, it means something different. The consequences are structured differently because the stakes of doctoral credentials are structured differently.
~23% - Programs with an explicit doctoral AI policy. Estimated share of PhD-granting departments in the US with written, AI-specific guidelines for candidacy-stage exams as of 2025.
41% - Detection tool use by faculty. Faculty at R1 institutions who report using at least one AI detection tool on graduate student submissions in the past 12 months, per a 2025 survey.
Up to 61% - False positive rate for non-native speakers. In controlled studies, essays by non-native English speakers were flagged as AI-generated at rates up to 61%, versus under 5% for native English writers.
1 in 8 - PhD students facing misconduct review. Estimated proportion of doctoral students who will face some form of academic integrity inquiry during their program.
~36% - International student enrollment in US doctoral programs. Share of doctoral degrees conferred at US institutions to international students, the population most exposed to AI detection false positive risk.
2.3 years - Average time to degree lost after a misconduct finding. Estimated average program delay for PhD students who survive a misconduct finding but are required to retake candidacy exams or revise candidacy materials under supervision.
The Career Consequences Are Not Recoverable
In most professions, you can survive an academic misconduct finding from ten years ago. The credential is in hand, the career is established, and the incident fades. That calculation doesn't apply in academia, because academic careers are built on reputation before the credential is awarded.
An academic misconduct finding during your PhD does the following: it becomes part of your official academic record, it is disclosed in academic reference letters when specifically asked about by search committees, it can result in revocation of any funding tied to your status as a "student in good standing," and it is visible to any future PhD programs you might transfer to. If you fail your candidacy exam due to a misconduct finding, you typically cannot simply retake it at the same institution. You may be expelled from the program.
The fellowship and grant funding dimension is particularly brutal. If you're on an NSF, NIH, SSHRC, or similar grant-funded fellowship when a misconduct finding occurs, the funding agency may require repayment of funds already received. That's not a theoretical risk. It happens.
What This Means for International Students
For international PhD students on F-1, J-1, or equivalent student visas, the stakes go beyond the academic. Your immigration status depends on maintaining "full-time enrollment in good standing." A formal academic misconduct finding that results in suspension or expulsion from your program doesn't just end your PhD, it can trigger a requirement to leave the country.
The timeline matters too. International students often have fewer informal support networks at the institution. When an accusation comes, they're less likely to know which offices to contact, which ombudsman handles graduate student complaints, or how to get emergency legal advice. They may be further from family support. They may be dealing with language barriers in formal institutional communication. The structural disadvantage compounds the already-severe consequences.
⚠️ International Students: This Is a Legal Issue, Not Just an Academic One
If you are an international student on a visa and you receive any formal notice related to academic misconduct, treat it as both an academic matter and a legal matter from the start. Contact your institution's international student office before responding to any misconduct notice. Depending on your visa type, your ability to remain in the country may be at stake, not just your degree. This is not an overstatement. Get advice from an immigration attorney if you can, even before the proceeding formally begins.
The False Positive Problem: Why International and ESL PhD Students Are at Special Risk
AI detectors work by modeling the statistical patterns of human-written versus machine-generated text. They look for things like perplexity (how unpredictable the word choices are), burstiness (whether sentence lengths vary), and various syntactic features that differ between AI output and typical human prose.
The problem is that these tools were trained predominantly on native English text. The "human-written" baseline they're calibrated against is written by people who grew up speaking English, writing in English, and who have idiosyncratic quirks, informal habits, and the kind of syntactic variety that comes from writing without constant attention to grammatical correctness.
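To make "perplexity" and "burstiness" less abstract, here is a minimal, illustrative Python sketch of one crude proxy detectors rely on: how much sentence length varies across a passage. This is not the implementation any real detector uses, and the two sample passages are invented for the example; the point is simply that uniform, carefully regular prose produces a low variation score regardless of who wrote it.

```python
import re
import statistics

def sentence_length_variation(text: str) -> dict:
    """Crude 'burstiness' proxy: variation in sentence length across a passage."""
    # Rough sentence split on terminal punctuation; not a real tokenizer.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    spread = statistics.pstdev(lengths)
    return {
        "sentences": len(lengths),
        "mean_length": round(mean, 1),
        "std_dev": round(spread, 1),
        # Low values mean very uniform sentence lengths, the pattern detectors
        # associate with machine text AND with careful, rule-following prose.
        "variation_ratio": round(spread / mean, 2),
    }

# Invented sample passages for illustration only.
uniform = ("The framework is applied to the data. The results are then compared "
           "with prior findings. The implications are discussed in the final section.")
bursty = ("I hesitated over this framework. It works, mostly, until you push it "
          "against archival cases from the 1970s, where its assumptions about "
          "state capacity fall apart. Then it doesn't.")

print(sentence_length_variation(uniform))
print(sentence_length_variation(bursty))
```

Real detectors model far more than sentence length, but the structural point carries: the score reflects surface statistics of the text, not who produced it or how.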
How ESL PhD Writing Looks to a Detector
Now think about how a highly educated non-native English speaker writes an academic qualifying exam essay. They choose words carefully because they want to make sure the meaning is precise and correct. They use slightly more formal vocabulary than a native speaker might, because formal academic vocabulary is what they've been taught and what they practice. Their sentences tend to be more grammatically regular because they're applying explicit rules rather than producing language instinctively. They use fewer contractions. They avoid colloquialisms. They produce text that is, by any external measure, correct, clear, and well-organized.
That careful, rule-following formal prose pattern is exactly what AI detection tools were trained to identify as AI-generated. The regularity, the formality, the lower perplexity score, the smaller standard deviation in sentence length: all of these are features of AI output AND features of careful non-native writer output. The tools cannot distinguish between them. Multiple peer-reviewed studies have confirmed this, most notably a 2023 study in which GPT detectors flagged over half of essays written by non-native English speakers as AI-generated while flagging fewer than one in twenty essays written by native English speakers on identical prompts.
For PhD students, this problem is worse than for undergrads for a specific reason: doctoral-level writing is expected to be more precise, more consistent, and more formally registered than undergraduate writing. The very qualities your committee expects of your qualifying exam writing are the qualities that will score highest on a detection tool's probability scale.
The Double Bind for Technical Writers
STEM PhD students face an additional layer of this problem. Technical academic writing in fields like biology, chemistry, engineering, and computational science follows extremely rigid conventions. Methods sections, results descriptions, and literature review summaries are written in a constrained, standardized register. When a biology PhD student from South Korea or an engineering PhD student from India writes in the exact format that their field demands, their prose inevitably resembles AI output because AI was trained on the same genre conventions.
The irony is significant. You learn to write like a scientist by reading scientific papers. AI tools learned to produce scientific text by reading scientific papers. If you've internalized the genre conventions successfully, your text will resemble AI output structurally, regardless of whether any AI was involved in producing it.
One Tool International PhD Students Use
Some international PhD students face a different kind of challenge: they have complex, original ideas, but their English expression doesn't yet fully convey the sophistication of their thinking. The technical concepts are there. The analytical framework is solid. But the sentences come out clunky or overly literal in ways that can make even a human reviewer underestimate the quality of the underlying argument.
This is one use case where tools like humanlike.pro have found a genuine audience among international doctoral students. The tool takes your existing text, written in your own words with your own ideas and structure, and refines the English expression to sound more natural and clearly academic, without changing the content or the argument. You remain the author of the intellectual substance. The tool helps you express it in the kind of clean academic English that committee members expect, without the awkward phrasing that can sometimes make non-native academic writing harder for readers to evaluate fairly. It's a writing aid at the language level, not a ghostwriter, and that's exactly the distinction that matters when your candidacy exam is on the line.
How Faculty Are Actually Using Detectors on PhD Work
Let's be direct about what's happening in practice, because the institutional messaging around this is almost always sanitized.
Faculty are using AI detection tools informally, inconsistently, and often without telling students. They're running documents through Turnitin's AI detection layer (which is now built into the platform many universities already use for plagiarism detection), GPTZero, Originality.ai, or less well-known tools. They're doing this out of curiosity, out of concern, or because a piece of writing struck them as unusually fluent for a particular student.
The key problem is that a detection score is often treated as evidence rather than as a flag for further investigation. If the score is high, faculty frequently interpret it as confirmation of something they already suspected, rather than as an input to a process that requires more work. This is cognitively natural but academically inappropriate. A detection score is not evidence of AI use. It is a statistical probability generated by a tool with known limitations and documented biases.
The Informal Conversation Problem
At the doctoral level, committee politics mean that accusations don't always start as formal proceedings. They often start as informal committee conversations: "I ran Priya's written qual through GPTZero and got a 68% score. What do you think?" That informal conversation, between your committee members, can shape your evaluation in ways that are never documented and that you'll never have a chance to respond to.
This is part of why proactive protection matters so much. If your writing process is already documented, and your advisor already knows your process, the informal conversation is likely to be resolved in your favor before it ever reaches you. If it's not documented, the conversation has nowhere to go except toward a formal inquiry that you'll have to navigate reactively.
The Tool Reliability Problem
The leading AI detection tools have documented accuracy problems. Turnitin's AI detection feature has a published false positive rate. GPTZero has released multiple versions attempting to reduce bias, with mixed results. Originality.ai's own documentation acknowledges it should not be used as sole evidence of misconduct. None of this stops faculty from using these tools as if they were reliable indicators.
A high score on an AI detection tool tells you, with some statistical confidence, that the text has low perplexity and low burstiness relative to the training distribution. That's it. It doesn't tell you who wrote it, when they wrote it, or why those statistical features are present in the text. For a non-native speaker writing careful academic prose, the "why" is obvious: they wrote it exactly as a highly motivated academic writer trying to produce correct, formal, appropriate academic English would write it.
How to Raise the AI Policy Question With Your Advisor
Most PhD students avoid this conversation because it feels like an invitation to suspicion. "If I ask about AI policy, won't my advisor think I'm planning to use AI?" That fear is understandable and almost entirely wrong.
Advisors and committee members are faculty. They've been trained to teach, to research, and to mentor. Almost none of them have been trained in how to implement AI policy, how AI detection tools work, or what appropriate responses to high detection scores look like. Most of them are figuring this out as they go. When a PhD student proactively raises the AI policy question, the most common faculty response is relief, not suspicion. You've given them the opportunity to think clearly about something they've been avoiding.
What to Say and When to Say It
The best time to raise this question is before you submit your qualifying exam materials, not after. Ideally, you do it in a regular meeting with your advisor two to three months before your exam is scheduled.
The framing matters. Don't open with "Am I allowed to use AI?" That framing centers you as a potential rule-violator trying to find the limits. Instead, open with something like: "I want to make sure I understand the expectations around AI tools for the qualifying exams. I've seen a lot of variation across departments and I'd rather know where our committee stands before I start writing."
That framing centers you as a responsible professional who wants to do the work correctly. It invites your advisor to be helpful and clarifying rather than watchful. It also opens the conversation for you to mention, if relevant, that you're an international student and that you're aware of the false positive problem with AI detectors for non-native writers. Mentioning that concern now, proactively, means that if a detection issue arises later, you've already laid groundwork.
Raising the AI Policy Question With Your Advisor: What to Expect
Pros
Gets the expectations documented or at least verbally established before your exam
Positions you as a serious, self-aware doctoral student rather than someone who waits for problems to arise
Opens the door to discussing your documentation habits, which works in your favor
Allows you to flag the false positive issue for non-native speakers before it becomes a crisis
Gives your advisor the opportunity to advocate internally for a written policy if none exists
Reduces the chance that an informal detection score becomes an accusation, because your advisor already knows your process
Cons
Raises the topic of AI in a context where it wasn't previously top of mind for your advisor
Some advisors in older generations may interpret any AI discussion as suspicious, though this is rare at the doctoral level
If your advisor is disorganized or dismissive, you may not get a useful answer
The conversation may reveal that no clear policy exists, which is useful information but somewhat stressful to confirm
How to Document Your Writing Process: A Practical Protection System
If you ever face an AI accusation, the single most useful thing you can have is a documented record of your writing process. Not a statement claiming you didn't use AI. A factual record showing that your writing developed over time in ways that are consistent with human writing and inconsistent with AI generation.
This documentation needs to exist before any accusation happens. If you start creating it after an accusation has been made, it will be viewed skeptically regardless of its accuracy. The protection value comes from the fact that the documentation predates any potential problem.
Building a Writing Process Documentation System Before Your Qualifying Exams
1. Use a version-controlled writing environment
Write your qualifying exam materials in Google Docs or another platform with automatic version history. Every session, every edit, every moment where you delete a paragraph and rewrite it, that history is preserved with timestamps. Google Docs version history is particularly useful because it shows exactly when edits were made and what changed. If you later need to demonstrate that your writing developed organically over multiple sessions, you can show the exact timeline. This is the single most valuable documentation you can have.
2. Keep a process journal or research log
Before and during your exam-writing period, keep a brief daily log. It doesn't need to be elaborate. Three to five sentences: what you were working on, what sources you consulted, what you were struggling with, what breakthrough you had. This log serves two purposes. First, it demonstrates that you engaged with the material substantively over time. Second, it gives you a narrative of your intellectual process that you can reference in a hearing. A date-stamped log that says "Stuck on the methodological tension between X and Y, rereading Bourdieu" is evidence that you were actually thinking about the material.
3. Save all research notes, highlights, and drafts
Keep your reading notes, annotated PDFs, highlighted physical or digital copies of sources, and any handwritten drafts. The messy early-stage materials are actually the most convincing evidence of human writing. Handwritten outlines, crossed-out paragraphs, margin annotations: these are things that AI doesn't produce and that a committee cannot reasonably dismiss. If you use a reference manager like Zotero or Mendeley, your notes and annotation history are automatically time-stamped.
4. Document your revision process with comments
As you revise, use the comments function in your word processor to leave yourself notes about why you made specific changes. "Changed this sentence because the passive voice was burying the agent" or "Moved this paragraph up because the logical flow was unclear." These comments are evidence that you were making deliberate editorial choices, which is exactly what human writers do and what AI-generated text does not reflect.
5. Take time-stamped photographs of handwritten planning materials
If you brainstorm, outline, or plan on paper, photograph those materials with your phone immediately after creating them. Your phone camera automatically timestamps photos in metadata. A photograph of a handwritten outline taken at 11:47 PM on the first day of your qualifying exam period is strong, simple evidence that you were actively working through the material by hand at a specific moment. It's not definitive on its own, but it contributes to a larger picture. (A quick way to confirm the timestamp is actually embedded in the file is sketched just after this list.)
6. Back up your files with automatic cloud syncing
Use Dropbox, Google Drive, or OneDrive with automatic syncing enabled so that your file's modification timestamps are recorded in the cloud, not just on your local machine. Local file timestamps can be questioned or accidentally altered. Cloud timestamps are independently verified and much harder to dispute. If a file shows consistent modification activity across a ten-day period with no long gaps, that's consistent with how human writing actually develops.
7. Request a copy of the exam prompt as soon as you receive it
For take-home qualifying exams, email your program coordinator or advisor immediately upon receiving the exam to confirm you've received it. This establishes the start timestamp of your exam period via an email chain. If you later need to demonstrate the timeline of your work, the email record shows when you first had access to the questions.
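For the photograph step above, it's worth confirming that your phone actually embedded a capture timestamp before you rely on it, since some messaging apps and upload paths strip metadata. Here is a minimal sketch, assuming the Pillow library is installed; the filename is hypothetical.

```python
from PIL import Image

def photo_timestamps(path: str) -> dict:
    """Read the capture timestamps a phone camera typically embeds in a photo's EXIF data."""
    exif = Image.open(path).getexif()
    stamps = {}
    if 306 in exif:                      # DateTime tag, stored in the base IFD
        stamps["DateTime"] = exif[306]
    exif_ifd = exif.get_ifd(0x8769)      # the Exif sub-IFD
    if 36867 in exif_ifd:                # DateTimeOriginal tag: the capture time
        stamps["DateTimeOriginal"] = exif_ifd[36867]
    return stamps

# Hypothetical filename for illustration.
print(photo_timestamps("qual_outline_day1.jpg"))
```

If the output comes back empty, keep the originals from your camera roll rather than copies exported through a messaging app, so the embedded timestamps survive.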
💡 Start This System Now, Not the Week Before Your Exam
The documentation habits described above are most protective when they're consistent across your entire writing period, not hastily assembled in the final week. If your version history shows that you started the document five days before submission but then shows no activity for three of those days, that pattern can actually raise questions rather than resolve them. Start the system early, keep it going consistently, and let the record reflect the real, sometimes slow, sometimes blocked process of actual human writing.
If an Accusation Has Already Happened: What to Do
If you're reading this because you're already in a situation, not because you're trying to prevent one, here's what matters right now.
Do Not Respond to the Initial Contact as if It Were Informal
Academic misconduct proceedings at the doctoral level often begin with an informal conversation: "I just want to ask you about your process." Do not treat this as a casual mentoring conversation. Your response to this initial contact can define the entire subsequent process. Write down what was said as soon as the meeting ends. Note who was present, what was specifically said, and what was specifically asked. Date and time the note. Email it to yourself so there's a timestamp.
You can be cooperative and transparent in that meeting without making statements that could later be interpreted as admissions. "I wrote every word of this exam myself over four days. I have my version history and my notes available if that would be helpful" is both honest and strategically sound.
Request the Specific Evidence
You have the right to know what evidence is being used against you. Ask directly: "What specific evidence or information prompted this inquiry?" If the answer is an AI detection tool score, ask for the specific tool, the specific score, and the specific sections flagged. Then request your institution's written policy on the use of AI detection tools as evidence in misconduct proceedings.
At most institutions, the policy will either not exist or will explicitly state that detection tool scores are not sufficient evidence on their own. Obtaining that written policy is an important step because it immediately reframes the conversation from "you're accused of something" to "this requires more than a detection score to proceed."
Contact the Graduate Student Ombudsman or Advocate
Most research universities have a graduate student ombudsman or an academic advocate whose job is to help students understand and navigate formal processes. This person is not your advocate in the legal sense, but they can explain what the formal process looks like, what your rights are at each stage, and what documentation would be most useful.
Contact this office early. Going in with documentation and a clear account of your process, rather than arriving reactively after the process has advanced, puts you in a structurally stronger position.
Gather and Present Your Process Documentation
This is where the preparation described earlier in this guide pays off. If you have version history, a process journal, annotated sources, and revision comments, present them systematically. Don't just dump a folder of files. Create a brief timeline document: Day 1 of exam period: received prompt, began reading notes. Day 2: wrote 800-word draft of section A. Day 3: rewrote section A, began section B. And so on. That kind of organized, concrete, date-stamped account is much harder to dismiss than a verbal assurance.
Writing Practices That Reduce Your Detection Risk
Even when you're writing entirely independently, there are habits that can reduce the likelihood of triggering a detection tool, not because you're trying to fool the tool but because these are simply the habits of authentic human writing.
Vary your sentence length intentionally. Mix short, direct sentences with longer, more complex ones. AI output tends toward regular sentence lengths. Human writing is burstier.
Use specific examples, including ones drawn from your own research experience or fieldwork. AI doesn't have personal experiences to draw on. Specific, concrete, personally-situated details are markers of human authorship.
Let some of your uncertainty show. Hedges like "I find this framework partially convincing but am uncertain about X" reflect actual intellectual engagement. Over-confident synthesis is a marker of AI output.
Cite in ways that reflect genuine engagement. Instead of "Smith (2019) argues that X," try "Smith's (2019) account of X is persuasive when applied to this case, though it struggles to account for Y." The evaluative specificity reflects real reading.
Write first drafts before editing. Your first drafts will be rougher and more idiosyncratic. That roughness is useful. Don't delete it entirely in revision; maintain some of the character of your genuine first-pass thinking.
Use field-specific jargon as you naturally would. Technical vocabulary specific to your subfield is less likely to be flagged than the generic formal vocabulary that appears across disciplines.
None of these practices constitute gaming the system. They're descriptions of what good academic writing actually looks like when it comes from genuine engagement rather than from text generation. If you're doing the intellectual work, these habits will come naturally. If you're not, no documentation strategy will save you.
The Broader Problem: What PhD Programs Should Be Doing
This section isn't for you to act on, but it matters for context. The situation PhD students find themselves in is a product of institutional failure, not individual failure.
Graduate programs that have not written a clear, specific AI policy for candidacy-stage exams have failed their students. Faculty who use AI detection tools without disclosing that they're doing so, and without understanding those tools' documented limitations, are using unreliable evidence in ways that can affect someone's career. Departments that apply vague blanket academic integrity policies to a stage of doctoral training that is fundamentally different from coursework are applying the wrong framework.
The appropriate institutional response to this situation includes: written, specific AI policy for each document type in the candidacy stage, disclosed to students before exam periods begin; documented guidance on how and when AI detection tools may be used by committee members and under what conditions they constitute sufficient grounds to begin a formal inquiry; training for faculty on the documented bias of detection tools against non-native English writers; and explicit channels for students to raise AI policy questions before exam periods begin.
Most programs aren't there yet. Until they are, you're on your own to protect yourself. The tools in this guide exist to help you do that.
What Good AI Policy at the Candidacy Stage Would Actually Look Like
If you're in a position to advocate for policy in your department, whether through a graduate student association, a departmental committee, or a direct conversation with your graduate director, here is what genuinely useful candidacy-stage AI policy would contain.
A document-by-document specification of what AI tool use is and isn't permitted. Not a blanket statement but specific guidance: take-home written qualifying exams, literature reviews submitted as candidacy materials, dissertation proposals, and other component types each named individually.
A disclosure requirement that applies symmetrically to both students and faculty. If students are required to disclose AI use, faculty should be required to disclose when and how they use AI detection tools on student work.
A defined process for what happens when a detection tool flags a document, including mandatory disclosure to the student, an opportunity for the student to respond with process documentation, and a requirement that detection scores not be used as sole evidence.
A written acknowledgment of the false positive problem for non-native English writers, with guidance to committee members on how to account for it in their evaluation.
A clear statement of what the consequences are for a finding at different severity levels, so students understand the actual risk landscape.
An annual review provision, given that AI capabilities and tool accuracy both change rapidly.
💡 Expressing Your Ideas in Clean Academic English
International PhD students often have strong technical arguments but struggle to express them in the formal academic English that committee members expect. Humanlike.pro helps you refine your written English without changing your content or ideas, so your scholarship gets evaluated on its intellectual merits rather than on language fluency.
Key Takeaways
The candidacy stage is the highest-risk stage for AI-related academic misconduct accusations, and the stage with the weakest policy protection.
If you're an international or ESL PhD student, your risk of a false positive from an AI detection tool is significantly higher than for native English speakers, and your consequences if accused are more severe.
Document your writing process systematically before your exam period begins. The documentation needs to exist before any accusation, not after.
Raise the AI policy question with your advisor proactively, months before your exam. This conversation almost never goes badly and often goes very well.
If an accusation happens, treat it as a formal matter from the first informal contact. Document every conversation, request the specific evidence, and contact your graduate student ombudsman.
Detection tool scores are not proof of anything. They are statistical probabilities generated by tools with documented bias. Knowing this, and being able to articulate it calmly, is part of your protection.
Frequently Asked Questions
My PhD program doesn't have a written AI policy for qualifying exams. Does that mean I can use AI tools?
Absence of a written policy is not permission. At most institutions, the default academic integrity framework, which prohibits submitting work that isn't your own, still applies even when a specific AI policy doesn't exist. More importantly, the absence of a written policy means that the definition of acceptable behavior is left to the discretion of your committee members, which is actually a riskier situation than having a clear prohibition. The safest approach is to assume that AI-generated text in a qualifying exam is not permitted unless you've been explicitly told otherwise in writing, and to document your own writing process thoroughly as protection.
Can my committee run my qualifying exam through an AI detector without telling me?
At most institutions, the answer is yes. There is typically no policy requiring faculty to disclose that they've used an AI detection tool on a student's work, and no policy requiring them to share the results before taking any action. Faculty may scan your document out of curiosity, out of concern, or as a routine practice, and you may only find out if the result prompts them to raise questions. This is exactly why proactive documentation matters so much.
I'm an international student and I'm worried an AI detector will flag my writing even though I didn't use AI. What can I do?
Your concern is well-founded and supported by substantial research. Multiple studies have shown that AI detection tools flag non-native English academic writing at dramatically higher rates than comparable native English writing. The most effective protection is documentation: version history showing your writing developing over time, a process journal, annotated sources, handwritten notes, and revision comments. It's also worth raising this issue proactively with your advisor before your exam.
What happens if I'm found responsible for AI misconduct during my candidacy exam? Can I appeal?
Every institution has an appeal process for academic misconduct findings, and at the doctoral level these usually involve multiple levels of review. The typical sequence is: an initial finding by a departmental committee, an appeal to the graduate school academic integrity office, and a final appeal to a university-level appeals body. The likelihood of a successful appeal depends heavily on the quality of your process documentation and on whether the initial finding relied primarily on a detection tool score.
Is it okay to use AI tools for parts of my candidacy process, like organizing my notes or generating an outline?
This depends entirely on what your institution and committee consider permissible. In general, the academic integrity concern is about submitted work. Using AI to organize your reading notes, brainstorm ideas, or generate a preliminary outline you then work from is in a different category than submitting AI-generated text. However, the line is not always clean. The right answer is to ask your advisor explicitly before your exam period.
Can I be accused of AI misconduct based on a detection score alone, without any other evidence?
Formally, most institutional academic integrity policies require a preponderance of evidence, not just a single data point. A detection score alone, without corroborating evidence such as stylistic inconsistency or direct evidence of AI tool use, should not be sufficient to sustain a formal misconduct finding. In practice, however, the formal proceeding is only the most extreme outcome. Before any formal proceeding, a detection score can prompt informal committee conversations that shape your evaluation in undocumented ways.
Write With Confidence in Academic English
If you're an international PhD student writing candidacy materials, humanlike.pro helps you express your ideas in precise academic English so your writing is evaluated on its intellectual merit, not its fluency.