Most schools spent 2023 trying to detect AI-generated student work. By 2025, the consensus is clear: detection doesn't work, and pretending it does creates its own problems.
Here's where effective schools and districts have actually landed.
What didn't work
AI-detection tools. GPTZero, Turnitin's AI detector, and similar tools produce both false positives and false negatives at rates unacceptable for academic consequences. Multiple universities walked back AI-detection-based policies after wrongly accusing students of honor code violations.
Outright AI bans. Banning AI tools in a world where they're integrated into word processors, browsers, and phones is unenforceable. Students who'd never cheat still use Grammarly, autocomplete, and search engines with AI-powered results.
Teacher-level vigilance. Asking individual teachers to detect AI-written work in their own classes puts them in an impossible position: they end up grading on intuition, which erodes trust between students and teachers.
What's working
1. Shifting high-stakes assessment in-class
The single most effective change is also the simplest: move assessments that matter into supervised, in-person conditions. Take-home essays become in-class essays. Digital quizzes become paper quizzes administered during class time.
This doesn't eliminate learning that happens at home — it just stops pretending you can evaluate it reliably. At-home work becomes practice. In-class work becomes assessment.
2. Oral defenses of written work
Students submit written work digitally, then briefly explain their argument verbally. Two minutes per student. Quick "walk me through your main point" conversations reveal whether they actually produced what they submitted.
This is high-impact for upper grades and college. It doesn't scale to large lecture sections but is realistic for typical K-12 class sizes.
3. Process artifacts alongside final products
Instead of grading just the final essay, require:
- Brainstorm notes (handwritten or timestamped)
- Outline drafts
- Revision history
- Source annotations
A student who can produce the full creation arc of their essay probably wrote it. A student who can only produce a polished final version probably didn't. This reframes "how do we detect AI?" as "how do we evaluate the process?"
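The "full creation arc" idea can even be checked mechanically, at least as a first pass. Here is a minimal sketch, assuming revision history can be exported as (timestamp, characters-added) pairs; the `paste_like` helper and its 80% threshold are hypothetical illustrations, not any platform's actual API:

```python
def paste_like(revisions, burst_share=0.8):
    """Flag a revision history in which one edit contributed most of the text.

    `revisions` is a list of (timestamp, characters_added) pairs, as might
    be exported from a word processor's version history (an assumption;
    real exports vary by platform).
    """
    total = sum(chars for _, chars in revisions)
    if total == 0:
        return False  # nothing submitted, nothing to flag
    largest = max(chars for _, chars in revisions)
    return largest / total >= burst_share

# A gradual drafting arc across three evenings...
gradual = [("2025-03-01T19:02", 180),
           ("2025-03-02T16:40", 420),
           ("2025-03-03T20:15", 310)]
# ...versus nearly everything arriving in one burst the night it's due.
pasted = [("2025-03-03T23:58", 905),
          ("2025-03-03T23:59", 12)]

print(paste_like(gradual))  # False
print(paste_like(pasted))   # True
```

This is a heuristic for starting a conversation, not a detector: it only distinguishes a drafting arc from a single paste, and it should prompt a "walk me through your process" question rather than an accusation.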
4. Assessment redesign toward synthesis
AI is strongest at producing plausible, well-structured responses to generic prompts. It's weakest at:
- Tasks requiring specific classroom context ("Build on our discussion from Tuesday about...")
- Tasks requiring physical artifacts ("Use the data table you collected in Thursday's lab...")
- Tasks requiring personal reflection with specific details
- Tasks requiring the student to evaluate and integrate multiple course-specific readings
Prompts rewritten along these lines can still be AI-assisted, but the assistance is no longer the whole answer.
Rewrite one prompt as an experiment
Take a traditional essay prompt you've used before. Rewrite it to require specific references to in-class discussions and personal reflection. Give both versions to two different sections. Compare the writing quality and — importantly — how distinctive each student's work is.
5. Explicit AI-literacy instruction
Teaching students how to use AI well — with citation, critical evaluation, and understanding of when it fails — is more durable than prohibition. Students graduate into workplaces that use these tools. Pretending they don't exist is poor preparation.
This usually looks like:
- Explicit lessons on what AI is good and bad at
- Assignments that require AI use with documentation
- Discussions on when AI use becomes plagiarism
- Comparing AI output to student work to build critical reading skills
6. Paper for summative, digital for formative
This is the pragmatic landing spot for many schools:
- Summative assessments (unit tests, finals) — paper, in-class, no devices
- Formative assessments (practice quizzes, homework) — digital, AI-tolerant
- Essays and written response — a mix, with the highest-stakes ones in-class
The tooling for this has matured. Modern paper-test platforms scan and auto-grade, sync to LMS gradebooks, and provide the same analytics digital quizzes do.
What doesn't belong in your policy
Stop trying to:
- Detect AI writing via software
- Ban specific tools that students can easily swap
- Rely on "I can tell when it's AI" instincts
- Treat all AI use as plagiarism
These approaches fail and undermine trust between students and teachers.
Key takeaway
The assumption that "we need to catch cheaters" is the wrong frame. The actual goal is "we need assessments that produce trustworthy evidence of what students know." Shift assessment design toward that, and the cheating question largely solves itself.