
AI in education: tutors, cheating and what schools can realistically do

The essay is dead, long live the essay. As generative AI breaks the traditional homework model, schools are caught between an integrity crisis and the greatest learning opportunity in a generation. Here is the realistic path through the chaos.

by Ian Lyall

The immediate crisis facing schools isn’t a sentient robot standing at the front of the classroom; it is the quiet, chaotic erosion of the homework essay as a metric of truth. For two decades, I have covered the "edtech" revolution, a sector that has frequently over-promised and under-delivered, offering us smart whiteboards that nobody uses and tablets that become expensive distraction machines. But Generative AI (GenAI) is different. It is the first wave of technology that threatens to break the levee between "learning" and "output," and it has arrived with a speed that has left policymakers breathless.

We are currently caught in a pincer movement. On one flank, we face a deluge of credible-looking but synthetic student work that defies traditional plagiarism detection. On the other, we see the genuine, tantalising promise of a "super-tutor" for every child, a personalised learning engine that never tires, never judges, and costs pennies.

The tension is practical, not philosophical. Headteachers do not need another abstract debate about the future of work; they need to know what to do on Monday morning when a Year 10 student hands in a piece of coursework that is suspiciously perfect, yet technically original. The realistic path forward isn’t a ban (impossible) or total surrender (negligent). It is a rapid, unsentimental shift in how we value "work."

The Arms Race is Over (And We Lost)

Let’s be blunt: the take-home essay, as a proxy for understanding, is effectively dead. If a 14-year-old can prompt a Large Language Model (LLM) like ChatGPT or Claude to "explain the causes of the First World War in the style of a GCSE student, including one minor grammatical error to make it look authentic," and receive a B-grade response in seconds, the assessment model is broken.

For a brief window in 2023, schools hoped that "AI detectors" would save them. They won’t. The evidence is now unambiguous: detection tools generate false positives that penalise honest students (often those writing in a non-native language), and they are easily bypassed by anyone willing to paraphrase or simply ask the AI to "rewrite this to bypass detection." Relying on software to catch software is a losing battle. The Russell Group universities and the Joint Council for Qualifications (JCQ) have effectively conceded this, moving the conversation from "detection" to "resilience."

This leaves schools with only one realistic control: a return to the physical. We are likely to see a bifurcation of assessment. "High-stakes" validation of competence will retreat into the exam hall: invigilated written exams, oral presentations (vivas), and in-class coursework done under supervision. Everything else (homework, research projects, drafting) must be treated as "AI-contaminated" by default.

The "Super-Tutor" Mirage?

However, if we treat AI solely as a threat to integrity, we miss the "Bloom’s 2 Sigma" opportunity. In 1984, educational psychologist Benjamin Bloom found that average students who received one-to-one tutoring performed two standard deviations better than those in conventional classrooms, outperforming roughly 98% of their conventionally taught peers.
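
For readers who want the arithmetic behind that figure, here is a quick check. The "two sigma" number is Bloom's; the calculation below is just the normal curve, using Python's standard library:

```python
from statistics import NormalDist

# A score two standard deviations above the mean sits at roughly the
# 97.7th percentile of a normal distribution -- i.e. the top 2% or so.
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.1%}")  # prints 97.7%
```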

Until now, giving every child a private tutor was economically impossible. AI changes the economics. Adaptive learning platforms can now diagnose a student's specific weakness (say, adding fractions with unlike denominators) and serve up the exact practice problem needed to fix it, explaining the steps in real time. This is not the "death of the teacher"; it is the scaling of the teacher’s reach.
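
To make "diagnose and serve" concrete, here is a deliberately toy sketch of that selection loop. Real adaptive platforms use far richer student models (Bayesian knowledge tracing, item response theory); the skill names and mastery scores below are invented for illustration:

```python
# Toy sketch of adaptive practice selection, not any vendor's algorithm.
# Skill names and mastery scores are invented for illustration.
mastery = {
    "adding fractions, like denominators": 0.92,
    "adding fractions, unlike denominators": 0.41,  # the diagnosed weakness
    "simplifying fractions": 0.78,
}

question_bank = {
    "adding fractions, unlike denominators": [
        "1/3 + 1/4 = ?",
        "2/5 + 1/10 = ?",
    ],
    # ...one list of practice items per skill
}

def next_question(mastery, question_bank):
    """Serve a practice item targeting the student's weakest skill."""
    weakest = min(mastery, key=mastery.get)
    return weakest, question_bank[weakest][0]

skill, question = next_question(mastery, question_bank)
print(f"Practise '{skill}': {question}")
```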

The danger here is not the AI, but the passive student. A student who asks ChatGPT to "write my answer" learns nothing; in fact, they engage in "cognitive offloading," outsourcing the neural struggle required to build long-term memory. But a student who asks, "Quiz me on the key themes of Macbeth, and don't give me the answer until I guess," is engaging in high-level active recall. The school’s job is no longer just teaching the subject, but teaching the interrogation of the machine.
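
The difference between the two prompts is easy to show in code. Here is a toy, offline stand-in for the "quiz me" pattern: the answer stays hidden until the student commits to a guess (the questions are my own invention):

```python
# Active-recall quiz loop: the answer is revealed only after the student
# commits to a guess -- the opposite of "write my answer".
cards = [
    ("Which ambition-driven character is told he 'shalt be king hereafter'?",
     "macbeth"),
    ("Whose sleepwalking scene dramatises guilt?", "lady macbeth"),
]

score = 0
for question, answer in cards:
    guess = input(f"{question}\nYour guess: ").strip().lower()
    # Only after the guess is committed do we reveal the answer.
    if guess == answer:
        print("Correct!")
        score += 1
    else:
        print(f"Not quite -- the answer was: {answer}")
print(f"Recall score: {score}/{len(cards)}")
```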

Automating the Drudgery

There is a third stakeholder in this equation: the overworked teacher. The UK Department for Education (DfE) has rightly identified AI’s potential to reduce administrative burdens. We are seeing tools that can draft lesson plans, generate quiz distractors, differentiate reading materials for dyslexic students, and even perform "first pass" marking on routine assignments.

If AI can free up five hours a week for a teacher to actually speak to students or run an extracurricular club, the trade-off is worth the risk of occasional algorithmic weirdness. However, the rule of "Human-in-the-Loop" must be absolute. No grade affecting a student’s permanent record should ever be assigned solely by an algorithm. We know that LLMs can "hallucinate" facts and exhibit bias; a teacher’s professional judgement must remain the final arbiter.
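
A hard "Human-in-the-Loop" rule is simple to enforce if the software is built for it. A minimal sketch of the gate (the class and field names are my own invention, not any real school-management product):

```python
# Sketch of a hard human-in-the-loop gate: an AI mark is only ever a draft.
# Class and field names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkedAssignment:
    student_id: str
    ai_suggested_grade: str  # first-pass mark proposed by the model
    teacher_approved_grade: Optional[str] = None  # set only by a human

    def final_grade(self) -> str:
        if self.teacher_approved_grade is None:
            raise PermissionError(
                "No grade enters the record without teacher sign-off."
            )
        return self.teacher_approved_grade

essay = MarkedAssignment(student_id="Y10-042", ai_suggested_grade="B+")
essay.teacher_approved_grade = "A-"  # teacher reviews, overrides, signs off
print(essay.final_grade())
```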

The New Rules: A Model Policy Starter Kit

So, what does a functional AI policy look like today? It is not a document that bans technology; it is a framework that categorises it. Many forward-thinking schools are adopting a "Traffic Light" system for assignments. This offers clarity to students, parents, and staff.

🔴 Red Zone: AI Banned

  • Context: In-class timed essays, mental arithmetic, vocabulary tests, oral exams.
  • Purpose: To assess raw competence and memory.
  • The Rule: "No technology allowed. This is about what you know."

🟡 Amber Zone: AI Assisted

  • Context: Coursework, research projects, creative writing drafts.
  • Purpose: To teach collaboration and iteration.
  • The Rule: "You may use AI to brainstorm ideas, structure your argument, or critique your draft. You must not use it to write the final prose. You must submit a 'prompt log' or screenshot showing how you used the tool." (One workable log format is sketched after this list.)

🟢 Green Zone: AI Open

  • Context: Specific "AI literacy" tasks.
  • Purpose: To teach critical thinking and prompt engineering.
  • The Rule: "Use AI to generate the output. Your job is then to critique the AI’s work, find its hallucinations, check its sources, and improve upon it."
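
What might an Amber-zone "prompt log" look like in practice? One possible shape, sketched in code; the field names are an assumption, and each school would define its own template:

```python
# One possible shape for an Amber-zone "prompt log" -- the field names
# are an assumption; each school would define its own template.
prompt_log = [
    {
        "tool": "ChatGPT",
        "prompt": "Suggest three ways to structure an essay on Macbeth's ambition.",
        "used_for": "brainstorming structure only",
        "pasted_into_final_prose": False,  # must stay False in the Amber zone
    },
]

# A simple integrity check a school system could run on submission.
for entry in prompt_log:
    assert entry["pasted_into_final_prose"] is False, (
        "Amber zone: AI may shape the draft, but must not write the final prose."
    )
```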

This system moves us away from a binary "cheating vs. not cheating" dynamic and towards a nuanced understanding of tools. It mirrors the adult world: architects use CAD software, but they still need to know how buildings stand up.

The Equity Gap: Access vs. Advantage

We must also talk about inequality. There is a looming "tiered" system where wealthy students access premium, tutor-grade AI models (like GPT-4 or bespoke educational tools) while disadvantaged students are left with free, older, less capable versions—or no access at all due to the digital divide.

The UNESCO guidance on AI in education stresses that public education systems must ensure equitable access. If schools ban AI entirely, they simply hand an advantage to the children whose parents can afford the subscription at home. The only way to level the playing field is to integrate these tools into the school day, ensuring every child learns how to wield them.

Advice for Parents and Students

As a parent, your instinct might be to panic when you see your child using ChatGPT for homework. Don't. But do change how you supervise.

For Parents:

  • Ignore "AI detection" services: Do not pay for online tools that claim to "catch" your child. They are snake oil.
  • Look for "The Flinch": Watch your child work. If they stare at a blank screen and panic, they are over-reliant on tools. If they use AI to get "unstuck"—to ask for a definition or a summary of a complex text—but then keep writing, they are using it correctly.
  • Treat it like a calculator: You wouldn't use a calculator to learn your times tables, but you would use it for calculus. Ensure they master the basics first. If they can't write a sentence without AI, they have a problem.

For Students:

  • You are training your replacement: If you use AI to do the thinking for you, you are rendering your own brain obsolete. The value of a human in the future workforce will be the ability to do what the AI cannot.
  • Use it as a sparring partner: Paste your essay into an LLM and ask: "Roast this argument. Find the logical fallacies. Tell me what I missed." Then fix them yourself.
  • Cite it: If you used an LLM, say so. "Ideation assistance provided by Claude 3.5." Transparency is the new integrity.

The Future is Hybrid

The dystopian vision of education—children staring silently at screens while algorithms spoon-feed them content—is a choice, not an inevitability. The alternative is a "human-centric" AI future.

In this future, the AI handles the mundane: the scheduling, the basic marking, the remedial grammar drills. This clears the decks for the human teacher to do what they do best: mentor, debate, inspire, and manage the complex social dynamics of a classroom.

We are moving from an era of "knowledge acquisition"—where the school’s job was to fill a student’s head with facts—to an era of "knowledge management." The facts are everywhere; the skill is in curating, verifying, and synthesising them.

Schools cannot stop the wave. But they can learn to surf it. The essay may be dead, but the need to think clearly, argue persuasively, and verify truth is more alive (and more urgent) than ever.
