AI and student work

What students can and cannot use AI for, and identifying potential misuse.

AI tools are becoming part of students’ lives—used in some lessons and encountered every day outside school. Outwood’s approach is to set clear expectations about appropriate use and help students develop the judgement to use these tools responsibly.

Questions about authenticity in student work are becoming more common. The most reliable approach remains what you already do: knowing your students’ work, spotting inconsistencies, and monitoring the process.

What students can use AI for

The guiding principle is aid, not author: students can use AI to help them understand, explore, and engage with material. But the work they submit must be in their own words, and any use of AI must be acknowledged.

Acceptable uses include:

  • Understanding material: asking AI to explain a difficult paragraph or concept so they can engage with it themselves
  • Exploring ideas: using AI to get a different angle on a topic, to brainstorm, or to generate initial ideas to respond to
  • Reviewing their own draft: asking AI for feedback on something they have already written
  • Practising: asking AI to quiz them on a topic, or to explain something they are revising
Tip

Be clear with students about where AI fits into an assignment—whether it has any place at all, and if so, how. For example: ‘You may use AI to explore ideas or get feedback on a draft, but all submitted writing must be in your own words’.

What students must not do

Students must not submit AI-generated content as their own work without explicit permission from their teacher. That includes asking AI to write part of the work (such as an introduction) or to convert their own notes into a finished answer.

Like asking another person to do the work or copying text verbatim from a source, this is cheating or plagiarism—and it means missing out on learning.

Students must not use humaniser tools either: services designed to rewrite AI-generated text to make it harder to detect. Using such a tool represents a deliberate attempt to deceive, and should be treated more seriously than other forms of AI misuse.

Use in formal assessment

JCQ (Joint Council for Qualifications) guidance is clear: pupils must submit work that is entirely their own. If they use AI in any way, they must acknowledge it appropriately. Failing to do so is considered malpractice and can result in serious consequences, including disqualification.

For any formally assessed work—including GCSE non-exam assessment (NEA), A-level coursework, and vocational qualifications—follow the JCQ AI Use in Assessments document and your centre’s malpractice policy.

Recognising signs of misuse

There is no way to prove conclusively that a piece of work was AI-generated, but that should not prevent you from acting on your professional judgement.

Signals that may indicate AI use include:

  • A noticeable shift in register or vocabulary compared to the student’s usual work
  • Unusually polished, fluent phrasing with little personal voice
  • Generic arguments that don’t engage with the specific question set
  • Confident-sounding but inaccurate factual claims—a common sign of AI hallucination
  • Uncharacteristic formatting, such as bullet points or subheadings appearing in a prose piece
  • An absence of personal context or specific examples you would expect the student to draw on
  • American spellings, which are the default for many AI tools

These are indicators that something may warrant a closer look, not proof of wrongdoing.

AI detection tools

Some services claim to identify whether a piece of text was generated by AI rather than written by a person.

Detection tools have significant limitations:

  • Poor accuracy: false-positive rates are high, meaning they can flag legitimate student work as AI-generated
  • Unequal impact: neurodivergent students and non-native English speakers are disproportionately likely to have their work flagged incorrectly
  • Unreliable results: tools are easy to circumvent, so a negative result can provide false reassurance, not a clean bill of health

AI detection tools are best avoided. Where they are used, treat results as a starting point, not a conclusion: they may prompt a conversation with a student but must not trigger an automatic sanction.

Note

If you still believe a detection service would be valuable, check that an organisational agreement is in place before submitting any student work—this applies regardless of whether the work contains personal information or whether the service is free.

Responding to suspected misuse

A conversation is both more reliable and more educationally valuable than any detection tool.

Ask the student to:

  • Talk through their work—explain their reasoning and describe the choices they made
  • Expand on a specific point or argument in their own words
  • Describe the process they went through to produce the piece

A student who completed the work themselves can generally do this. A student who didn’t typically cannot.

If misuse is confirmed, treat it as you would any other case of cheating or plagiarism.

For formal assessment, if you are unsure whether a concern meets the threshold for reporting malpractice, speak to your Head of Centre or Exams Officer for guidance.