Subject guide · 10 May 2026 · 7 min read

AI detection in MarkMate: a teacher's guide to using risk reports

Cheryl · Head Teacher Administration, NSW. Built MarkMate.

I'm a teacher. I'm also the person who built MarkMate. The AI detection feature is the one I get the most questions about, and the one I get most nervous about teachers misusing.

This post is what I tell colleagues when they ask. Two things up front:

  1. AI detection in MarkMate gives you indicators, not proof. If you're going to use it as evidence in an academic integrity case, you'll need to do more work than reading a risk report.
  2. The detection report is for teachers. It's never shown to students. Students see their feedback; they don't see what you see about whether their work might be AI-generated.

Both points are spelled out on the Compliance & Safety page. They're worth restating because they shape how the rest of this post is meant to be read.

What MarkMate's AI detection actually does

Every submission run through MarkMate gets analysed across four risk categories:

  • Language patterns. The model checks for sentence structures, vocabulary frequencies, and rhetorical patterns common to AI-generated text. Long, evenly weighted sentences. An absence of grammatical quirks. A particular set of connective phrases.
  • Content red flags. The model checks for phrases and constructions that AI tools default to but human writers under exam conditions don't. ("Delve into," "in today's fast-paced," "tapestry of," and other AI tells.)
  • Style inconsistency. The model checks whether the submission's voice matches itself. Sudden shifts in vocabulary range or sentence complexity inside a single piece are flags.
  • Common AI phrases. A fixed list of high-frequency AI tells gets checked. The list is updated as new tools come online and new patterns emerge.

Each category gets a Low, Medium, or High risk rating with supporting evidence. The supporting evidence is the bit that matters; it shows you which sentences triggered the flag, not just an overall score.
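
To make that concrete, here's a minimal sketch in Python of what one category check and its per-category rating might look like. Everything in it, the field names, the thresholds, the phrase list, is an illustrative assumption for the sketch, not MarkMate's actual schema or detection logic.

```python
from dataclasses import dataclass, field

# Illustrative only: the names, thresholds, and phrase list below are
# assumptions for this sketch, not MarkMate's schema or logic.
COMMON_AI_PHRASES = ("delve into", "in today's fast-paced", "tapestry of")

@dataclass
class CategoryRating:
    category: str        # e.g. "common_ai_phrases"
    risk: str            # "Low" | "Medium" | "High"
    evidence: list[str] = field(default_factory=list)  # flagged sentences

def check_common_phrases(sentences: list[str]) -> CategoryRating:
    """Toy version of the fixed-list check: flag any sentence containing
    a known AI tell, then rate risk by the share of sentences flagged."""
    flagged = [s for s in sentences
               if any(p in s.lower() for p in COMMON_AI_PHRASES)]
    share = len(flagged) / max(len(sentences), 1)
    risk = "High" if share > 0.10 else "Medium" if share > 0.03 else "Low"
    return CategoryRating("common_ai_phrases", risk, flagged)
```

Note that the return value carries the flagged sentences themselves, not just the rating. That's the part of the real report you should spend your time on.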

What the report does not do

I want to be specific about the limits, because every false-positive case I've seen has come from a teacher reading more into the report than it can support.

The report does not tell you whether a student used AI. It tells you how strongly the writing pattern matches AI-generated text. Those are different things. Some students naturally write in ways that match AI patterns: heavy use of formal connectives, evenly weighted sentences, polished vocabulary. EAL/D students and high achievers both produce false positives more often than average.

The report does not tell you which AI tool was used. Detection methods are pattern-based. A student who's run their work through three different tools, edited it heavily, and copied it out by hand will not look the same as a student who's pasted ChatGPT output verbatim.

The report does not tell you what to do about a high-risk flag. That's your call, and it should be based on your knowledge of the student, the assessment, and your school's policy.

How to read the report responsibly

For a single student submission, the report shows four ratings (one per category) and the evidence for each. Useful questions to ask yourself, in order:

1. Does this match the student I know?

Most teachers can tell within five seconds whether a piece of writing sounds like the student who handed it in. If the report flags High risk and the writing sounds like the student, the report is probably overreading a stylistic match. If the report flags Low risk and the writing doesn't sound like the student, the report has missed something. Your knowledge of the student is the strongest single signal you have. Trust it.

2. Does the evidence look load-bearing?

If three sentences are flagged in a 1,000-word essay, that's not the same as 30 sentences flagged. Skim the highlighted evidence. If the flagged sentences are the kind a student might genuinely write (a strong opener, a textbook definition, a connective phrase), the flag is weaker than a flag on a paragraph of body argument.
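
To put rough numbers on "load-bearing": a 1,000-word essay runs to something like 60 sentences if you assume about 17 words per sentence, and the share of flagged sentences is the quick arithmetic worth doing. A sketch, with all figures illustrative:

```python
def flagged_share(flagged: int, total: int) -> float:
    """Share of sentences flagged. The 60-sentence figure used below
    assumes ~17 words per sentence in a 1,000-word essay."""
    return flagged / max(total, 1)

print(f"{flagged_share(3, 60):.0%}")   # 5% -- a weak flag
print(f"{flagged_share(30, 60):.0%}")  # 50% -- a very different signal
```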

3. Is there cross-essay context?

MarkMate's similarity check runs across every submission in a class. If three students have submitted suspiciously similar paragraphs, that's a different signal from one student's writing matching AI patterns. Cross-student similarity is much harder to trip as a false positive than language-pattern detection.
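
MarkMate doesn't document how its similarity check works, so as a sketch of the general technique rather than the product's method: a common approach is shingled Jaccard comparison across all pairs in a class. The shingle size and threshold here are arbitrary illustrative choices.

```python
from itertools import combinations

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Overlapping n-word windows; n=5 is an arbitrary choice."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def suspiciously_similar(submissions: dict[str, str], threshold: float = 0.25):
    """Return pairs of class submissions whose overlap exceeds the
    (illustrative) threshold: the cross-essay signal, not the AI one."""
    sets = {name: shingles(text) for name, text in submissions.items()}
    pairs = []
    for a, b in combinations(sets, 2):
        score = jaccard(sets[a], sets[b])
        if score >= threshold:
            pairs.append((a, b, round(score, 2)))
    return pairs
```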

4. Is there a pattern across this student's submissions?

If a student is regularly Low risk and one assessment is suddenly High, that's worth a conversation. If a student is regularly Medium-High because of their natural writing style, that pattern is itself the calibration. The first time you see a flag for that student, you don't know which is which. After three or four assessments, you do.
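
As a sketch of what "the pattern is itself the calibration" means in practice, here's an illustrative rule for when a new rating departs from a student's own baseline. The thresholds are my assumptions, not anything MarkMate implements.

```python
RISK_LEVEL = {"Low": 0, "Medium": 1, "High": 2}

def worth_a_conversation(ratings: list[str]) -> bool:
    """Flag when the latest rating sits well above the student's own
    baseline. A student who is always Medium-High never trips this;
    a sudden Low-Low-Low-High does. Thresholds are illustrative."""
    if len(ratings) < 4:                      # no baseline yet
        return False
    *prior, latest = [RISK_LEVEL[r] for r in ratings]
    baseline = sum(prior) / len(prior)
    return latest - baseline >= 1.5

print(worth_a_conversation(["Low", "Low", "Low", "High"]))       # True
print(worth_a_conversation(["High", "Medium", "High", "High"]))  # False
```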

What to do with a High risk flag

This is the part most teachers want a checklist for. The honest answer is that there isn't a checklist; there's a conversation.

A reasonable protocol:

  1. Re-read the essay yourself, ignoring the report.
  2. Look at the supporting evidence the report provides. Decide whether the flagged sentences are the kind the student would write.
  3. Compare to the student's previous submissions if you have them.
  4. If you're still uncertain, have a low-stakes conversation with the student. Ask them to walk you through their argument. Ask which sources they used. Ask what they found hardest.
  5. Make a note in your records.
  6. Decide whether to escalate. School policy, faculty practice, and the assessment weight all matter here.

Things to avoid:

  • Confronting a student with the AI detection report directly. The report is teacher-facing for a reason. It's not designed to be shown to students or parents as evidence.
  • Treating the flag as a verdict. Patterns are not proof.
  • Letting a single report drive an academic integrity case. Most schools require a body of evidence, not a single tool's output.

What about the false positives?

False positives happen. The most common categories I see:

  • EAL/D students writing in formal registers learned from textbooks. The pattern matches AI partly because it matches the formal English register AI was trained on.
  • High-achieving students who write polished prose. Polish in a Year 12 essay can look like AI to a pattern-matcher.
  • Students who use writing scaffolds heavily. PEEL paragraphs done well produce evenly weighted sentence structures.
  • Students who've copied a few model phrases from a study guide. The pattern-matcher sees the phrases, not the source.

If you're getting a high false-positive rate in your cohort, the issue is usually that the cohort writes in a particular way that overlaps with AI patterns. Adjust your reading of the report accordingly. Don't change the rubric or the marking; change how heavily you weight the AI report relative to other signals.
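
To make "change how heavily you weight the report" concrete, here's a toy blend of signals. The weights and the 0-to-1 scales are entirely my illustration; the point is only that the report's weight is the dial you turn, not the rubric.

```python
def overall_concern(ai_report: float, teacher_judgement: float,
                    cross_essay: float, w_report: float = 0.2) -> float:
    """Toy weighted blend of three 0-1 signals. For a cohort that
    pattern-matches AI (EAL/D, heavily scaffolded writing), lower
    w_report rather than changing the rubric or the marking."""
    w_other = (1.0 - w_report) / 2
    return w_report * ai_report + w_other * (teacher_judgement + cross_essay)
```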

The classroom protocol I use

In my own teaching, AI detection sits inside a wider classroom protocol I'm transparent with students about. Students know:

  • Drafts can be checked in MarkMate. They can use the feedback to improve their work.
  • Final submissions are marked by me, with MarkMate handling the rubric-scoring layer and my professional judgement on top.
  • AI detection is run on every submission. They will not see the report.
  • If a submission is flagged High risk, I'll have a conversation with them about it. The conversation is not a punishment. It's an academic integrity check.
  • The decision about consequences is made by me, with reference to school policy, not by MarkMate.

This protocol isn't perfect. It is, however, the protocol I'd want a tool to support, rather than a tool that makes the decision for me. MarkMate's AI detection is built around the assumption that you're the marker, the report is an input, and the conversation is the actual integrity work.

Read the full compliance detail

The Compliance & Safety page has the full technical detail on how AI detection runs, what data is sent to the AI provider (no student names or identifiers), and what we do not do with the report. If you're assessing MarkMate for school procurement or your own classroom use, that's the page to read alongside this one.