Calibrating AI-writing detection across eight model families
Inside our 2026 calibration study: how we hold a sub-1% false-positive rate while precision rises with each new generation of frontier models.
Research, policy, and classroom practice from the people building Merituss and the educators putting it to work.
What to permit, what to disclose, what to forbid, and how to put it in writing without lawyering your students into silence.
A working vocabulary for integrity officers: matched text, borrowed phrasing, paraphrase, mosaic, and model-authored prose.
The April release notes, in plain English, with the policy implications for testing centres and remote cohorts.
A multi-campus diary study on real generative-AI behaviour, the gap between policy and practice, and what it means for assessment design.
A short note on tone: how the language of an integrity report changes whether students learn from it or hide from it.
A checklist for procurement: where student work flows, who can train on it, and what 'data residency' actually buys you.
No vendor newsletter noise. A single annotated study, policy note, or classroom artefact, with our editor's read on what it means for your institution.
"The best integrity policy is the one your faculty can read aloud."