Assessment providers are facing a growing problem. Candidates are now using AI tools to produce work that sounds polished and professional, even when the ideas behind it may not be their own. That makes it harder to tell whether a candidate genuinely understands the subject or is simply adept at prompting generative tools. For the companies that set and mark qualifications, this is a system-wide issue that needs a scalable solution.
Some educators are returning to oral exams to sidestep the problem. Speaking live makes it much harder to rely on AI-generated answers. However, these assessments are expensive to run, difficult to scale, and introduce risks of their own. Not every learner performs well under pressure, and live questioning can create a barrier for people who would otherwise demonstrate strong subject knowledge in writing.
To solve this at scale, digital assessment firms are developing tools that go beyond traditional marking. Rather than judging only how well a piece is written, they examine how its content appears to have been put together. These systems analyse the structure of the argument, the logic behind the ideas, and the originality with which concepts are applied. This helps examiners identify work that is overly generic or formulaic, a common trait of AI-assisted writing, even when the grammar and tone are flawless.
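To make the idea of "formulaic" writing concrete, here is a minimal, purely illustrative sketch in Python. It is not RM plc's system or any vendor's actual detector; the signals used (lexical diversity, sentence-length uniformity, and the density of stock phrases) and the weights combining them are assumptions chosen only to show, in principle, how such a flag might be computed.

```python
# Illustrative only: a toy heuristic for flagging formulaic writing.
# This is NOT any vendor's actual method; the signals, phrase list,
# and weights below are assumptions chosen purely for demonstration.
import re
from statistics import pstdev

STOCK_PHRASES = [  # hypothetical list of boilerplate connectives
    "in conclusion", "it is important to note", "plays a crucial role",
    "in today's world", "delve into", "a myriad of",
]

def formulaic_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    # Low lexical diversity -> repetitive, template-like vocabulary
    diversity = len(set(words)) / len(words)
    # Very uniform sentence lengths can suggest a machine-generated cadence
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    uniformity = 1.0 / (1.0 + pstdev(lengths)) if len(lengths) > 1 else 1.0
    # Density of stock transition phrases per 100 words
    stock = sum(text.lower().count(p) for p in STOCK_PHRASES) / len(words) * 100
    # Combine the three signals into a rough 0-1 score (weights are arbitrary)
    return min(1.0, 0.4 * (1 - diversity) + 0.3 * uniformity + 0.3 * min(stock, 1.0))

if __name__ == "__main__":
    sample = ("In conclusion, it is important to note that assessment plays a "
              "crucial role in education. In today's world, assessment plays a "
              "crucial role in learning outcomes.")
    print(f"formulaic score: {formulaic_score(sample):.2f}")
```

A production system would combine far richer linguistic and structural features, and would support human examiners rather than replace them, but the sketch shows the kind of surface signal that distinguishes generic prose from original argument.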
RM plc (LON:RM) is a global EdTech provider of learning and assessment solutions, supporting the full learning journey, from early years through to higher education and professional qualifications.