Turnitin’s new AI bypasser detection draws scrutiny from academics and early testers
Following Turnitin’s launch of its AI bypasser detection feature, educators and researchers are questioning its transparency and effectiveness, while the company emphasizes ongoing refinement of its detection models.
Turnitin, a provider of plagiarism detection and academic integrity tools, recently introduced AI bypasser detection. The feature is designed to identify text modified by “humanizer” tools that disguise AI-generated writing to evade detection.
The update extends Turnitin’s existing AI writing detection capability, which is widely used by universities and colleges as part of originality checks, feedback, and assessment workflows.
Annie Chechitelli, Chief Product Officer at Turnitin, describes humanizer services as “a new category of cheating providers,” arguing that such tools undermine trust between students and educators.
Concerns raised on LinkedIn
Reactions to the update have been mixed. On LinkedIn, Dr. Mark A. Bassett, Associate Professor and Academic Lead (Artificial Intelligence) at Charles Sturt University, welcomed the move toward transparency but called for greater scrutiny.
“We’re gonna need the following: public access to the testing algorithm or API for independent verification; test results using standardized, benchmark datasets; independent third-party evaluation with reproducible methods; versioned technical reports detailing the testing methodology; statements of data provenance and consent for training/testing data,” Bassett writes.
Other academics and practitioners expressed doubts about effectiveness. Laura Dumin, Professor at the University of Central Oklahoma, comments: “And the game of whack-a-mole continues.” Sam Doherty, an education-focused AI specialist, adds: “There is no chance this will work with any level of useful effectiveness.”
YouTube testing shows mixed results
Further scrutiny came from Tadhg Blommerde, Assistant Professor at Northumbria University, who shared a YouTube video testing Turnitin’s bypasser detection against several popular humanizer tools, including GPT Human, Refrazy, StealthGPT, StealthWriter, Groby, and Easy Essay.
The results highlighted inconsistencies across tools. For GPT Human, Turnitin produced a 31% likely AI score. Refrazy and StealthWriter returned "unknown" results, with scores between 1% and 19% that did not specify which parts of the text were flagged. StealthGPT, one of the most widely used tools, saw the largest shift, rising from 0% to 72% likely AI, while Groby increased from 0% to 67%. Easy Essay, by contrast, remained unchanged at 0%.
Blommerde commented: "Apparently Turnitin can now identify when writing is AI generated, AI paraphrased, and AI humanized. I'm not sure if I believe that." He added that the results show progress but remain far from definitive: "The new AI bypasser detector is an improvement, but it's not perfect."
He also highlighted the system’s lack of clarity, noting that all flagged content was still simply labeled as “AI generated.” According to Blommerde, this approach risks conflating different forms of modified writing and leaves educators with limited information: “Totally accurate AI detection is a myth. If this feature worked well, every single document we tested would have got a 100% likely AI score.”
Concluding his review, Blommerde warned that Turnitin’s update may not resolve the fundamental challenge of AI detection: “I think everything Turnitin have done with AI detection has been about choice. AI detection is a waste of time, and they’re falling into that cat and mouse game between AI detection organizations and AI humanizer organizations that everyone expected from the beginning.”
Turnitin’s response
Turnitin issued the following statement to ETIH in response to the update and subsequent debate:
“Turnitin has updated its AI detection model to identify leading bypassers and writing altered to mask AI origins. While no AI system can achieve zero false positives, our models are continuously researched, tested, and refined to adapt to new large language models and bypassing techniques. Our tools provide insights that help instructors focus their time and energy on what matters most, teaching and guiding students, while keeping human judgment essential in determining originality and integrity.”
The ETIH Innovation Awards 2026
The EdTech Innovation Hub Awards celebrate excellence in global education technology, with a particular focus on workforce development, AI integration, and innovative learning solutions across all stages of education.
Now open for entries, the ETIH Innovation Awards 2026 recognize the companies, platforms, and individuals driving transformation in the sector, from AI-driven assessment tools and personalized learning systems, to upskilling solutions and digital platforms that connect learners with real-world outcomes.
Submissions are open to organizations across the UK, the Americas, and internationally. Entries should highlight measurable impact, whether in K–12 classrooms, higher education institutions, or lifelong learning settings.
Winners will be announced on 14 January 2026 as part of an online showcase featuring expert commentary on emerging trends and standout innovation. All winners and finalists will also be featured in our first print magazine, to be distributed at BETT 2026.
To explore categories and submit your entry, visit the ETIH Innovation Awards hub.