A new tool from the anti-plagiarism platform Crossplag can supposedly identify texts from AI.
AI text tools like ChatGPT are designed to generate fluent texts on topics that have already been described in detail elsewhere. Or to put it another way: ChatGPT and Co. are text automation tools that pose new challenges to the educational system.
The era of “aigiarism”
Crossplag is a platform specializing in detecting academic plagiarism. In AI text generators, it has found a new final boss: micro-plagiarism that compiles millions of existing texts into new text on par with original content. Whether that output even counts as original is open to debate.
In light of the new threat, Crossplag proclaims the age of AI plagiarism, calling it “aigiarism”. AI text could be the Achilles heel of academic integrity, the Crossplag team speculates. The term aigiarism was previously mentioned by Mike Waters on Twitter.
“ChatGPT started to have immediate negative implications in academia as it was found to be effective in helping students generate content without doing any work.”

Crossplag
As a countermeasure, Crossplag introduces a free AI content detector: it’s designed to detect if a text is from a machine or a human – or if both worked on it.

Even with extensive adjustments to an AI text, the tool can still “find paths to AI-generated content,” the platform writes. The beta version only works for English texts and has not yet been released for institutional use. It can analyze up to 1,000 words at a time.
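The 1,000-word cap means longer documents have to be checked in pieces. A minimal sketch of what such client-side chunking could look like (the function name and limit handling are my assumptions, not Crossplag's API):

```python
def chunk_words(text: str, max_words: int = 1000) -> list[str]:
    """Split text into chunks of at most max_words whitespace-separated words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

# Example: a 2,500-word document yields three chunks of 1000, 1000, and 500 words.
doc = " ".join(f"w{i}" for i in range(2500))
chunks = chunk_words(doc)
print([len(c.split()) for c in chunks])  # [1000, 1000, 500]
```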
Use AI against AI
For AI content detection, Crossplag uses a RoBERTa model fine-tuned on a GPT-2 output dataset; the detector targets text from large transformer-based language models such as GPT-3.5, the model behind ChatGPT.
The AI system is said to identify “human patterns and irregularities” in texts. Other clear indications are repetitive language and inconsistencies in tone. I have requested data on the reliability of the system and will update the article when I have it.
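Repetitive language of the kind mentioned above can be approximated with a simple statistic. The toy heuristic below is purely my illustration, not Crossplag's method: it measures what fraction of word trigrams in a text are repeats, on the assumption that formulaic machine output tends to score higher than varied human prose:

```python
from collections import Counter


def repeated_trigram_ratio(text: str) -> float:
    """Fraction of word trigrams that occur more than once (0.0 = no repetition)."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)


# "sat on the" occurs twice, so the ratio is above zero.
print(repeated_trigram_ratio("the cat sat on the mat and the dog sat on the rug"))
```

A real detector would combine many such signals with a learned classifier rather than rely on any single threshold.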
Additionally, the tool is supposed to benefit from its integration with the Crossplag plagiarism platform, since AI “most often” generates text from existing sources. The platform does not describe exactly how this works.
The issue of transparency in AI-generated media is likely to be with us for some time, despite tools like the AI content detector. It is inherent to generative AI: beyond text, there are plagiarism debates around AI-generated images, and soon perhaps around videos or 3D models.
OpenAI, the company behind ChatGPT, is experimenting with an embedded watermark in AI text to enable unique identification. The company plans to present the detection system in a scientific paper next year. China will ban AI-generated media without labeling from early 2023.
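The basic idea behind such a watermark is to bias the generator's token choices pseudorandomly with a secret key, so that a detector holding the same key can verify the bias statistically. The toy sketch below is entirely my illustration (the hashed "key", word-level granularity, and the 0.5 baseline are assumptions; real schemes operate on model tokens and logits): it scores what fraction of words fall into a keyed "green" set, which hovers near 50% for ordinary text but would be pushed far higher by a keyed generator:

```python
import hashlib


def is_green(word: str, key: str = "secret-key") -> bool:
    """Deterministically assign roughly half of all words to a keyed 'green' set."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(text: str, key: str = "secret-key") -> float:
    """Fraction of words in the keyed green set; ~0.5 for unwatermarked text."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w, key) for w in words) / len(words)
```

A watermarking generator would prefer green words at each step; the detector then flags texts whose green fraction is significantly above chance level.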