OpenAI, the creator of ChatGPT, has recently acknowledged that the current crop of AI detectors is ineffective at distinguishing human-written from machine-generated text, and that ChatGPT itself cannot tell the difference either.
Teachers, students, and content creators, listen up! OpenAI has stated the ineffectiveness of AI detectors plainly in its FAQ section. “In short, no,” the company says when asked whether these tools can accurately distinguish between human and AI-generated content. So, if you’re relying on an “AI essay detector” or a “ChatGPT detector” to validate the authenticity of a text, you might want to reconsider.
AI detectors like GPTZero have been found to yield false positives, flagging human-written text as machine-generated. OpenAI even discontinued its own AI Text Classifier, which correctly identified AI-written text only 26% of the time.
ChatGPT, often considered a reliable source for generating human-like text, is also in the hot seat. The model has no “understanding” of whether a given piece of content was created by AI. Ask it “did you write this [essay]?” or “could this have been written by AI?” and it will sometimes fabricate an answer. These responses are essentially random and have no factual basis, further underscoring the limitations of AI detection.
Experts are now advocating a shift toward human experience and intuition rather than reliance on automated AI detection tools. A teacher familiar with a student’s writing, for instance, may notice if the student’s style or level of knowledge suddenly changes. Some sloppy attempts to pass off AI-generated text even carry tell-tale signs, like the stock phrase “as an AI language model,” indicating a mere copy-paste job.
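To be clear, catching those copy-paste giveaways is about the only check that works reliably, and it requires no AI at all. A minimal sketch of that idea, in Python: scan a text for obvious boilerplate phrases. The phrase list here is illustrative, not exhaustive, and this is emphatically not a real AI detector; it only catches the sloppiest cases.

```python
# Illustrative only: this flags crude copy-paste giveaways, not AI text
# in general. The phrase list is an assumption, not an official source.
TELL_TALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill that request",
    "i don't have personal opinions",
]

def has_telltale_phrase(text: str) -> bool:
    """Return True if the text contains an obvious AI boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELL_TALE_PHRASES)
```

A quick check: `has_telltale_phrase("As an AI language model, I cannot...")` returns `True`, while ordinary student prose would not trip it. Anything subtler than a verbatim giveaway, as OpenAI admits, is beyond current tools.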