
Last week, OpenAI published a blog post promoting ChatGPT as an educational aid, with teachers sharing their experiences and tips for using it in the classroom. Alongside it, the company posted an FAQ in which it openly acknowledged that AI writing detectors do not work, even though they are often used to unjustly punish students.
In the FAQ, OpenAI states plainly that AI detectors cannot reliably distinguish AI-generated text from human-written text. The company has tried to build such tools itself, and the results were riddled with false positives. It discontinued one of them, the AI Classifier, because of its poor performance, citing an accuracy rate of only 26 percent.
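To see why numbers like that make a tool unusable in a classroom, it helps to run the arithmetic. The sketch below uses the 26 percent figure from the FAQ as a true-positive rate; the false positive rate and the class composition are hypothetical values chosen purely for illustration, not measurements from OpenAI.

```python
# Back-of-envelope illustration (not OpenAI's methodology): what a 26%
# detection rate means in practice. The false positive rate and class
# makeup below are hypothetical, chosen only to make the math concrete.

true_positive_rate = 0.26   # share of AI-written essays flagged (figure cited by OpenAI)
false_positive_rate = 0.09  # share of human-written essays wrongly flagged (assumed)

essays = 100                # hypothetical class size
ai_written = 20             # hypothetical number of essays actually written with AI
human_written = essays - ai_written

caught = ai_written * true_positive_rate               # AI essays the tool flags
missed = ai_written - caught                           # AI essays it waves through
falsely_accused = human_written * false_positive_rate  # honest students flagged

flagged = caught + falsely_accused
print(f"Flagged essays:            {flagged:.1f}")
print(f"  actually AI-written:     {caught:.1f}")
print(f"  falsely accused humans:  {falsely_accused:.1f}")
print(f"AI essays that slip past:  {missed:.1f}")
```

Under these assumed numbers, roughly 58 percent of flagged essays (7.2 of 12.4) would be false accusations, while about three-quarters of the AI-written essays would go undetected.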
There is a related point worth underlining: ChatGPT itself has no knowledge of whether a given piece of text was written by AI. Asked whether it wrote something, or whether something could have been AI-generated, it will sometimes simply make up an answer. Those responses are essentially random and have no basis in reality.
OpenAI also acknowledges that ChatGPT can confabulate false information, which the company calls a "hallucination." Its output can sound plausible while being incorrect or misleading, and it has been known to invent quotes and citations, so it should not be relied on as a primary source for research.
Recall the lawyer who was sanctioned for citing non-existent court cases: those fabricated citations came from ChatGPT.
Even though automated AI detectors fail, humans can still sometimes spot AI writing. A teacher familiar with a student's typical writing style will notice a sudden change in voice or ability. Clumsy attempts to pass off AI-generated work as human-written also leave obvious tells, such as the phrase "as an AI language model" accidentally left in the text. Recently, an article in Nature described how readers spotted the phrase "Regenerate response" in a published scientific paper; it is the label of a button in the ChatGPT interface.
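Tells like those are simple enough that a script can flag them, though this is not AI detection in any meaningful sense; it only catches the same careless copy-and-paste artifacts the human readers above noticed. A minimal sketch, with the function name and sample text invented for illustration:

```python
# A trivial scan for copy-paste artifacts like the ones described above.
# This is not an AI detector; it only surfaces leftover chatbot interface
# text or boilerplate that someone forgot to delete.

TELLTALE_PHRASES = [
    "as an ai language model",
    "regenerate response",  # label of a button in the ChatGPT interface
]

def find_chatbot_residue(text: str) -> list[str]:
    """Return any telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = "...and therefore the results are significant. Regenerate response"
print(find_chatbot_residue(sample))  # ['regenerate response']
```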
For now, the safest course is to avoid automated AI detection tools entirely. As of today, AI writing is essentially undetectable and is likely to remain so, and the detectors' false positive rates are high enough that they are not worth using, AI analyst and professor Ethan Mollick told Ars. In the meantime, spotting AI writing is a job best left to humans.