I think what’s worse is that it’s destroyed our trust in one another, causing us to believe things are AI even when they’re not. I ran the article’s text through several AI detection services, and while those tools aren’t entirely reliable, they all say it’s 100% human written.
AI detectors are completely unreliable and useless. They are AI themselves, and rife with all the problems that other LLMs have.
I don’t disagree, hence my preface, but my initial point still stands regardless.
Too many times I’ve seen real artists accused of using AI when their work is clearly genuine. Likewise, I’ve seen AI output passed off as legit, with people applauding and heaping praise on those who don’t deserve it.
It’s all sad as hell.
AI detectors are very unreliable. One that OpenAI developed had a success rate of roughly 20%.
It’s funny that they got it backwards but honestly getting an 80% success rate by picking the opposite is more accurate detection than I would have expected.
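The arithmetic behind that quip checks out: for a binary classifier, inverting every prediction turns an accuracy of p into 1 − p, so a detector that's right only 20% of the time becomes 80% accurate if you flip its answers. A minimal sketch with made-up example data (the labels and predictions here are hypothetical, not from any real detector):

```python
# Hypothetical illustration: a binary detector that is wrong more often
# than chance becomes a useful one when its output is inverted.
# Labels: 1 = AI-written, 0 = human-written (made-up data).

truth       = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predictions = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]  # a detector that is usually wrong

accuracy = sum(p == t for p, t in zip(predictions, truth)) / len(truth)
flipped  = sum((1 - p) == t for p, t in zip(predictions, truth)) / len(truth)

print(accuracy)  # 0.2
print(flipped)   # 0.8
```

Of course, a real detector near 50% accuracy gains nothing from flipping; only a consistently wrong one does.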