doodledup@lemmy.world to Technology@lemmy.world • Meta addresses AI hallucination as chatbot says Trump shooting didn't happen (English)
4 months ago

AI doesn't know what's wrong or correct. It hallucinates every answer; it's up to the human supervisor to determine whether a given answer is right or wrong.
Mathematically verifying the correctness of these algorithms is a hard problem. That's not an oversight; it's the accepted trade-off for their incredible efficiency.
Besides, a model can only "know" what it has been trained on. It shouldn't be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn't know how to use these models.
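
To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library and GPT-2 purely as an illustrative stand-in, neither of which is mentioned in the article): the model only produces a probability distribution over the next token, learned from its training data. Nothing in the loop checks whether a continuation is factually true, and events after the training cutoff simply aren't represented at all.

```python
# Minimal sketch, assuming Hugging Face "transformers" and GPT-2 as a stand-in
# model (not whatever Meta actually deploys).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The shooting at the Trump rally took place on"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]      # scores for the next token only
probs = torch.softmax(logits, dim=-1)      # a probability distribution, nothing more

# Candidates are ranked by how likely they were under the training data,
# not by whether they are factually correct; that judgment has to come
# from outside the model.
values, indices = probs.topk(5)
for p, tok in zip(values, indices):
    print(f"{tokenizer.decode([int(tok)])!r}  p={float(p):.4f}")
```

Swapping in a bigger model changes the quality of that distribution, not its nature: the sampler still has no built-in notion of true versus false.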
Human beings are not infallible either.