My original prompt was: “Please help me write a document on the difference between AI slop versus the real world and actual facts”
Take it for whatever it is, but even Google’s own AI literally says at the end not to trust it over your own critical thinking and research.
The document can also be found via the short link I made for it. I’m gonna leave this document online and otherwise unedited besides my addendum at the end of it.
https://docs.google.com/document/d/1o6PNCcHC1G9tVGwX6PlyFXFhZ64mDCFLV6wUyvYAz8E
https://tinyurl.com/googleaislopdoc
Edit: Apparently I can’t open the original link on my tablet, as it isn’t signed into Google, but the short link works and opens it up in the web browser (I’m using Fennec if that makes any difference for anyone).
Fuck AI, and fuck Google. I shouldn’t have to sign in to read a shared document…


Oh, now I see which comment you were replying to. No, no, you missed what I was referring to when I mentioned the Cambridge Dictionary hallucination above. That part has nothing to do with the Google AI article; it’s literally about the Cambridge Dictionary website itself.
Follow the link above to their definition, scroll down to the related terms, and ask yourself: why in the hell is the word ‘peanut’ listed as a related term for ‘nuanced’?
Are they using AI to generate definitions for the Cambridge Dictionary? That’s about the only way I can see ‘peanut’ somehow creeping into the related words, as a hallucination…
I wasn’t trying to get it to agree or disagree with me; I just wanted to see what it would say about AI technology in general. And in the very article it wrote, it admitted that hallucinations are known to happen in AI results and that you shouldn’t inherently trust it over your own research.