My original prompt was: “Please help me write a document on the difference between AI slop versus the real world and actual facts”
Take it for whatever it is, but even Google’s own AI literally says at the end that you basically shouldn’t trust it over your own critical thinking and research.
The document can also be found via the short link I made for it. I’m gonna leave this document online and otherwise unedited besides my addendum at the end of it.
https://docs.google.com/document/d/1o6PNCcHC1G9tVGwX6PlyFXFhZ64mDCFLV6wUyvYAz8E
https://tinyurl.com/googleaislopdoc
Edit: Apparently I can’t open the original link on my tablet, as it isn’t signed into Google, but the short link works and opens it up in the web browser (I’m using Fennec if that makes any difference for anyone).
Fuck AI, and fuck Google. I shouldn’t have to sign in to read a shared document…


Your post was very clear on what you were doing and how you were doing it. My question is why you were doing it, beyond the desire to see what it had to say, which is pretty much implied the moment you willfully prompt an LLM.
To get to the point I was trying to reach: as I’m sure you know, the output of an LLM is meant to reflect its training data (and any further data it searches for on the internet), largely through a statistical analysis of that data, all of it directed by your prompt.
Using the term “slop” pushes the LLM to give more weight to the parts of its data where the same word appears, and so on for the rest of your prompt.
The result is that what the LLM “thinks” is ultimately what humans have tended to write, barring the distortion from the randomness the designers introduce (the LLM’s “temperature”).
These “thoughts” are not based on an analysis of the actual truth behind the words we use, but rather on an analysis of what other words appear alongside the words you have put in your prompt.
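To make the “temperature” bit concrete, here is a minimal sketch (in Python, and in no way anything from Google’s actual stack) of temperature-scaled sampling: the model’s raw scores for candidate next tokens are turned into probabilities, and the temperature controls how flat that distribution is. The token names and scores below are made up purely for illustration.

```python
# Minimal sketch of temperature-scaled next-token sampling.
# The token names and logit values are invented for illustration;
# this is not how any particular model is actually implemented.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn raw scores (logits) into probabilities and sample one token index."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature  # temperature scaling
    probs = np.exp(scaled - scaled.max())                    # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy scores for three candidate next words: ["facts", "slop", "nonsense"]
logits = [2.0, 1.0, 0.1]
print(sample_next_token(logits, temperature=0.2))  # almost always index 0 ("facts")
print(sample_next_token(logits, temperature=2.0))  # much more random
```

At low temperature the sampler almost always picks the statistically dominant word; at high temperature it wanders, which is the distortion I mentioned above.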
In this case, what your efforts reveal is that the human discourse where the words in your prompt occur most often tends to talk about critical thinking, not trusting AI, and whatever else is included in the output you got.
In other words, this output is not even a reflection of the general credit and trust that humans do or don’t give to LLM outputs, but a reflection of what those of us who use the word “slop” have written on the subject. So basically you applied a “negative responses only” filter from the start, since “slop” is basically a slur at this point.
Based on my own observation of human discourse on the subject, I find the output a rather accurate reflection of what humans write when it comes to “slop” and actual “facts”. What I make of your results is that the LLM is not only working as intended, but has successfully and accurately given you what you asked of it: a clear and concise document summarizing what humans who have a negative opinion on the subject say about the supposed facts presented in LLM output.
If anything, having the LLM reply with something else would be a stronger indication of its untrustworthiness, since I’m pretty sure nobody writes anything along the lines of “AI slop gives us an accurate reflection of well-established facts and the real world” or “You should believe in AI slop, it’s all real-world facts”.
Rest assured that I remain more interested in what you have to say than in the output of an LLM, and I put a lot more trust in your ability to distinguish between what people generally say and actual facts.