I was in a group chat where the bro copied and pasted my question into ChatGPT, took a screenshot, and pasted it into the chat.

As a joke, I said that Gemini disagreed with the answer. He asked what it said.

I made up an answer and then said I did another fact check with Claude, and ran it through “blast processing” servers to “fact check with a million sources”.

He replies that ChatGPT 5.1 is getting unreliable even at the $200 a month model and considering switching to a smarter agent.

Guys - it’s not funny anymore.

  • ThomasWilliams@lemmy.world
    4 days ago

    From the context, it seems to be a question whose answer isn’t easily verifiable, so it is a question unsuitable for LLMs.

    Have you ever used a chatbot? The fact that an answer is unverifiable doesn’t stop them from answering at all.

    • cmhe@lemmy.world
      4 days ago

      Have you ever used a chatbot? The fact that an answer is unverifiable doesn’t stop them from answering at all.

      Yes, I’ve used chatbots. And yes, I know that they always manage to generate answers full of conviction even when wrong. I never said otherwise.

      My point is that the person using a chatbot/LLM needs to be able to easily verify whether a generated reply is right or wrong; otherwise it doesn’t make much sense to use an LLM, because they could have just researched the answer directly instead.