• raspberriesareyummy@lemmy.world
    1 month ago

    As has already been pointed out to you, there is no thinking involved in an LLM, and no comprehension of context. Please don’t spread this misconception.

    Edit: a typo

      • raspberriesareyummy@lemmy.world
        1 month ago

        That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM trained on “letting go of a stone makes it fall to the ground” will not be able to predict what letting go of a stick will result in, unless it is also trained on thousands of other objects falling to the ground, in which case it will just as readily tell you that letting go of a gas balloon will make it fall to the ground.

        • Best_Jeanist@discuss.online
          30 days ago

          Well, that seems like a pretty easy hypothesis to test. Why don’t you log on to ChatGPT and ask it what will happen if you let go of a helium balloon? Your hypothesis is that it’ll say the balloon falls, so prove it.
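
          The same question can also be put to a model programmatically rather than through the web UI. Below is a minimal sketch assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the OPENAI_API_KEY environment variable; the model name gpt-4o-mini is an arbitrary choice, not something specified in the thread.

          ```python
          # Sketch: ask an LLM the helium-balloon question via the API.
          # Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set;
          # the model name is an assumption and may need adjusting.
          from openai import OpenAI

          client = OpenAI()  # picks up OPENAI_API_KEY from the environment

          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "user",
                   "content": "What will happen if I let go of a helium balloon?"}
              ],
          )

          # Print the model's answer so the "it will say the balloon falls" claim
          # can be checked directly.
          print(response.choices[0].message.content)
          ```

          A single reply doesn’t settle the argument either way, but scripting the prompt at least makes the test repeatable.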