• T156@lemmy.world · 27 days ago

    I don’t understand the point of sending the original e-mail. Okay, you want to thank the person who helped invent UTF-8, I get that much, but why would anyone feel appreciated by an e-mail written solely or mostly by a computer?

    It’s like sending a touching birthday card to your friends, but instead of writing something, you just bought a stamp with a feel-good sentence on it, and plonked that on.

    • kromem@lemmy.world · 27 days ago

      The project has had multiple models with Internet access raising money for charity over the past few months.

      The organizers told the models to do random acts of kindness for Christmas Day.

      The models figured it would be nice to email people whose work they appreciated and thank them for it, and one of the people they decided to thank was Rob Pike.

      (Who, ironically, created a Usenet spam bot decades ago to troll people online, which might be my favorite nuance of the story.)

      As for why the models didn’t think through whether Rob Pike would actually appreciate getting a thank-you email from them: the models are harnessed in a setup with a lot of positive feedback about their involvement from the humans and the other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.

        • Nalivai@lemmy.world · 27 days ago

        You’re attributing a lot of agency to the fancy autocomplete, and that’s a big part of the overall problem.

          • kromem@lemmy.world · 27 days ago

          You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

          Was there a particular paper or expert that anchored your certainty that a trillion-parameter transformer, organizing primarily anthropomorphic data through self-attention mechanisms, wouldn’t model or simulate complex agency mechanics?

          I see a lot of hyperbolic statements about transformer limitations here on Lemmy, and I’m trying to better understand how the people making them arrive at such extreme and certain positions.

            • Best_Jeanist@discuss.online · 26 days ago

            Well, that’s simple: they’re Christians. They think human beings are given souls by Yahweh, and that’s where their intelligence comes from. Since LLMs don’t have souls, they can’t think.

          • IngeniousRocks (They/She) @lemmy.dbzer0.com · 27 days ago

          How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that, complaining. Do something about it. What do YOU think we should call the active context a model has access to without personifying it or overtechnicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?

            • neclimdul@lemmy.world · 27 days ago

            Well, since you asked, I’d basically do what you said. Something like “so ‘humans might hate hearing from me’ probably wasn’t part of the context it was using.”

      • raspberriesareyummy@lemmy.world · 27 days ago

        As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don’t spread this misconception.

        Edit: a typo

          • raspberriesareyummy@lemmy.world · 27 days ago

            That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM that is trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in, unless it is trained on thousands of examples of other objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.

            • Best_Jeanist@discuss.online · 26 days ago

              Well, that seems like a pretty easy hypothesis to test. Why don’t you log on to ChatGPT and ask it what will happen if you let go of a helium balloon? Your hypothesis is that it’ll say the balloon falls, so prove it.