Because that’s what we want: to open up classified documents to an AI controlled by Elon, of all people :/

  • Skyrmir@lemmy.world · 4 days ago

    By definition, LLMs need massive external input in order to improve, so they can’t really be disconnected. Top that off with the fact that they’re only useful when you can interact with them from many, often remote, locations, and there’s just no way to keep them secure. They need massive communication to accomplish anything useful, and there’s no real way to keep massive communication secure.

    • raccoon@sh.itjust.works · 3 days ago

      You don’t have to improve them out in the field. Just collect metrics on their behavior, train a central model on that data, then upgrade the local model on each unit when it’s brought in for maintenance. I’m simplifying, of course. And terrified.

      • Skyrmir@lemmy.world · 3 days ago

        That’s the thing: the local models aren’t going to have the processing power to really beat deterministic systems. So you’re going to need comms at some point to get any kind of edge. Otherwise dumb systems get the job done for pennies, while you’re having to produce high-end chips to handle local processing and back-end training, just to significantly outperform a garage door trip light. Yes, an exaggerated comparison, but the point holds.
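
The offline update loop raccoon describes (log metrics in the field, retrain a central model offline, then reflash units when they come in for maintenance) can be sketched roughly like this. Every class and method name here is hypothetical, and the "training" step is a version-bump stand-in for a real training run:

```python
# Hypothetical sketch of a disconnected fleet-update cycle: units log
# behavior locally with no comms, a central trainer ingests those logs
# at maintenance time, and the unit leaves with the new model version.
from dataclasses import dataclass, field

@dataclass
class Unit:
    unit_id: str
    model_version: int = 0
    logged_metrics: list = field(default_factory=list)

    def operate(self, observation: float) -> None:
        # Field operation: no network, just append to the local log.
        self.logged_metrics.append(observation)

@dataclass
class CentralTrainer:
    version: int = 0
    training_data: list = field(default_factory=list)

    def ingest(self, unit: Unit) -> None:
        # Maintenance step 1: pull the unit's logs into the training set.
        self.training_data.extend(unit.logged_metrics)
        unit.logged_metrics.clear()

    def retrain(self) -> None:
        # Maintenance step 2: stand-in for retraining on collected data.
        self.version += 1

    def upgrade(self, unit: Unit) -> None:
        # Maintenance step 3: flash the latest model onto the unit.
        unit.model_version = self.version

unit = Unit("drone-01")
for reading in (0.2, 0.7, 0.4):
    unit.operate(reading)

trainer = CentralTrainer()
trainer.ingest(unit)   # logs collected while the unit is in the shop
trainer.retrain()      # central model improved fully offline
trainer.upgrade(unit)  # unit goes back out with version 1
```

The point of the pattern is that no link exists during operation; the only data exchange happens physically, at maintenance, which is exactly the trade Skyrmir's reply pushes back on.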