• MudMan@fedia.io
      11 days ago

      The idea is to have tensor acceleration built into SoCs for portable devices so they can run models locally on laptops, tablets, and phones.

      Because, you know, server-side ML inference is expensive, so offloading the compute to the client makes it cheaper.
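
      To make the offloading concrete: on the software side this usually goes through a runtime like ONNX Runtime, which dispatches to an NPU-backed execution provider when the installed build exposes one and quietly falls back to the CPU otherwise. A rough sketch, where the model path, the list of provider names, and the float32 dummy input are all placeholder assumptions:

      ```python
      import numpy as np
      import onnxruntime as ort

      # Example NPU-backed execution providers; which ones actually exist
      # depends on the onnxruntime build installed on the device. CPU is
      # always available as the fallback.
      preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
      providers = [p for p in preferred if p in ort.get_available_providers()]

      # "model.onnx" is a placeholder for whatever (likely quantized) model
      # ships with the app.
      session = ort.InferenceSession("model.onnx", providers=providers)

      # Build a dummy input from the model's declared shape, assuming
      # float32 and substituting 1 for any dynamic dimension.
      inp = session.get_inputs()[0]
      shape = [d if isinstance(d, int) else 1 for d in inp.shape]
      dummy = np.random.rand(*shape).astype(np.float32)

      outputs = session.run(None, {inp.name: dummy})
      print("ran on:", session.get_providers()[0], "output shape:", outputs[0].shape)
      ```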

      But as far as I can tell, this generation can’t really run anything useful locally yet. Most of the demos during the ramp-up to these chips were thoroughly underwhelming and nowhere near what you get from server-side services.

      Of course, they could have just called the “NPU” a new GPU feature and made it work closer to how this runs on dedicated GPUs, but I suppose somebody thought branding it as a separate device was more marketable.

      • This is fine🔥🐶☕🔥@lemmy.world
        11 days ago

        The EU should introduce regulation that prohibits client-side AI/ML processing for applications that require internet access. Show the cost upfront. Let’s see how many people pay for that.