• cecilkorik@piefed.ca
    9 days ago

    It’s built in layers, and the layers that are improving are not the LLMs themselves but the layers that mediate between the user and the LLM, which creates the illusion that the LLMs are improving. They’re not. TropicalDingdong knows what they’re talking about; you should listen to them.

    If you keep improving the layers between the LLM and the user long enough, you’ll end up with something we traditionally called a “software program” that is optimized for accomplishing a task, and you won’t need an LLM much, if at all.

    • pixxelkick@lemmy.world
      9 days ago

      You’ve gotta be living under a rock if you don’t think the models themselves have been improving over the last year, lol.

      We are bumping into a log-scale problem where people aren’t fully grasping how big a difference going from an x% error rate to a y% error rate makes in actual practice where it matters.
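
      The comment leaves x and y unspecified, but the underlying arithmetic can be sketched: on a multi-step task, a modest drop in per-step error rate compounds into a large change in end-to-end success. The error rates and step count below are assumed for illustration, not figures from the thread.

      ```python
      def task_success_rate(per_step_error: float, steps: int) -> float:
          """Probability of finishing `steps` independent steps with no error."""
          return (1 - per_step_error) ** steps

      # Hypothetical 20-step task with three assumed per-step error rates.
      for err in (0.10, 0.05, 0.01):
          rate = task_success_rate(err, steps=20)
          print(f"{err:.0%} per-step error -> {rate:.1%} task success")
      ```

      Under these assumptions, halving the per-step error rate roughly triples the chance of completing the whole task, which is why a small-looking benchmark gain can matter a lot in practice.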