• partial_accumen@lemmy.world
    15 hours ago

    Tried the same thing in Asahi but without macOS’ memory management and access to GPU acceleration, it just wasn’t feasible.

    Thank you for sharing this result. I knew Asahi’s memory management wasn’t as robust as macOS’s (so I got a 24GB RAM M2 unit to compensate).

    For your macOS Ollama setup, are you able to leverage the NPU in the hardware (which I know is also unavailable so far in Asahi)?

    • djdarren@piefed.social
      10 hours ago

      I actually have no idea how it all works. It just does.

      Asahi is incredible for general-use computing on M1/M2 machines, and perhaps even as a general-purpose home server. But in my opinion it’s still very much a fun exercise in what might be possible rather than a solid option.