• Shimitar@downonthestreet.eu · 10 points · 23 hours ago

    I plugged an NVIDIA GPU into my server, enabled ollama to use it, diligently updated my public wiki about it, and am now enjoying real-time gpt-oss model responses!

    I was amazed: response time dropped from 3–8 minutes to seconds. I have an Intel Core 7 with 48 GB of RAM, but even an oldish GPU beats the crap out of it.

      • Shimitar@downonthestreet.eu · 1 point · 15 hours ago

        From lspci:

        NVIDIA Corporation GA104GL [RTX A4000] (rev a1)

        It has 16 GB of VRAM, not that much but enough to run gpt-oss 20b and a few other models pretty nicely.

        I noticed that it’s better to stick to a single model; I imagine unloading and reloading a model in VRAM takes time.
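
        If you mostly use one model, ollama can be told not to evict it: by default it unloads an idle model after about five minutes, but the `OLLAMA_KEEP_ALIVE` environment variable (or the `keep_alive` field on API requests) changes that. A minimal sketch, assuming a systemd-managed ollama service and the gpt-oss:20b model mentioned above:

        ```shell
        # Keep loaded models resident in VRAM indefinitely (-1) instead of
        # the default ~5 minute idle timeout. For a systemd-managed ollama:
        sudo systemctl edit ollama
        # ...then add in the override file:
        #   [Service]
        #   Environment="OLLAMA_KEEP_ALIVE=-1"
        sudo systemctl restart ollama

        # Or per request, via the API's keep_alive parameter:
        curl http://localhost:11434/api/generate -d '{
          "model": "gpt-oss:20b",
          "prompt": "hello",
          "keep_alive": -1
        }'
        ```

        Either way, the first request after a restart still pays the load cost once; after that the model stays in VRAM until you stop the service or load a different model.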

    • mierdabird@lemmy.dbzer0.com · 2 points · 22 hours ago

      In that same vein, I got an AMD Radeon Pro V620 32GB off eBay and have been struggling to get it to POST on my X570 motherboard. I finally tried it on my old ASUS B450-I with a Ryzen 5 2400GE, and with a few BIOS setting changes it fired right up.

      Now I need to figure out what I’m doing wrong on the X570 board so I can run the V620 alongside my 9060 XT for bigger models.