

It runs pretty well. I didn't notice a speed difference between it and DeepSeek's web chat. I haven't used it for anything big; I'm primarily trying to stay current with the technology so I know what I'm talking about during job interviews.




No problem. My desktop has an Nvidia RTX 3050 card that has 8GB of VRAM on it. It's a basic, modern-ish video card. Ollama is an open-source framework for running large language models. The model I'm using is Qwen 2.5. It has 3 billion (3B) parameters (basically the size of the LLM). Docker is a program that basically lets you run smaller dedicated computers inside your computer.
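If you want to try the same setup, here's a rough sketch based on Ollama's official Docker image; the container name and volume name are arbitrary choices, and `--gpus=all` assumes you have the NVIDIA Container Toolkit installed so Docker can see the GPU:

```shell
# Start the Ollama server in a container, persisting downloaded models
# in a named volume and exposing the default API port (11434).
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Pull and chat with the 3B-parameter Qwen 2.5 model inside the container.
docker exec -it ollama ollama run qwen2.5:3b
```

A 3B model at default quantization fits comfortably in 8GB of VRAM, which is why it runs smoothly on a card like the RTX 3050.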
I am not in China. I’m an American living in Albania. I recommended DeepSeek because it’s free, works well, and if a company is going to have the information on what you’re chatting about, it might as well be one that isn’t in the same country as you.


I'm running Ollama with qwen2.5:3b in Docker on an RTX 3050 with 8GB. I also use DeepSeek.
You can check out https://aihorde.net/