Installing DeepSeek-R1:14b Locally

Without getting too far off topic, and without wanting to invoke a cliché - I’ve come to accept that this is a journey, not a destination. I’ve been seriously working towards migrating to non-big-tech alternatives for around 2 1/2 years now, and it will be ongoing for some time yet. It also involves compromise. For example, Organic Maps is a terrific Google Maps alternative and supports every feature I need, but doesn’t have live traffic updates. It’s inconvenient, but for my use case it merely means “if it’s peak hour, leave 10 minutes earlier”. I’ll still get excellent-quality offline maps and full turn-by-turn navigation instructions while driving. Overall, I’ve found that deGoogling/deAmazoning/deMicrosofting/etc. still lets me use all the wonderful modern innovations; I just have to think a bit harder about what goal I’m actually trying to achieve.

To come back to the AI discussion: if your goal is a local LLM because you value privacy above all else, don’t be afraid to try a smaller model on an older local machine or VM. I’ve found that knowing a response might take 10 minutes actually forces me to be more mindful about my prompts. For example, “What day has the fewest public holidays?” might become “I want to set up a regular weekly bank transfer between NSW and Queensland. Over the next 30 years, which day of the week has the fewest Australian national and Queensland/NSW state-based public holidays, to ensure that this transfer is not delayed?”. I’ll get the latter answered in one go, whereas the former will likely involve several rounds of back and forth between the LLM and me, probably with 5+ minutes between each response.

If your goal is to tinker for a bit, an older machine or VM might suit, as would spinning up an AI-optimised VPS for a day or two to play. If your goal is quick responses from bleeding-edge models with access to public internet sources, sign up to one of the commercial services on a free or low-end paid account. If your goal is offline portability, then a beefy laptop and/or eGPU might fit the bill. Better yet, a combination of a few different options might work well together, depending on what you’re trying to achieve.

Second this - my 13-year-old HP machine (3rd-gen Intel) with a GTX 1050 is quite capable. Don’t be put off - if your machine can load the model into RAM, it can work with it using Linux and Ollama just fine.
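
For anyone wanting to try the same setup, the usual route is Ollama’s install script (`curl -fsSL https://ollama.com/install.sh | sh`), then `ollama run deepseek-r1:14b` to pull the model and chat at the terminal. Once the server is up, you can also script against it. Here’s a minimal Python sketch that queries Ollama’s local REST API - it assumes Ollama is serving on its default port (11434) and that you’ve already pulled `deepseek-r1:14b`; swap in a smaller tag if your RAM is tight.

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes `ollama serve` is running on the default port (11434) and that
# the deepseek-r1:14b model has already been pulled. Standard library only,
# so there's nothing extra to install.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def ask(prompt: str, model: str = "deepseek-r1:14b") -> str:
    """Send a single prompt and block until the full response arrives."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # wait for the complete answer instead of streaming tokens
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["response"]

if __name__ == "__main__":
    # On older hardware this can take minutes, so make the prompt count.
    print(ask("In one paragraph, explain what quantisation means for local LLMs."))
```

Setting `stream` to false keeps the example simple; on slow hardware you may prefer Ollama’s default streaming mode so you can watch tokens arrive rather than staring at a blank terminal for minutes.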

Cheers