Running DeepSeek R1 locally is surprisingly simple, thanks to LM Studio. If you're reading this, you're probably already aware that there are some major privacy concerns with using DeepSeek's app or website. If you want to harness the power of this model on your own machine and keep your data as private as possible, follow these steps.
Why Run DeepSeek Locally?
There are three main reasons to run DeepSeek locally rather than over the internet:
Offline Access: Once the model is downloaded, no internet connection is needed.
Privacy: Your data stays on your machine.
No API Limits: Use the model without rate limits or usage caps.
Step 1: Download LM Studio
First things first, head over to the LM Studio website and get the right version for your operating system (Windows, macOS, or Linux). Install it like you would any other software.
Step 2: Open LM Studio and Find the DeepSeek R1 Model
Once installed, fire up LM Studio and navigate to the Discover tab. This is where you’ll find a variety of AI models ready for local use.
Now, search for DeepSeek R1 Distill Qwen 7B or DeepSeek R1 Distill Llama 8B in the model list. These are smaller, distilled versions of the DeepSeek R1 model optimized for local use.
Step 3: Download the Model
Click on your preferred DeepSeek R1 version and download it. The file runs to several gigabytes, so depending on your internet speed, grab a coffee or a 40 oz beer while it downloads.
Step 4: Load the DeepSeek Model
Once the download is complete, go to the My Models tab inside LM Studio. Locate DeepSeek R1, click on it, and select Load Model.
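Optionally, you can confirm the model loaded from outside the GUI. LM Studio can expose a local OpenAI-compatible server (enable it from the Developer tab; it listens on port 1234 by default). Here's a minimal Python sketch, assuming that server is running and you have the `requests` package installed:

```python
import requests

# Ask LM Studio's local OpenAI-compatible server which models it has available.
resp = requests.get("http://localhost:1234/v1/models")
resp.raise_for_status()

for model in resp.json()["data"]:
    print(model["id"])  # your DeepSeek R1 download should appear here
```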
Step 5: Start Chatting Locally with DeepSeek!
With the model loaded, hit New Chat, type in your prompt, and press Send. That’s it! You now have DeepSeek R1 running locally, ready to answer your questions with detailed reasoning.
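Chatting in the LM Studio window isn't your only option, either. Because the local server (see the note in Step 4) speaks the OpenAI API, you can call DeepSeek R1 from your own scripts. Here's a minimal sketch using the `openai` Python package; the model identifier below is a placeholder, so substitute the exact one LM Studio shows for your download:

```python
from openai import OpenAI

# Point the OpenAI client at LM Studio's local server instead of the cloud.
# The API key is required by the client but ignored by LM Studio.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # placeholder: use the identifier LM Studio shows
    messages=[{"role": "user", "content": "Explain step by step why the sky is blue."}],
)

print(response.choices[0].message.content)
```

One thing to keep in mind: R1-style models emit their reasoning inside <think>...</think> tags before the final answer, so you may want to strip that block out before displaying the reply.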
Wrapping It Up
Give it a shot and see how well DeepSeek R1 performs locally for your needs. Let us know how it worked for you in the comments section below.
Until next time, be sure to run the prompts and prompt the planet!