
How to Run DeepSeek R1 Locally for Free Using LM Studio

by Nick Smith

Running DeepSeek R1 locally is surprisingly simple, thanks to LM Studio. If you’re reading this, you’re probably already aware of the privacy concerns around DeepSeek’s app and website. Running the model on your own machine keeps your prompts on your own hardware, so follow these steps to set it up.

Why Run DeepSeek Locally?

There are three main reasons to run DeepSeek locally instead of over the internet:

Offline Access: No need for an internet connection.

Privacy: Your data stays on your machine.

No API Limits: Use the model without restrictions.

Step 1: Download LM Studio


First things first, head over to the LM Studio website and get the right version for your operating system (Windows, macOS, or Linux). Install it like you would any other software.

Step 2: Open LM Studio and Find the DeepSeek R1 Model

Once installed, fire up LM Studio and navigate to the Discover tab. This is where you’ll find a variety of AI models ready for local use.

Now, search for DeepSeek R1 Distill Qwen 7B or DeepSeek R1 Distill Llama 8B in the model list. These are distilled variants of the DeepSeek R1 model, small enough to run on consumer hardware.

Step 3: Download the Model

Click on your preferred DeepSeek R1 version and download it. The files are several gigabytes, so depending on your internet speed, grab a coffee or a 40 oz beer while it downloads.

Step 4: Load the DeepSeek Model

Once the download is complete, go to the My Models tab inside LM Studio. Locate DeepSeek R1, click on it, and select Load Model.

Step 5: Start Chatting Locally with DeepSeek!

With the model loaded, hit New Chat, type in your prompt, and press Send. That’s it! You now have DeepSeek R1 running locally, ready to answer your questions with detailed reasoning.
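If you’d rather talk to the model from code instead of the chat window, LM Studio can also serve the loaded model through an OpenAI-compatible local API (by default at http://localhost:1234, started from LM Studio’s server/Developer section). Here’s a minimal sketch using only the Python standard library; it assumes the server is running with DeepSeek R1 loaded, and the model identifier string is just an example — check LM Studio for the exact name on your machine:

```python
import json
import urllib.request

# LM Studio's default local endpoint (OpenAI-compatible chat completions).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_payload(prompt, model="deepseek-r1-distill-qwen-7b"):
    """Build an OpenAI-style chat request. The model name here is an
    example; use the identifier LM Studio shows for your download."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_local_deepseek(prompt):
    """Send the prompt to the local LM Studio server and return the reply text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example usage (requires the LM Studio server to be running):
# print(ask_local_deepseek("Explain why the sky is blue in one sentence."))
```

Because the endpoint speaks the OpenAI chat format, any OpenAI-compatible client library should also work by pointing its base URL at the local server.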

Wrapping It Up

Give it a shot and see how DeepSeek R1 performs locally to meet your needs. Let us know how well it worked for you in the comments section below.

Until next time, be sure to run the prompts and prompt the planet!


2 comments

Marc July 22, 2025 - 12:38 am

Hey there! Fantastic article on getting DeepSeek-R1 up and running locally – really appreciate the detailed steps. It got me thinking, and please excuse the slight digression and the link, but I came across something that had me a bit confused, and I thought maybe someone here could shed some light on it given the discussions about model performance and quirks. I found information about a pharmaceutical product called Wonacet LM, specifically at https://pillintrip.com/medicine/woncet-lm. Is there any known, perhaps very niche, correlation or even just a funny coincidence between the “LM” in DeepSeek-R1 (referring to “Language Model”, I assume) and this drug, especially when we’re talking about optimizing how these models “behave” or their “side effects” in terms of resource usage? Just trying to connect some dots, or maybe I’m completely off-base!

Nick Smith July 22, 2025 - 4:34 pm

Sorry, man. I honestly don’t know.

