blog: add DeepSeek R1 local installation guide #4552

Merged 4 commits on Feb 3, 2025
Binary file added docs/src/pages/post/_assets/download-jan.jpg
Binary file added docs/src/pages/post/_assets/jan-local-ai.jpg
109 changes: 109 additions & 0 deletions docs/src/pages/post/deepseek-r1-locally.mdx
@@ -0,0 +1,109 @@
---
title: "Beginner's Guide: Run DeepSeek R1 Locally (Private)"
description: "Quick steps on how to run DeepSeek R1 locally for full privacy. Perfect for beginners—no coding required."
tags: DeepSeek, R1, local AI, Jan, GGUF, Qwen, Llama
categories: guides
date: 2025-01-31
ogImage: assets/run-deepseek-r1-locally-in-jan.jpg
---

import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'

# Beginner’s Guide: Run DeepSeek R1 Locally

![image](./_assets/run-deepseek-r1-locally-in-jan.jpg)

You can run DeepSeek R1 on your own computer! While the full model needs very powerful hardware, we'll use a smaller version that works great on regular computers.

Why use a smaller version?
- Works smoothly on most modern computers
- Downloads much faster
- Uses less storage space on your computer

## Quick Steps at a Glance
1. Download and install [Jan](https://jan.ai/) (just like any other app!)
2. Pick a version that fits your computer
3. Choose the best settings
4. Set up a quick template & start chatting!

Keep reading for a step-by-step guide with pictures.

## Step 1: Download Jan
[Jan](https://jan.ai/) is a free app that helps you run AI models on your computer. It works on Windows, Mac, and Linux, and it's super easy to use - no coding needed!

![image](./_assets/download-jan.jpg)

- Get Jan from [jan.ai](https://jan.ai)
- Install it like you would any other app
- That's it! Jan takes care of all the technical stuff for you

## Step 2: Choose Your DeepSeek R1 Version
DeepSeek R1 comes in different sizes. Let's help you pick the right one for your computer.

<Callout type="info">
💡 Not sure how much VRAM your computer has?
- Windows: Press Windows + R, type "dxdiag", press Enter, and click the "Display" tab
- Mac: Click Apple menu > About This Mac > More Info > Graphics/Displays
</Callout>
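
If you're comfortable opening a terminal, here's an optional shortcut too. This is a minimal Python sketch that asks the NVIDIA driver for your card's memory; it assumes an NVIDIA GPU with the `nvidia-smi` tool installed, so on other hardware just use the menu steps above.

```python
import subprocess

# Ask the NVIDIA driver for each GPU's name and total memory.
# Example output: "NVIDIA GeForce RTX 3060, 12288 MiB"
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip() or "No NVIDIA GPU found; use the steps above instead.")
```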

Below is a detailed table showing which version you can run based on your computer's VRAM:

| Version | Link to Paste into Jan Hub | Required VRAM for smooth performance |
|---------|---------------------------|---------------|
| Qwen 1.5B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF) | 6GB+ VRAM |
| Qwen 7B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF) | 8GB+ VRAM |
| Llama 8B | [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF) | 8GB+ VRAM |
| Qwen 14B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF) | 16GB+ VRAM |
| Qwen 32B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF) | 24GB+ VRAM |
| Llama 70B | [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF) | 48GB+ VRAM |

<Callout type="info">
Quick Guide:
- 6GB VRAM? Start with the 1.5B version - it's fast and works great!
- 8GB VRAM? Try the 7B or 8B versions - good balance of speed and smarts
- 16GB+ VRAM? You can run the larger versions for even better results
</Callout>
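
If it helps to see the table as a rule, here's a small Python sketch that simply mirrors it: given your VRAM, it picks the largest version that fits. The numbers and links come straight from the table above, so treat this as an illustration rather than an official recommendation.

```python
# Each entry mirrors a row of the table: (required VRAM in GB, version, link).
VERSIONS = [
    (6, "Qwen 1.5B", "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF"),
    (8, "Qwen 7B", "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF"),
    (8, "Llama 8B", "https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF"),
    (16, "Qwen 14B", "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF"),
    (24, "Qwen 32B", "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF"),
    (48, "Llama 70B", "https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF"),
]

def pick_version(vram_gb):
    """Return (name, link) for the largest version your VRAM can handle."""
    best = None
    for required_gb, name, link in VERSIONS:
        if vram_gb >= required_gb:
            best = (name, link)
    return best

print(pick_version(8))  # ('Llama 8B', 'https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF')
```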

Ready to download? Here's how:
1. Open Jan and click the button in the left sidebar to open Jan Hub
2. Find the "Add Model" section (shown below)

![image](./_assets/jan-library-deepseek-r1.jpg)

3. Copy the link for your chosen version and paste it here:

![image](./_assets/jan-hub-deepseek-r1.jpg)

## Step 3: Choose Model Settings
When adding your model, you'll see two options:

<Callout type="tip">
- **Q4:** Perfect for most users - fast and works great! ✨ (Recommended)
- **Q8:** Slightly more accurate but needs more powerful hardware
</Callout>

## Step 4: Set Up & Start Chatting
Almost done! Just one quick setup:

1. Click Model Settings in the sidebar
2. Find the Prompt Template box
3. Copy and paste this exactly:

<Callout type="warning">
```
<|User|>{prompt}<|Assistant|>
```
</Callout>

This helps DeepSeek understand when you're talking and when it should respond.
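
Curious what the template actually does? Jan replaces `{prompt}` with whatever you type, so the model receives a single string with those special markers around your message. A minimal sketch of that substitution:

```python
# Jan fills in {prompt} with your message before sending it to the model.
template = "<|User|>{prompt}<|Assistant|>"
final_input = template.format(prompt="Why is the sky blue?")
print(final_input)  # <|User|>Why is the sky blue?<|Assistant|>
```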

Now you're ready to start chatting!

![image](./_assets/jan-runs-deepseek-r1-distills.jpg)

## Need help?

<Callout type="info">
Having trouble? We're here to help! [Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
</Callout>
188 changes: 188 additions & 0 deletions docs/src/pages/post/run-ai-models-locally.mdx
@@ -0,0 +1,188 @@
---
title: "How to run AI models locally: A Complete Guide for Beginners"
description: "A simple guide to running AI models locally on your computer. It's for beginners - no technical knowledge needed."
tags: AI, local models, Jan, GGUF, privacy, local AI
categories: guides
date: 2025-01-31
ogImage: assets/jan-local-ai.jpg
---

import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'

# How to Run AI Models Locally: A Complete Guide for Beginners

Running AI models locally means installing them on your computer instead of using cloud services. This guide shows you how to run open-source AI models like Llama, Mistral, or DeepSeek on your computer - even if you're not technical.

## Quick steps:
1. Download [Jan](https://jan.ai)
2. Pick a recommended model
3. Start chatting

Read [Quickstart](https://jan.ai/docs/quickstart) to get started. For more details, keep reading.

![Run AI models locally with Jan](./_assets/jan-local-ai.jpg)
*Jan lets you run AI models locally on your computer. Download [Jan](https://jan.ai)*

<Callout type="info">
Benefits of running AI locally:
- **Privacy:** Your data stays on your computer
- **No internet needed:** Use AI even offline
- **No limits:** Chat as much as you want
- **Full control:** Choose which AI models to use
</Callout>

## How to run AI models locally as a beginner

[Jan](https://jan.ai) makes it easy to run AI models. Just download the app and you're ready to go - no complex setup needed.

<Callout type="tip">
What you can do with Jan:
- Download AI models with one click
- Everything is set up automatically
- Find models that work on your computer
</Callout>

## Understanding Local AI models

Think of AI models like apps - some are small and fast, others are bigger but smarter. Let's understand two important terms you'll see often: parameters and quantization.

### What's a "Parameter"?

When looking at AI models, you'll see names like "Llama-2-7B" or "Mistral-7B". Here's what that means:

![AI model parameters explained](./_assets/local-ai-model-parameters.jpg)
*Model sizes: Bigger models = Better results + More resources*

- The "B" means "billion parameters" (like brain cells)
- More parameters = smarter AI but needs a faster computer
- Fewer parameters = simpler AI but works on most computers

<Callout type="info">
Which size to choose?
- **7B models:** Best for most people - works on most computers
- **13B models:** Smarter but needs a good graphics card
- **70B models:** Very smart but needs a powerful computer
</Callout>

### What's Quantization?

Quantization makes AI models smaller so they can run on your computer. Think of it like compressing a video to save space:

![AI model quantization explained](./_assets/open-source-ai-quantization.jpg)
*Quantization: Balance between size and quality*

Simple guide:
- **Q4:** Best choice for most people - runs fast and works well
- **Q6:** Better quality but runs slower
- **Q8:** Best quality but needs a powerful computer

Example: A 7B model with Q4 works well on most computers.
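
If you like to see the numbers, here's the rough arithmetic behind those labels as a tiny Python sketch. A model file is approximately parameters × bits-per-weight ÷ 8 bytes; the bits-per-weight figures below are ballpark assumptions, and real GGUF files run a bit larger because of metadata and a few higher-precision layers.

```python
# Ballpark bits-per-weight for common GGUF quantization levels (assumed values).
BITS_PER_WEIGHT = {"Q4": 4.5, "Q6": 6.5, "Q8": 8.5}

def estimated_size_gb(params_billions, quant):
    """Rough file size: parameters * bits-per-weight / 8, in gigabytes."""
    return params_billions * BITS_PER_WEIGHT[quant] / 8

for quant in BITS_PER_WEIGHT:
    print(f"7B at {quant}: ~{estimated_size_gb(7, quant):.1f} GB")
# 7B at Q4: ~3.9 GB, at Q6: ~5.7 GB, at Q8: ~7.4 GB
```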

## Hardware Requirements

Before downloading an AI model, let's check if your computer can run it.

<Callout type="info">
The most important thing is VRAM:
- VRAM is your graphics card's memory
- More VRAM = ability to run bigger AI models
- Most computers have between 4GB to 16GB VRAM
</Callout>

### How to check your VRAM:

**On Windows:**
1. Press Windows + R
2. Type "dxdiag" and press Enter
3. Click "Display" tab
4. Look for "Display Memory"

**On Mac:**
1. Click Apple menu
2. Select "About This Mac"
3. Click "More Info"
4. Look under "Graphics/Displays"
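
If you already have Python and PyTorch installed, this optional sketch prints the same information. It assumes an NVIDIA (CUDA) setup; on anything else, the menu steps above are the way to go.

```python
import torch

# Report the first GPU's name and total VRAM, if PyTorch can see one.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU detected; models will run on the CPU (slower).")
```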

### Which models can you run?

Here's a simple guide:

| Your VRAM | What You Can Run | What It Can Do |
|-----------|-----------------|----------------|
| 4GB | Small models (1-3B) | Basic writing and questions |
| 6GB | Medium models (7B) | Good for most tasks |
| 8GB | Larger models (13B) | Better understanding |
| 16GB | Large models (14B) | Great all-around results |
| 24GB+ | Largest models (32B) | Best performance |

<Callout type="tip">
Start with smaller models:
- Try 7B models first - they work well for most people
- Test how they run on your computer
- Try larger models only if you need better results
</Callout>

## Setting Up Your Local AI

### 1. Get Started
Download Jan from [jan.ai](https://jan.ai) - it sets everything up for you.

### 2. Get an AI Model

You can get models in two ways:

### Option 1: Use Jan Hub (Recommended)
- Click "Download Model" in Jan
- Pick a recommended model
- Choose one that fits your computer

![Downloading a model from Jan Hub](./_assets/jan-model-download.jpg)
*Use Jan Hub to download AI models*

### Option 2: Use Hugging Face

<Callout type="warning">
Important: Only GGUF models will work with Jan. Make sure to use models that have "GGUF" in their name.
</Callout>

#### Step 1: Get the model link
Find and copy a GGUF model link from [Hugging Face](https://huggingface.co)

![Finding a GGUF model on Hugging Face](./_assets/hugging-face-jan-model-download.jpg)
*Look for models with "GGUF" in their name*

#### Step 2: Open Jan
Launch Jan and go to the Models tab

![Opening Jan's model section](./_assets/jan-library-deepseek-r1.jpg)
*Navigate to the Models section in Jan*

#### Step 3: Add the model
Paste your Hugging Face link into Jan

![Adding a model from Hugging Face](./_assets/jan-hub-deepseek-r1.jpg)
*Paste your GGUF model link here*

#### Step 4: Download
Select your quantization and start the download

![Downloading the model](./_assets/jan-hf-model-download.jpg)
*Choose your preferred model size and download*
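
Prefer to script the download? Here's an optional sketch using the `huggingface_hub` Python library (installable with `pip install huggingface_hub`). The `repo_id` comes from the table in the DeepSeek guide, but the exact `filename` is an assumption based on the repo's usual naming, so check the repo's "Files" tab for the real name first.

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from a model repo. The filename below is assumed
# from the repo's naming pattern; verify it in the repo's "Files" tab.
path = hf_hub_download(
    repo_id="bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",
)
print("Saved to:", path)
```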

### Common Questions

<Callout type="info">
**"My computer doesn't have a graphics card - can I still use AI?"**
Yes! It will run slower but still work. Start with 7B models.

**"Which model should I start with?"**
Try a 7B model first - it's the best balance of smart and fast.

**"Will it slow down my computer?"**
Only while you're using the AI. Close other big programs for better speed.
</Callout>

## Need help?
<Callout type="info">
Having trouble? We're here to help! [Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
</Callout>