A simple web-based chat application that uses the Ollama API to interact with the DeepSeek-R1 AI model.
Before running the application, make sure you have:
- Node.js installed on your system
- Ollama installed and running locally
- The DeepSeek-R1 model installed in Ollama
- Git installed (verify in a terminal with: git --version)
Download Ollama for Windows from the official website:
- Visit https://ollama.ai/download
- Click on the Windows download link
- Run the downloaded installer
After installation:
- Ollama will start automatically
- You should see the Ollama icon in your system tray
- The Ollama service will be running at http://127.0.0.1:11434
- On macOS, install using Homebrew:
brew install ollama
- Start Ollama:
ollama serve
- On Linux, install using curl:
curl -fsSL https://ollama.ai/install.sh | sh
- Start the Ollama service:
ollama serve
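However you installed it, you can confirm the service is reachable before continuing: Ollama's root endpoint replies with a short status string. A minimal Node.js sketch (requires Node.js 18+ for the built-in fetch; the filename check-ollama.js is just an example):

```javascript
// check-ollama.js — verify the local Ollama service is reachable.
const OLLAMA_URL = "http://127.0.0.1:11434";

async function checkOllama() {
  const res = await fetch(OLLAMA_URL);
  // The root endpoint returns the plain-text string "Ollama is running".
  console.log(await res.text());
}

checkOllama().catch((err) => {
  console.error(`Could not reach Ollama at ${OLLAMA_URL}:`, err.message);
  process.exit(1);
});
```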
After installing Ollama:
- Open a terminal (Command Prompt or PowerShell on Windows)
- Pull the DeepSeek-R1 model:
ollama pull deepseek-r1:7b
- Wait for the download to complete (this may take several minutes depending on your internet connection)
- Verify the installation:
ollama list
You should see 'deepseek-r1:7b' in the list of models (you can also check over Ollama's HTTP API; see the sketch after this list)
- Run the model:
ollama run deepseek-r1:7b
This will start an interactive chat session with the model in your terminal. Type /bye (or press Ctrl+D) to quit the session.
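As an alternative to ollama list, the installed models can also be read from Ollama's /api/tags endpoint, which returns a JSON object with a models array. A minimal Node.js sketch:

```javascript
// list-models.js — list installed models via Ollama's HTTP API.
// Requires Node.js 18+ for the built-in fetch.
async function listModels() {
  const res = await fetch("http://127.0.0.1:11434/api/tags");
  const { models } = await res.json();
  // Each entry has a "name" field such as "deepseek-r1:7b".
  for (const m of models) console.log(m.name);
}

listModels().catch((err) => {
  console.error("Could not query Ollama:", err.message);
  process.exit(1);
});
```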
- Clone the repository:
git clone https://github.com/IsaacTalb/OllaSeek-Chat-Local.git
cd OllaSeek-Chat-Local
- Install the required dependencies:
npm install
- Make sure Ollama is running:
  - On Windows: Check the system tray for the Ollama icon
  - On macOS/Linux: Run ollama serve if not already running
- Start the application server:
npm start
- Open your web browser and navigate to:
http://localhost:3000
- AI model and API endpoint run locally on your machine
- Clean user interface, responsive design
- Real-time chat interaction with the DeepSeek-R1 AI model
- Error handling and status messages
- index.html - The main HTML file containing the chat interface
- styles.css - CSS styles for the chat application
- script.js - Client-side JavaScript code
- server.js - Node.js server that handles API proxying
- package.json - Project dependencies and scripts
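For orientation, the browser side of an app like this typically posts the user's message to the local proxy endpoint and renders the reply. A simplified sketch of what script.js might contain (the message and reply field names are illustrative assumptions, not necessarily the actual ones):

```javascript
// Simplified sketch of the client-side request in script.js.
// The "message" and "reply" field names are illustrative assumptions.
async function sendMessage(message) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Server returned ${res.status}`);
  const data = await res.json();
  return data.reply; // the model's answer, displayed in the chat window
}
```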
- Frontend: HTML5, CSS3, JavaScript
- Backend: Node.js with Express
- AI Model: DeepSeek-R1 via Ollama
- API Endpoint: http://localhost:3000/api/chat
- Ollama API: http://127.0.0.1:11434/api/generate
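To show how the two endpoints above fit together: the Express server accepts POSTs on /api/chat and forwards them to Ollama's /api/generate. A simplified sketch of such a proxy route, not necessarily the exact code in server.js (the message and reply field names mirror the client sketch above; model, prompt, stream, and response follow Ollama's documented /api/generate contract):

```javascript
// Simplified sketch of an Express proxy route like the one in server.js.
// Requires Node.js 18+ for the built-in fetch.
const express = require("express");
const app = express();

app.use(express.json());
app.use(express.static(__dirname)); // serve index.html, styles.css, script.js

app.post("/api/chat", async (req, res) => {
  try {
    const ollamaRes = await fetch("http://127.0.0.1:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "deepseek-r1:7b",
        prompt: req.body.message, // "message" is an assumed request field
        stream: false,            // single JSON reply instead of a token stream
      }),
    });
    const data = await ollamaRes.json();
    res.json({ reply: data.response }); // "response" holds the generated text
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log("Chat app listening on http://localhost:3000"));
```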
- If you see a "model not found" error:
  - Make sure Ollama is running:
    - Windows: Check the system tray
    - macOS/Linux: Run ps aux | grep ollama to check if the process is running
  - Run ollama pull deepseek-r1:7b to install the model
  - Verify with ollama list that the model is installed
  - Try running ollama run deepseek-r1:7b to test the model directly
- If the server won't start:
  - Check if port 3000 is already in use (see the sketch below for making the port configurable)
  - Make sure all dependencies are installed
  - Try running npm install again
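If port 3000 is taken by another process, one common workaround is to make the port configurable through an environment variable. A sketch of how server.js could be adapted (assuming the Express setup shown earlier):

```javascript
const express = require("express");
const app = express();

// Fall back to 3000 unless a PORT environment variable is set,
// e.g.  PORT=3001 npm start
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Listening on http://localhost:${PORT}`));
```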
- If you get CORS errors:
  - Make sure you're accessing the application through http://localhost:3000 (the Node.js server proxies requests to Ollama, so the browser only ever talks to the same origin)
  - Check if the server is running
  - Verify Ollama is running at http://127.0.0.1:11434
- If Ollama won't start:
  - Windows:
    - Check Windows Services
    - Try restarting the Ollama service
  - macOS/Linux:
    - Check system logs: journalctl -u ollama
    - Try restarting: sudo systemctl restart ollama
This project is open source and available under the MIT License.