An intelligent document assistant that allows users to chat with their documents using state-of-the-art language models. Built with FastAPI, React, and LangChain.
## Features

- 📄 Document Processing: Support for PDF, DOCX, CSV, and JSON files
- 💬 Interactive Chat: Natural conversation with your documents
- 🔍 Smart Search: Semantic search across all uploaded documents
- 📊 Source Citations: Automatic citation of sources in responses
- 💾 Persistent Conversations: Chat history preserved between sessions
- 📥 Export Functionality: Export conversations as PDF or TXT
## Tech Stack

### Backend

- FastAPI
- LangChain
- Ollama (DeepSeek model)
- ChromaDB
- ReportLab
- Python 3.11+

### Frontend

- React
- Material-UI
- Axios
- React Markdown
- React Syntax Highlighter
## Prerequisites

- Install Python 3.11 or higher
- Install Node.js 18 or higher
- Install Ollama from [ollama.ai](https://ollama.ai)
- Pull the DeepSeek model:

  ```bash
  ollama pull deepseek-r1:32b
  ```
## Installation

### Backend

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/ai-doc-assistant.git
   cd ai-doc-assistant
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create the necessary directories:

   ```bash
   mkdir -p uploads exports vectordb/conversations
   ```

5. Start the backend server:

   ```bash
   uvicorn main:app --reload --port 8000
   ```

### Frontend

1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm start
   ```

The application will be available at http://localhost:3000.
## Deployment

1. Set up a Linux server (Ubuntu 20.04+ recommended).

2. Install system dependencies:

   ```bash
   sudo apt update
   sudo apt install python3.11 python3.11-venv nginx
   ```

3. Install Ollama:

   ```bash
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

4. Pull the DeepSeek model:

   ```bash
   ollama pull deepseek-r1:32b
   ```

5. Clone and set up the application:

   ```bash
   git clone https://github.com/yourusername/ai-doc-assistant.git
   cd ai-doc-assistant
   python3.11 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

6. Create a systemd service for Ollama:

   ```bash
   sudo nano /etc/systemd/system/ollama.service
   ```

   Add the following content:

   ```ini
   [Unit]
   Description=Ollama Service
   After=network.target

   [Service]
   Type=simple
   User=root
   ExecStart=/usr/bin/ollama serve
   Restart=always

   [Install]
   WantedBy=multi-user.target
   ```

7. Create a systemd service for the FastAPI application:

   ```bash
   sudo nano /etc/systemd/system/aidoc.service
   ```

   Add the following content:

   ```ini
   [Unit]
   Description=AI Document Assistant
   After=network.target

   [Service]
   User=ubuntu
   WorkingDirectory=/path/to/ai-doc-assistant
   Environment="PATH=/path/to/ai-doc-assistant/venv/bin"
   ExecStart=/path/to/ai-doc-assistant/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
   Restart=always

   [Install]
   WantedBy=multi-user.target
   ```

8. Configure Nginx:

   ```bash
   sudo nano /etc/nginx/sites-available/aidoc
   ```

   Add the following content:

   ```nginx
   server {
       listen 80;
       server_name your_domain.com;

       location / {
           root /path/to/ai-doc-assistant/frontend/build;
           try_files $uri $uri/ /index.html;
       }

       location /api {
           proxy_pass http://localhost:8000;
           proxy_http_version 1.1;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection 'upgrade';
           proxy_set_header Host $host;
           proxy_cache_bypass $http_upgrade;
       }
   }
   ```

9. Enable and start the services:

   ```bash
   sudo ln -s /etc/nginx/sites-available/aidoc /etc/nginx/sites-enabled/
   sudo systemctl enable nginx
   sudo systemctl enable ollama
   sudo systemctl enable aidoc
   sudo systemctl start nginx
   sudo systemctl start ollama
   sudo systemctl start aidoc
   ```

10. Build the frontend:

    ```bash
    cd frontend
    npm install
    npm run build
    ```

11. Copy the build files to the server:

    ```bash
    scp -r build/* user@your_server:/path/to/ai-doc-assistant/frontend/build/
    ```
## Configuration

Create a `.env` file in the root directory:

```env
# Backend
UPLOAD_DIR=uploads
DB_DIR=vectordb
EXPORT_DIR=exports
# Maximum upload size: 10 MB
MAX_UPLOAD_SIZE=10485760

# Frontend
REACT_APP_API_URL=http://localhost:8000
```

For production, update `REACT_APP_API_URL` to your domain.
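One way the backend might consume these variables can be sketched as follows — the `load_config` helper is illustrative and not part of the project; only the variable names and defaults come from the `.env` example above:

```python
import os


def load_config(env=os.environ) -> dict:
    """Read the backend settings from the environment, falling back to the
    defaults documented in the .env example above."""
    return {
        "upload_dir": env.get("UPLOAD_DIR", "uploads"),
        "db_dir": env.get("DB_DIR", "vectordb"),
        "export_dir": env.get("EXPORT_DIR", "exports"),
        # 10485760 bytes = 10 MB
        "max_upload_size": int(env.get("MAX_UPLOAD_SIZE", "10485760")),
    }
```

Converting `MAX_UPLOAD_SIZE` to `int` at load time surfaces a malformed value immediately rather than at the first upload.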
## Security

1. Set up SSL/TLS using Let's Encrypt:

   ```bash
   sudo apt install certbot python3-certbot-nginx
   sudo certbot --nginx -d your_domain.com
   ```

2. Configure the firewall:

   ```bash
   sudo ufw allow 80
   sudo ufw allow 443
   sudo ufw allow 22
   sudo ufw enable
   ```

3. Set proper file permissions:

   ```bash
   sudo chown -R ubuntu:ubuntu /path/to/ai-doc-assistant
   chmod -R 755 /path/to/ai-doc-assistant
   ```
## Maintenance

1. Monitor logs:

   ```bash
   sudo journalctl -u ollama -f
   sudo journalctl -u aidoc -f
   ```

2. Update the application:

   ```bash
   cd /path/to/ai-doc-assistant
   git pull
   source venv/bin/activate
   pip install -r requirements.txt
   sudo systemctl restart aidoc
   ```
## Troubleshooting

1. Check service status:

   ```bash
   sudo systemctl status ollama
   sudo systemctl status aidoc
   sudo systemctl status nginx
   ```

2. Check logs:

   ```bash
   sudo journalctl -u ollama -n 100
   sudo journalctl -u aidoc -n 100
   sudo tail -f /var/log/nginx/error.log
   ```

3. Common issues:

   - Port conflicts: check whether ports 8000 and 3000 are free
   - Memory issues: monitor RAM usage with `htop`
   - Disk space: check available space with `df -h`
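For the port-conflict case above, a small stdlib-only check (the helper name is illustrative; 8000 and 3000 are the app's default backend and dev-server ports) can tell you whether anything is already listening before you start the services:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when a listener already occupies the port
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    # 8000: FastAPI backend, 3000: React dev server
    for port in (8000, 3000):
        status = "in use" if port_in_use(port) else "free"
        print(f"port {port}: {status}")
```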
## Contributing

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.