
RAG Based LLM AI Chatbot 🤖

RAG Based LLM AI Chatbot Built using Open Source Stack (Llama 3.2 Model, BGE Embeddings, and Qdrant running locally within a Docker Container)


RAG Based LLM AI Chatbot is a powerful Streamlit-based application designed to simplify document management. Upload your PDF documents, create embeddings for efficient retrieval, and interact with your documents through an intelligent chatbot interface. 🚀

🛠️ Features

  • 📂 Upload Documents: Easily upload and preview your PDF documents within the app.
  • 🧠 Create Embeddings: Generate embeddings for your documents to enable efficient search and retrieval.
  • 🤖 Chatbot Interface: Interact with your documents using a smart chatbot that leverages the created embeddings.
  • 📧 Contact: Get in touch with the developer or contribute to the project on GitHub.
  • 🌟 User-Friendly Interface: Enjoy a sleek and intuitive UI with emojis and responsive design for enhanced user experience.

🖥️ Tech Stack

The Document Buddy App leverages a combination of cutting-edge technologies to deliver a seamless and efficient user experience. Here's a breakdown of the technologies and tools used:

  • LangChain: Utilized as the orchestration framework to manage the flow between different components, including embeddings creation, vector storage, and chatbot interactions.

  • Unstructured: Employed for robust PDF processing, enabling the extraction and preprocessing of text from uploaded PDF documents.

  • BGE Embeddings from HuggingFace: Used to generate high-quality embeddings for the processed documents, facilitating effective semantic search and retrieval.

  • Qdrant: A vector database running locally via Docker, responsible for storing and managing the generated embeddings for fast and scalable retrieval.

  • LLaMA 3.2 via Ollama: Integrated as the local language model to power the chatbot, providing intelligent and context-aware responses based on the document embeddings.

  • Streamlit: The core framework for building the interactive web application, offering an intuitive interface for users to upload documents, create embeddings, and interact with the chatbot.
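The retrieval flow these components implement can be sketched in a few lines. The example below is a self-contained toy, not the app's actual code: hand-made three-dimensional vectors stand in for real BGE embeddings, and a plain dictionary stands in for Qdrant, so only the retrieve-then-prompt pattern is illustrated.

```python
# Toy sketch of the retrieve-then-generate pattern behind the chatbot.
# Real embeddings come from BGE and live in Qdrant; here, tiny hand-made
# vectors and a dict stand in so the example runs with no services.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "Vector store": each document chunk mapped to its (fake) embedding.
store = {
    "Invoices are due within 30 days.": [0.9, 0.1, 0.0],
    "The warranty covers parts for one year.": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the k chunks whose embeddings are closest to the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding about payment terms lands nearest the invoice chunk;
# the retrieved text is then injected into the LLM prompt as context.
context = retrieve([0.8, 0.2, 0.1])[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: When are invoices due?"
```

In the real app, LangChain performs this same loop against Qdrant and passes the assembled prompt to Llama 3.2 through Ollama.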

📁 Directory Structure

document_buddy_app/
├── logo.png
├── new.py
├── vectors.py
├── chatbot.py
└── requirements.txt

🚀 Getting Started

Follow these instructions to set up and run the Document Buddy App on your local machine.

1. Clone the Repository

git clone https://github.com/GURPREETKAURJETHRA/RAG-Based-LLM-Chatbot.git
cd RAG-Based-LLM-Chatbot

2. Create a Virtual Environment

You can either use Python’s venv or Anaconda to create a virtual environment for managing dependencies.

Option 1: Using venv

On Windows:

python -m venv venv
venv\Scripts\activate

On macOS and Linux:

python3 -m venv venv
source venv/bin/activate

Option 2: Using Anaconda

Follow these steps to create a virtual environment using Anaconda:

  1. Open the Anaconda Prompt.
  2. Create a new environment:
conda create --name Chatbot python=3.10

(Replace Chatbot with your preferred environment name if desired).

  3. Activate the newly created environment:

conda activate Chatbot

3. Install Dependencies

Once the environment is set up (whether venv or Conda), install the required dependencies using requirements.txt:

pip install -r requirements.txt

4. Run the App

Start the Streamlit app using the following command:

streamlit run new.py

Note: If your main application file is named differently, replace new.py with your actual file name (e.g., app.py).

This command will launch the app in your default web browser. If it doesn’t open automatically, navigate to the URL provided in the terminal (usually http://localhost:8501).
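The app also expects Qdrant and Ollama to be reachable locally before you create embeddings or chat. The README does not prescribe exact commands, but assuming Docker and Ollama are installed, a typical setup with default ports and the standard image and model names looks like:

```shell
# Start Qdrant locally in Docker (REST API on the default port 6333).
docker run -d -p 6333:6333 qdrant/qdrant

# Pull the Llama 3.2 model so Ollama can serve it to the chatbot.
ollama pull llama3.2
```

If you run Qdrant or Ollama on non-default ports or hosts, adjust the app's connection settings accordingly.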

🤝 Contributing

Contributions are welcome! Whether it’s reporting a bug, suggesting a feature, or submitting a pull request, your input is highly appreciated. Follow these steps to contribute:

  1. Fork the Repository: Click on the “Fork” button at the top-right corner of the repository page.
  2. Clone Your Fork
  3. Create a New Branch:
git checkout -b feature/YourFeatureName
  4. Make Your Changes: Implement your feature or fix.
  5. Commit Your Changes:
git commit -m "Add Your Feature Description"
  6. Push to Your Fork:
git push origin feature/YourFeatureName
  7. Create a Pull Request: Navigate to the original repository and create a pull request from your fork.

🔗 Useful Links

• Streamlit Documentation: https://docs.streamlit.io/

• LangChain Documentation: https://langchain.readthedocs.io/

• Qdrant Documentation: https://qdrant.tech/documentation/

• ChatOllama Documentation: https://github.com/langchain-ai/langchain-llms#ollama

Happy coding! 🚀✨

©️ License 🪪

Distributed under the MIT License. See LICENSE for more information.


If you like this LLM project, please drop a ⭐ on this repo!

Follow me on LinkedIn and GitHub.

