In 2024, coronavirus is no longer new, but we all remember how hard it was to find reliable information when the pandemic began. This inspired me to develop a chatbot, based on a Seq2Seq model with LSTM layers, that can answer questions about this serious virus.
My project has two main components:
- A Pygame application, where you can interact with the bot and ask it questions.
- A Jupyter Notebook that documents the entire training process with detailed comments.
Feel free to explore the project and ask the bot questions! Please note, however, that the bot is designed to answer only coronavirus-related questions.
Section | Description |
---|---|
Running the application | Instructions for running the Pygame application |
Training process | Information about the datasets used for training and a description of the training process |
Jupyter Notebook | A detailed breakdown of the notebook |
- Install a Python interpreter. Instructions for doing so can be found here.
- Clone the repository:
  Open a terminal and clone the repository:
  ```shell
  git clone https://github.com/AndreRab/Spam-filter.git
  ```
- Navigate to the project directory:
  Change your directory to the project folder:
  ```shell
  cd Spam-filter
  ```
- Install the necessary libraries:
  Install the libraries the application needs to run properly:
  ```shell
  pip install -r requirements.txt
  ```
- Start the application:
  ```shell
  python scripts/main.py
  ```
For training, I used the following datasets: COVID19 frequently asked questions and COVID19 related faqs. Each dataset contains only two columns, questions and answers, so I didn't need to perform any preprocessing before training my models.
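Since each dataset is just a two-column table, loading and combining them can be sketched roughly as follows. The column names and inline rows here are illustrative assumptions, not the exact contents of the real datasets:

```python
import pandas as pd

# Stand-in rows; the real FAQ datasets each have just these two columns.
faq_a = pd.DataFrame({
    "questions": ["What is COVID-19?"],
    "answers": ["An illness caused by the SARS-CoV-2 virus."],
})
faq_b = pd.DataFrame({
    "questions": ["How does COVID-19 spread?"],
    "answers": ["Mainly through respiratory droplets."],
})

# Combine both sources into one question/answer table.
data = pd.concat([faq_a, faq_b], ignore_index=True)
questions = data["questions"].tolist()
answers = data["answers"].tolist()
```

In practice the two CSV files would be read with `pd.read_csv` and concatenated the same way.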
The LSTM model requires a long training time, so I trained it for over 300 epochs to achieve a satisfactory result. Naturally, the model doesn't perform perfectly on every question, since all embeddings were learned from scratch and the dataset was not ideal.
The notebook is located in the research folder. There, you can see that I first created a vocabulary, where each word is assigned a unique ID. If a word is not in the vocabulary, it is assigned an unknown token.
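The vocabulary step can be sketched like this; the special token names and the minimum-frequency cutoff are illustrative assumptions rather than the notebook's exact choices:

```python
from collections import Counter

def build_vocab(sentences, min_freq=1):
    # Reserve IDs for special tokens; <unk> catches out-of-vocabulary words.
    vocab = {"<pad>": 0, "<sos>": 1, "<eos>": 2, "<unk>": 3}
    counts = Counter(word for s in sentences for word in s.lower().split())
    for word, freq in counts.items():
        if freq >= min_freq and word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab):
    # Words missing from the vocabulary map to the <unk> ID.
    return [vocab.get(w, vocab["<unk>"]) for w in sentence.lower().split()]

vocab = build_vocab(["what is covid", "how does covid spread"])
ids = encode("what is ebola", vocab)  # "ebola" is unseen, so it becomes <unk>
```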
Next, I defined the model, which is divided into two parts: an encoder and a decoder. Each part uses an LSTM block. For predictions, the model first encodes the question and then uses this encoding as the initial state of the decoder. The decoder returns a probability distribution over the vocabulary at each step, and I generate the answer with a greedy search over these distributions.
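The encode-then-decode prediction loop can be sketched as follows. PyTorch, the layer sizes, and the greedy one-word-at-a-time decoding are assumptions here; the notebook's actual framework and hyperparameters may differ:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # The final hidden/cell state serves as the question encoding.
        _, (h, c) = self.lstm(self.embedding(src))
        return h, c

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.fc = nn.Linear(hid_dim, vocab_size)

    def forward(self, token, state):
        out, state = self.lstm(self.embedding(token), state)
        return self.fc(out), state  # logits over the vocabulary

def greedy_decode(encoder, decoder, src, sos_id, eos_id, max_len=20):
    # Encode the question, then pick the most likely word at each step.
    with torch.no_grad():
        state = encoder(src)
        token = torch.tensor([[sos_id]])
        result = []
        for _ in range(max_len):
            logits, state = decoder(token, state)
            next_id = int(logits.argmax(dim=-1))
            if next_id == eos_id:
                break
            result.append(next_id)
            token = torch.tensor([[next_id]])
    return result
```

Replacing the greedy `argmax` with a beam search is a common refinement, at the cost of decoding several candidate sequences in parallel.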
You can also find a plot showing the model's learning curve over the first 100 epochs.