Discord chatbot created for the NLP elective course at Insper
Warning: this project uses Python 3.10
-
First, create a Discord bot on the Discord Developer Portal. You can follow this tutorial to do it
-
pip install -r requirements.txt
-
Then, create a .env file with the following variables (a loading sketch follows the list):
TOKEN = Bot token
KEY_RESET = Reset key for the command !reset
FILE_CLASS_PICKLE = Name of a pickle file, e.g. class1.pkl
GUILD = Name of the server where the bot is installed
CHANNEL = Channel that the bot will use
API_KEY = OpenAI API key
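For reference, here is a minimal sketch of how these variables might be loaded at startup with python-dotenv. It is an illustration, not the actual chatbot.py code:

# Hypothetical loading of the .env values; assumes python-dotenv is installed.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the working directory

TOKEN = os.getenv("TOKEN")                          # Discord bot token
KEY_RESET = os.getenv("KEY_RESET")                  # secret key for !reset
FILE_CLASS_PICKLE = os.getenv("FILE_CLASS_PICKLE")  # e.g. class1.pkl
GUILD = os.getenv("GUILD")                          # server name
CHANNEL = os.getenv("CHANNEL")                      # channel the bot will use
API_KEY = os.getenv("API_KEY")                      # OpenAI API key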
-
Download WordNet by running the following commands in a Python shell:
import nltk
nltk.download('wordnet')
-
Now you can run the bot:
python3 chatbot.py
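For orientation, here is a minimal sketch of how a discord.py prefix-command bot wires the TOKEN to commands like the ones listed below. The structure and names are illustrative assumptions, not the bot's actual code:

# Hypothetical skeleton; assumes discord.py 2.x and python-dotenv are installed.
import os
import discord
from discord.ext import commands
from dotenv import load_dotenv

load_dotenv()

intents = discord.Intents.default()
intents.message_content = True  # required so the bot can read command messages
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="author")
async def author(ctx):
    # Illustrative command: replies with the author's name.
    await ctx.send("author name here")

bot.run(os.getenv("TOKEN"))

The bot exposes the following commands: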
Shows a help message describing all commands
Returns this bot's source code
Returns the author's name
Gets information about an IP address (a request sketch follows below).
IPv4 format: x.x.x.x
IPv6 format: y:y:y:y:y:y:y:y (abbreviations also work, e.g. 2001:db8::)
!run [IPv4]
!run [IPv6] v6
API used: https://rapidapi.com/xakageminato/api/ip-geolocation-ipwhois-io
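A rough sketch of how the lookup could be made with requests. The endpoint path, query parameter, and header names below are assumptions based on RapidAPI conventions, so check the API page above for the exact contract:

# Hypothetical !run lookup; endpoint and parameter names are assumptions.
import requests

def lookup_ip(ip: str, rapidapi_key: str) -> dict:
    url = "https://ip-geolocation-ipwhois-io.p.rapidapi.com/json/"
    headers = {
        "X-RapidAPI-Key": rapidapi_key,
        "X-RapidAPI-Host": "ip-geolocation-ipwhois-io.p.rapidapi.com",
    }
    response = requests.get(url, headers=headers, params={"ip": ip}, timeout=15)
    response.raise_for_status()
    return response.json()  # country, city, ISP, ... as returned by the API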
Web crawling from a webpage. Receives a single URL and crawls at most 15 pages; any request that takes more than 15 seconds times out (see the sketch below).
!crawl
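A minimal sketch of a crawler with the limits described above (one seed URL, at most 15 pages, 15-second timeout), using requests and BeautifulSoup; the bot's actual implementation may differ:

# Hypothetical bounded crawler: one seed URL, max 15 pages, 15 s timeout per request.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(seed_url: str, max_pages: int = 15, timeout: int = 15) -> dict[str, str]:
    pages: dict[str, str] = {}   # url -> extracted text
    queue = [seed_url]
    visited = set()
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=timeout)
        except requests.RequestException:
            continue             # skip slow or unreachable pages
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))
    return pages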
Searches for a word or phrase in the documents and shows every document where it appears, together with its TF-IDF score (a measure of how important a term is within a document relative to a collection of documents); a scoring sketch follows below. You can append the argument th=X to filter pages by sentiment value. The default th is 0, meaning no filter is applied; sentiment goes from 0 (negative) to 1 (positive).
!search [word] [th=[value]]
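A sketch of how the TF-IDF lookup could work, using scikit-learn's TfidfVectorizer over the crawled pages; the bot may compute TF-IDF with its own weighting:

# Hypothetical !search scoring over the crawled pages (url -> text).
from sklearn.feature_extraction.text import TfidfVectorizer

def search(query: str, pages: dict[str, str]) -> list[tuple[str, float]]:
    urls = list(pages)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(pages[u] for u in urls)
    vocab = vectorizer.vocabulary_          # term -> column index
    results = []
    for term in query.lower().split():
        col = vocab.get(term)
        if col is None:
            continue                        # term not in any document
        for row, url in enumerate(urls):
            score = matrix[row, col]
            if score > 0:
                results.append((url, float(score)))
    return sorted(results, key=lambda r: r[1], reverse=True)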
Searches for a single word in the documents and shows every document where it appears, together with its TF-IDF score. If the word is not in the documents, the bot tries WordNet synonyms that are (see the sketch below). You can append the argument th=X to filter pages by sentiment value. The default th is 0, meaning no filter is applied; sentiment goes from 0 (negative text) to 1 (positive text).
!wn_search [word] [th=[value]]
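A sketch of the WordNet fallback idea: if the queried word is not in the index, look up its synonyms and search for those instead. Function names and the vocabulary structure are illustrative:

# Hypothetical WordNet fallback; requires nltk.download('wordnet') from the setup step.
from nltk.corpus import wordnet

def synonyms(word: str) -> set[str]:
    # Collect lemma names from every WordNet synset of the word.
    names = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            names.add(lemma.name().replace("_", " ").lower())
    names.discard(word.lower())
    return names

def wn_search(word: str, vocabulary: set[str]) -> list[str]:
    # Return the word itself if indexed, otherwise any indexed synonyms.
    if word.lower() in vocabulary:
        return [word.lower()]
    return [s for s in synonyms(word) if s in vocabulary]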
Generates text from the database using a model (a sketch follows below)
!generate [word]
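A sketch assuming the OpenAI API (the API_KEY variable above) is the model behind !generate and that crawled text seeds the prompt; the actual model, prompt, and library version are assumptions:

# Hypothetical !generate using the OpenAI Python SDK (openai>=1.0); the real bot may differ.
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("API_KEY"))

def generate(word: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Continue the text in the same style."},
            {"role": "user", "content": f"{context}\n\nWrite a short text about: {word}"},
        ],
        max_tokens=200,
    )
    return response.choices[0].message.content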
Gets the URL and sentiment value of every page that has been web-scraped. You can append the argument th=X to filter pages by sentiment value. The default th is 0, meaning no filter is applied; sentiment goes from 0 (negative text) to 1 (positive text). A sentiment-scoring sketch follows below.
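One way to produce the 0-to-1 sentiment value used by the th= filter is NLTK's VADER, rescaled from [-1, 1] to [0, 1]; the bot's actual sentiment model is not specified here, so treat this as an assumption:

# Hypothetical sentiment scoring; requires nltk.download('vader_lexicon').
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_01(text: str) -> float:
    # VADER's compound score is in [-1, 1]; rescale it to [0, 1].
    compound = analyzer.polarity_scores(text)["compound"]
    return (compound + 1) / 2

def filter_pages(pages: dict[str, str], th: float = 0.0) -> dict[str, float]:
    # Keep pages whose sentiment is at least th (th=0 keeps everything).
    scores = {url: sentiment_01(text) for url, text in pages.items()}
    return {url: s for url, s in scores.items() if s >= th}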
Resets the web crawling data
!reset [secret_key]