A conversational Discord bot using a Local Large Language Model.
This bot currently depends on ollama. For the bot to work, ollama must be running and the configured model must already be pulled.
The bot can be configured using environment variables; these can also be set by adding a .env file (a sample is sketched below the variable descriptions).
The only setting that is strictly required in your environment is the bot token, which also means you first need to create a bot yourself in order to get the token. I followed this guide for that: https://www.ionos.com/digitalguide/server/know-how/creating-discord-bot/
Comma-separated list of channels where the bot should announce itself every time it starts up. If not set, the bot doesn't announce itself at startup.
Comma-separated list of channels where the bot is allowed to answer. If not set, it will answer in any channel where it has access and is tagged.
The ollama model to use. Note that this model already needs to be pulled and that ollama needs to already be running on your system.
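As a quick illustration, a minimal .env could look like the sketch below. The variable names here are placeholders invented for this example, not necessarily the names the bot actually reads; check the bot's configuration code for the real ones.

```
# Hypothetical variable names, for illustration only.
DISCORD_BOT_TOKEN=your-bot-token-here
ANNOUNCE_CHANNELS=general,bot-announcements
ALLOWED_CHANNELS=general,ask-the-bot
OLLAMA_MODEL=llama2
```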
# Install the poetry package manager
pip install poetry
# Install the requirements
poetry install
# Run the tests
poetry run python -m pytest
# Run the bot
poetry run python discordbot
# Generate a code coverage report
poetry run python -m pytest --cov=discordbot tests --cov-report=html
- Built using the discord and langchain Python libraries.
- Uses a local large language model through ollama via langchain.
- Answers messages when it is tagged in one of the allowed channels.
- Posts to a channel when it comes online.
- Streaming output: consumes the streaming output of the LLM and uses the Discord edit API to update the message every 5 seconds (see the sketch after this list).
- Configurable via a .env file.
- Supports back-and-forth conversations via replies to messages.
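To illustrate the tag-and-stream behaviour described above, here is a minimal sketch using discord.py and the langchain_community Ollama wrapper. It is not the bot's actual implementation: the environment variable names, the fallback model, and the 2000-character truncation are assumptions made for this example.

```python
# Minimal sketch, not the bot's actual implementation: answer when tagged and
# stream the LLM output into a single Discord message via periodic edits.
import os
import time

import discord
from langchain_community.llms import Ollama

# OLLAMA_MODEL and DISCORD_BOT_TOKEN are hypothetical variable names.
llm = Ollama(model=os.getenv("OLLAMA_MODEL", "llama2"))

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)


@client.event
async def on_message(message: discord.Message):
    # Ignore our own messages and anything that doesn't tag the bot.
    if message.author == client.user or not client.user.mentioned_in(message):
        return

    reply = await message.channel.send("Thinking...")
    buffer, last_edit = "", time.monotonic()

    # Stream tokens from the local model and push intermediate results to
    # Discord by editing the same message every 5 seconds.
    async for chunk in llm.astream(message.clean_content):
        buffer += chunk
        if time.monotonic() - last_edit > 5:
            await reply.edit(content=buffer[:2000])  # Discord caps messages at 2000 characters
            last_edit = time.monotonic()

    await reply.edit(content=buffer[:2000] or "(no output)")


client.run(os.environ["DISCORD_BOT_TOKEN"])
```

The real bot additionally restricts answers to the configured channels and keeps conversations going through message replies, which this sketch leaves out.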
- Internet Browsing
- Index whole Discord channel contents and let the LLM search through them in its answers.
- Find content in the Discord channel: Hey @bot, find messages that talk about large language models.
- Impersonate Discord users: Hey @bot, what would @FrancisD say to this question: ...
- Index the whole Ko-Lab Wiki and allow answering questions about the wiki:
- E.g. Hey @bot, is there an event on the wiki next Friday? Or simpler: is there a project about 3D printing?
- Dockerize the package
- Deployment using Ansible
- Move to MLC-LLM in order to be able to run on an Orange Pi.
- Use Kubernetes for redundancy and rolling upgrades.
- Maybe: make this available as a bot that any server can integrate if they provide an OpenAI key.
- Maybe: provide an API; messages starting with a slash should trigger commands such as setting the temperature or the system prompt.