# LLMommy

A simple and extensible Telegram bot for chatting with Hugging Face and OpenAI LLMs via any OpenAI-compatible API endpoint.
- Quick setup for chatting with Hugging Face self-hosted models
- Quick setup for chatting with official OpenAI API models
- Features
- Commands
## Features

- Chatting with self-hosted Hugging Face models out of the box, served by vLLM
- Out-of-the-box self-hosted chat-history store: browse previous chats and read their messages
- Voice messaging with the LLM when using the official OpenAI API setup
- Configurable LLM context window size (the size of the LLM's memory, in bytes)
- Configurable GPU usage percentage
- Configurable chat length (number of messages per chat)
- Ability to continue previous chats
- Voice messaging with the LLM for any OpenAI-like API setup
- Responses without a rate limit when using the official OpenAI API setup
- Localization (en, ru)
## Commands

- `/new`
  - starts a new chat/history
  - loads all saved previous chats
## Quick setup for chatting with Hugging Face self-hosted models

Requirements (all OSs):

- Telegram bot Token
- Hugging Face Access Token
- NVIDIA GPU

Steps (Linux):
1. Clone the repository: `git clone https://github.com/PavelRubis/LLMommy`
2. Go to the repository directory: `cd ./LLMommy`
3. Open the `local-endpoint.env` file with your favourite editor
4. Paste your Telegram bot token into the `TELEGRAM_TOKEN` variable
5. Paste the allowed users' usernames into the `TELEGRAM_ALLOWED_USERS_USERNAMES` variable
6. Paste your Hugging Face Access token into the `HUGGING_FACE_HUB_TOKEN` variable
7. Enter the model name in the `AI_ASSISTANT_OPEN_AI_MODEL` variable, for example: `AI_ASSISTANT_OPEN_AI_MODEL="MTSAIR/Cotype-Nano"`
8. (Optional) Align the `GPU_USAGE` and `CONTEXT_WINDOW_LENGTH` variables with your needs and your GPU's VRAM
9. Save and rename the file: `local-endpoint.env` -> `.env`
10. Run docker compose to start: `sudo docker compose -f "docker-compose-local.yml" up -d --build`
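Putting the steps above together, a filled-in `local-endpoint.env` might look like the sketch below. All values are placeholders; the username separator and the exact value formats of `GPU_USAGE` and `CONTEXT_WINDOW_LENGTH` are assumptions, so check the comments in the shipped file:

```shell
# local-endpoint.env -- example values only; replace every placeholder
TELEGRAM_TOKEN="123456789:AAExampleTelegramBotToken"   # token from @BotFather
TELEGRAM_ALLOWED_USERS_USERNAMES="alice,bob"           # allowed users (separator format assumed)
HUGGING_FACE_HUB_TOKEN="hf_exampleAccessToken"         # Hugging Face Access Token
AI_ASSISTANT_OPEN_AI_MODEL="MTSAIR/Cotype-Nano"        # model served by vLLM
GPU_USAGE="0.9"                                        # optional; value format assumed
CONTEXT_WINDOW_LENGTH="8192"                           # optional; value format assumed
```

Remember to rename the file to `.env` before running the docker compose command.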
## Quick setup for chatting with official OpenAI API models

Steps (Linux):

1. Clone the repository: `git clone https://github.com/PavelRubis/LLMommy`
2. Go to the repository directory: `cd ./LLMommy`
3. Open the `remote-endpoint.env` file with your favourite editor
4. Paste your Telegram bot token into the `TELEGRAM_TOKEN` variable
5. Paste the allowed users' usernames into the `TELEGRAM_ALLOWED_USERS_USERNAMES` variable
6. Paste your OpenAI API key into the `AI_ASSISTANT_OPEN_AI_KEY` variable
7. Enter the OpenAI model name in the `AI_ASSISTANT_OPEN_AI_MODEL` variable
8. Save and rename the file: `remote-endpoint.env` -> `.env`
9. Run docker compose to start: `sudo docker compose -f "docker-compose-remote.yml" up -d --build`
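For the official OpenAI API setup, a filled-in `remote-endpoint.env` might look like this sketch. All values are placeholders, the username separator format is an assumption, and the model name is only an example of a valid OpenAI model identifier:

```shell
# remote-endpoint.env -- example values only; replace every placeholder
TELEGRAM_TOKEN="123456789:AAExampleTelegramBotToken"   # token from @BotFather
TELEGRAM_ALLOWED_USERS_USERNAMES="alice,bob"           # allowed users (separator format assumed)
AI_ASSISTANT_OPEN_AI_KEY="sk-exampleOpenAIKey"         # your OpenAI API key
AI_ASSISTANT_OPEN_AI_MODEL="gpt-4o-mini"               # any official OpenAI model name
```

As in the self-hosted setup, rename the file to `.env` before starting the containers.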