simple-chat

A simple, transparent, and extensible web client to chat with your LLM

🤔 Why?

There are lots of ever more powerful open LLMs (HF hub), wonderful frameworks for building (training or merging) your own models (trl, axolotl), and reliable, efficient solutions for serving them (vllm, tgi). But, or so I find, there are relatively few simple, local, open-source chat clients that work well with custom (self-hosted) LLMs and let you use your models in a straightforward way. (See, however, the chat clients listed under "Also cool" below.)

🎉 What?

simple-chat is a minimalistic chat client that runs locally (in your browser) and connects to a local or remote LLM. It works out of the box, but can also be used as a boilerplate to build more sophisticated agents.
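
To give a sense of what such a boilerplate can look like, here is a minimal sketch of a Chainlit app that forwards chat messages to an OpenAI-compatible inference server (as exposed, e.g., by vllm or tgi). This is an illustration only, not the repository's actual src/simple_chat/app.py; the model id and the environment-variable handling are assumptions.

    import os

    import chainlit as cl
    from openai import AsyncOpenAI

    # Point an OpenAI-compatible client at your inference server
    # (illustrative example: BASE_URL="http://localhost:8000/v1" for a local vllm server).
    client = AsyncOpenAI(base_url=os.environ["BASE_URL"], api_key="not-needed")

    @cl.on_message
    async def on_message(message: cl.Message):
        # Forward the user's message to the chat completions endpoint ...
        response = await client.chat.completions.create(
            model="my-model",  # hypothetical model id served by your backend
            messages=[{"role": "user", "content": message.content}],
        )
        # ... and display the model's reply in the chat UI.
        await cl.Message(content=response.choices[0].message.content).send()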

🔢 How?

  1. Clone the repository.

    git clone https://github.com/debatelab/simple-chat.git
    cd simple-chat
  2. Set the base URL. Create a text file named .env (e.g. with a text editor) that contains the following line:

    BASE_URL="<insert-your-inference-server-url-here>"
    
  3. Install poetry (a Python package manager) and its dotenv plugin (see the example commands after this list).

  4. Install the dependencies.

    poetry install
  5. Run the app.

    poetry run chainlit run src/simple_chat/app.py
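
For step 3, the commands below are one common way to install poetry and a dotenv plugin. They assume pipx is available and that poetry-dotenv-plugin is the plugin expected; check poetry's documentation and this repository if your setup differs.

    pipx install poetry
    poetry self add poetry-dotenv-plugin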

🙏 Built with

  • chainlit: the open-source Python framework used to build and serve the chat UI

😎 Also cool

  • Sanctum AI: privacy-focused local chat client
  • jan.ai: open-source chat client to interact with local and remote LLMs
