Probably one of the lightest native RAG + Agent apps out there: experience the power of Agent-powered models and Agent-driven knowledge bases in one click, without complex configuration.
Chat and Agent interactions:
- 💭 Simple, easy-to-use chat box interface.
- 🌏️ Language options (Simplified Chinese, English)
- 🔧 Inference support for multiple (local) model sources (Azure OpenAI, Groq, Ollama, Llamafile)
- Native Function Call (OpenAI, Azure OpenAI, OpenAI Like, Ollama)
- 🤖 Multiple Agent modes on-premises
- 🖥️ Local storage of dialog data and management
- Multiple export formats (Markdown, HTML)
- Multiple themes (HTML)
Knowledge base:
- Native implementation of Retrieval-Augmented Generation (RAG), lightweight and efficient
- Optional embedding models (Hugging Face/OpenAI)
- Easy-to-use knowledge base management
- Multiple search methods available: Hybrid search, reranking, and specified file retrieval
If you like this project, please give it a star; it's the biggest encouragement for me!
Export supports format selection, theme selection, and export-range control:
![Export settings and preview](https://private-user-images.githubusercontent.com/107250451/405897808-85756a3c-7ca2-4fcf-becc-682f22091c4e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk3ODA4LTg1NzU2YTNjLTdjYTItNGZjZi1iZWNjLTY4MmYyMjA5MWM0ZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hYWUxNmVmNTVjZGU5NzM5ZDYyZTQyNDQwMzdmMzYyZGVlZWQ3YTY1Mjg4Yzc4MDRkNDI5NjA0NWJmMzBlYjRjJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.eYhiBZdRXH-PvB6wiINgJWl_iLEs4THlKZ7VrcBYMRI)
Currently supported themes:
| Default | Glassmorphism |
|---|---|
| ![]() | ![]() |
![image](https://private-user-images.githubusercontent.com/107250451/405898161-bc574d1e-e614-4310-ad00-746c5646963a.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk4MTYxLWJjNTc0ZDFlLWU2MTQtNDMxMC1hZDAwLTc0NmM1NjQ2OTYzYS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0xZDdkMTk2YjM5NmNjY2RmZTY0MDdlYWRhNjBiZjZmODdmZmJjNTY0MGQ5ZGVkMDcwZDIzY2E4OWMwNWYyOWEyJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.PZNFsa4PbZ7CBTjIvOcB9Q6uVPZlVGONDvHyszBTzhM)
You can set up the model (sidebar) and view detailed references:
![image](https://private-user-images.githubusercontent.com/107250451/405898544-a6ce3f0b-3c8f-4e3d-8d34-bceb834da81e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk4NTQ0LWE2Y2UzZjBiLTNjOGYtNGUzZC04ZDM0LWJjZWI4MzRkYTgxZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jNGEzZjI1MzI3N2NlZTFiNWNiYzQ1ZmIyZDJhMzExYThhMzg0MzU5NTQyM2EzZjk5ZmJhZTlhNGNhZDE1OGM5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.r9sdrQS1WPeX-cDTio1fysY7WFJOK0p4V4LLT13K9cs)
Configure RAG:
![image](https://private-user-images.githubusercontent.com/107250451/405898633-82480174-bac1-47d4-b5f4-9725774618f2.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk4NjMzLTgyNDgwMTc0LWJhYzEtNDdkNC1iNWY0LTk3MjU3NzQ2MThmMi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT05NzZiZjhiOTE1YjI3MjIzZGE1MTdlNjBhZmNjMmJmZmM4NjY0ZjY5NmZlMTg3M2Y0OWQ1OWRjOTNkZDY1ZWY3JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.ZxInNoH9FwePkv0g-thZqYIKhu1d1oM2jN-gX_LXAs4)
Function calls are currently supported on the Chat page; support on the AgentChat page is planned.
The function calls on this page are implemented natively and work with all OpenAI-compatible models, but the model itself must support function calling.
Function calling can significantly enhance an LLM's capabilities, allowing it to complete tasks it previously could not (such as mathematical calculations), as shown below:
![image](https://private-user-images.githubusercontent.com/107250451/405898843-fba30f4a-dbfc-47d0-9f1c-4443171fa018.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk4ODQzLWZiYTMwZjRhLWRiZmMtNDdkMC05ZjFjLTQ0NDMxNzFmYTAxOC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mYjNjZjBlNGQzMzM0ODE3NjM1NzQxOTczMTIyYjJhMDc2ZGU4MzI4N2Y4OTcwZWZiMzE4MjBmOWJiZDg2MjY3JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.sUp3pSowyybwatLPQhP7ewtm0z-crdxI0hedFpvty_Q)
Or summarize the content of a webpage:
![image](https://private-user-images.githubusercontent.com/107250451/405898942-7da5ae4d-40d5-49b4-9e76-6ce2a39ac6d1.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNjc5NjcsIm5iZiI6MTczOTM2NzY2NywicGF0aCI6Ii8xMDcyNTA0NTEvNDA1ODk4OTQyLTdkYTVhZTRkLTQwZDUtNDliNC05ZTc2LTZjZTJhMzlhYzZkMS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjEyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMlQxMzQxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0zMGZjZDJkOWZiNmVmYWFiNTg5YmRiZGM4NTU1Y2E4NTk5ODM1NzRmMTE0MDFlNzA2YjNkYzQ1ZmY3OTIzYjk5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.r76ZDPD6UQCfs5dZ50n10d24kFUpoDiIqozwhUp7Jaw)
You can also define custom functions for the model to call; refer to toolkits.py for the writing rules.
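As a rough sketch of what such a custom tool can look like (the actual registration conventions are defined in toolkits.py; the function name and schema below are illustrative examples, not RAGENT's real API), an OpenAI-compatible function call pairs a plain Python function with a JSON schema that describes it to the model:

```python
# Hypothetical custom tool for OpenAI-compatible function calling.
# NOTE: the real writing rules live in toolkits.py; this function
# and schema are illustrative only.

def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in `text`."""
    return len(text.split())

# Schema the model sees: it decides when to call the tool and with
# which arguments, and the app then runs the matching Python function
# and feeds the result back into the conversation.
WORD_COUNT_TOOL = {
    "type": "function",
    "function": {
        "name": "word_count",
        "description": "Count the words in a piece of text.",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "Input text."}
            },
            "required": ["text"],
        },
    },
}
```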
- Use `git clone https://github.com/Wannabeasmartguy/RAGENT.git` to pull the code. Then open your runtime environment in a command prompt (CMD) and run `pip install -r requirements.txt` to install the runtime dependencies.
- Configure the model dependencies: rename the `.env_sample` file to `.env` and fill in the following:
  - `LANGUAGE`: supports `English` and `简体中文` (Simplified Chinese); defaults to `English` if not set;
  - `OPENAI_API_KEY`: if you are using an OpenAI model, fill in the API key here;
  - `AZURE_OAI_KEY`: if you are using an Azure OpenAI model, fill in the API key here;
  - `AZURE_OAI_ENDPOINT`: if you are using an Azure OpenAI model, fill in the endpoint here;
  - `API_VERSION`: if you are using an Azure OpenAI model, fill in the API version here;
  - `API_TYPE`: if you are using an Azure OpenAI model, fill in the API type here;
  - `GROQ_API_KEY`: if you are using Groq as the model source, fill in the API key here;
  - `COZE_ACCESS_TOKEN`: if you need to use a Coze Bot you have created, fill in the access token here.

  If you are using Llamafile, set the endpoint within the application after starting the Llamafile model.
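For example, a filled-in `.env` for an Azure OpenAI setup might look like this (all values below are placeholders, not working credentials):

```shell
# Placeholder values; replace them with your own credentials.
LANGUAGE=English
AZURE_OAI_KEY=your-azure-openai-key
AZURE_OAI_ENDPOINT=https://your-resource.openai.azure.com/
API_VERSION=your-api-version
API_TYPE=azure
```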
- Launch the application: run `streamlit run RAGENT.py` from the command line. If you want to use the AgentChat page, start the application with `python startup.py` instead of `streamlit run RAGENT.py`.
If you encounter any issues during use, or have new ideas, please submit issues and PRs!