Auto-Llama-cpp: An Autonomous Llama Experiment 🦙

...README content...

Memory/Disk Requirements 💾

As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same.

| Model | Original Size | Quantized Size (4-bit) |
|-------|---------------|------------------------|
| 7B    | 13 GB         | 3.9 GB                 |
| 13B   | 24 GB         | 7.8 GB                 |
| 30B   | 60 GB         | 19.5 GB                |
| 65B   | 120 GB        | 38.5 GB                |
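As a rough sanity check on the table above, model size is approximately the parameter count times the bits stored per weight. The figures below are a sketch under two assumptions (not taken from this repo): original weights at ~16 bits each, and ggml 4-bit quantization costing roughly 4.5 bits per weight once block scaling factors are included.

```python
def approx_model_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough size estimate in GB: parameters * bits per weight / 8 bits per byte."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# ~16 bits/weight for fp16 originals and ~4.5 bits/weight for 4-bit ggml
# quantization are assumptions for illustration, not exact format constants.
for n in (7, 13, 30, 65):
    print(f"{n}B: fp16 ~{approx_model_size_gb(n, 16):.1f} GB, "
          f"4-bit ~{approx_model_size_gb(n, 4.5):.1f} GB")
```

The 4-bit estimates land close to the table (e.g. ~3.9 GB for 7B), which is why a quantized model needs roughly a quarter to a third of the RAM of the original.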

Quick Tutorial 🚀

  1. Clone the repository and install the dependencies:

     ```shell
     git clone https://github.com/rhohndorf/Auto-Llama-cpp.git
     cd Auto-Llama-cpp
     pip install -r requirements.txt
     ```
  2. Download ggml-vicuna-13b-4bit.bin (~8 GB) and place it in a models folder. For testing, a `models/13B` folder in the root directory of this project works; if you are trying out multiple LLMs, you will likely keep them all in a shared folder somewhere else.
  3. Rename the "env.template" file to ".env" and change any environment variables you need (such as the path to the model you just downloaded).
  4. Run this command to start:

     ```shell
     python scripts/main.py
     ```
  5. Once it is running and you can see what it does, try changing ai_settings.yml and scripts/data/prompt.txt to change how the AI behaves.
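Step 3 above produces a `.env` file pointing at your local model. A minimal sketch of what that file might look like follows; the variable name below is an illustrative assumption, so copy the actual keys from `env.template` rather than these:

```ini
# Illustrative only -- the real variable names live in env.template.
# Path to the downloaded ggml model file:
MODEL_PATH=./models/13B/ggml-vicuna-13b-4bit.bin
```

Keeping the model path in `.env` rather than in code means you can swap in a different quantized model without touching the scripts.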
### Acknowledgements and Credits 👏

This project is built on top of several open-source projects, and we would like to acknowledge and thank the original developers for their contributions to the AI and open-source community.

Original Projects:

  1. Auto-GPT: This project is a fork of Auto-GPT, an attempt to build an AI agent using GPT models that can interact with the user and perform tasks.

  2. llama.cpp: The project uses llama.cpp under the hood for running the llama models locally. llama.cpp is an open-source project developed by Georgi Gerganov.

  3. Open Assistant: Auto-Llama-cpp plans to add support for models from Open Assistant, a project aiming to create an open-source AI assistant based on the GPT-3 architecture.

Impressive Idea:

The creator of the original Auto-Llama-cpp repository, rhohndorf, came up with an impressive idea to integrate AutoGPT with open-source large language models, allowing developers to run these powerful models locally and experiment with various AI agents. This project demonstrates the potential of open-source AI development and encourages further exploration of AI applications.

Contributors:

  • rhohndorf: The creator of the Auto-Llama-cpp project and the main contributor.
  • Georgi Gerganov: Developer of llama.cpp, which is used in this project for running the models locally.
  • LAION-AI: The team behind the Open Assistant project.

We encourage everyone to contribute to these projects and help improve the open-source AI ecosystem. If you have any suggestions, issues, or would like to contribute, feel free to submit a pull request or open an issue.

About

Uses Auto-GPT with Llama.cpp
