This is a LlamaIndex multi-agent project using Workflows.

## Overview

This example uses three agents to generate a blog post:

- a researcher that retrieves content via a RAG pipeline,
- a writer that specializes in writing blog posts, and
- a reviewer that reviews the blog post.

There are three different ways the agents can interact to reach their goal:

1. Choreography - the agents themselves decide when to delegate a task to another agent
2. Orchestrator - a central orchestrator decides which agent should execute a task
3. Explicit Workflow - a pre-defined, task-specific workflow is used to execute the tasks (see the sketch below)
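To make the explicit-workflow idea concrete, here is a minimal, hedged sketch using the LlamaIndex Workflows API. The event classes and the placeholder research/write/review logic are illustrative assumptions, not this project's actual implementation:

```python
# Minimal sketch of the "explicit workflow" pattern with LlamaIndex Workflows.
# The event classes and the placeholder step bodies are illustrative only;
# they are not the code used by this example project.
import asyncio

from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class ResearchDone(Event):
    notes: str


class DraftDone(Event):
    draft: str


class BlogPostFlow(Workflow):
    @step
    async def research(self, ev: StartEvent) -> ResearchDone:
        # A real researcher agent would query the RAG pipeline here.
        return ResearchDone(notes=f"Notes about {ev.topic}")

    @step
    async def write(self, ev: ResearchDone) -> DraftDone:
        # A real writer agent would call an LLM with the research notes.
        return DraftDone(draft=f"Draft based on: {ev.notes}")

    @step
    async def review(self, ev: DraftDone) -> StopEvent:
        # A real reviewer agent would critique the draft and could loop back.
        return StopEvent(result=f"Reviewed: {ev.draft}")


async def main() -> None:
    result = await BlogPostFlow(timeout=60).run(topic="Multi-agent systems")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

In the choreography and orchestrator variants, the hand-off between the same three agents is decided at runtime rather than being fixed in a step graph like the one above.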

## Getting Started

First, set up the environment with Poetry:

Note: This step is not needed if you are using the dev-container.

poetry install

Then check the parameters that have been pre-configured in the `.env` file in this directory. For example, you might need to set an `OPENAI_API_KEY` if you're using OpenAI as the model provider.
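For instance, when using OpenAI, the `.env` file would contain a line like the following (the value is a placeholder):

`OPENAI_API_KEY=<your-openai-api-key>`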

Second, generate the embeddings of the documents in the `./data` directory:

poetry run generate
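Conceptually, this step ingests the files in `./data`, embeds them, and persists a vector index. A rough sketch of that kind of ingestion step is shown below; the `./storage` persist directory is an assumption for illustration and may not match what this project's generate script actually does:

```python
# Rough sketch of a typical embedding-generation step with LlamaIndex.
# The "./storage" persist directory is an assumption for illustration;
# this project's generate script may store the index elsewhere.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")
```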

Third, run the agents in one command:

poetry run python main.py

By default, the example uses the explicit workflow. You can change this by setting the `EXAMPLE_TYPE` environment variable to `choreography` or `orchestrator`.

To add an API endpoint, set the `FAST_API` environment variable to `true`.
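For example, assuming a standard shell, these variables can be set inline when running the example (they can also be placed in the `.env` file):

`EXAMPLE_TYPE=orchestrator poetry run python main.py`

`FAST_API=true poetry run python main.py`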

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- The LlamaIndex GitHub repository - your feedback and contributions are welcome!