Merge pull request chenfei-wu#371 from dawnmsg/main
update low-code LLM codes and project descriptions
chenfei-wu authored Apr 23, 2023
2 parents c1acc25 + 8aefe16 commit d39909e
Showing 19 changed files with 985 additions and 10 deletions.
15 changes: 15 additions & 0 deletions LowCodeLLM/Dockerfile
@@ -0,0 +1,15 @@
FROM ubuntu:22.04

RUN apt-get -y update
RUN apt-get install -y git python3.11 python3-pip supervisor
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN ln -s /usr/bin/python3 /usr/bin/python
COPY src/requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY src /app/src

WORKDIR /app/src
ENV WORKERS 2
CMD supervisord -c supervisord.conf
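The `CMD` above relies on a `supervisord.conf` shipped alongside the sources, which is not shown in this excerpt of the diff. A minimal sketch of what such a config might contain — assuming gunicorn serves `app:app` (per `src/app.py` below), binds to the port 8888 mapped in the Quick Start, and honors the `WORKERS` variable set above; all of these are assumptions, not the repository's actual config:

```
[supervisord]
nodaemon=true

[program:lowcode-llm]
; Expand the WORKERS env var from the Dockerfile into gunicorn's worker count.
command=gunicorn -w %(ENV_WORKERS)s -b 0.0.0.0:8888 app:app
directory=/app/src
autorestart=true
```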
54 changes: 44 additions & 10 deletions LowCodeLLM/README.md
@@ -2,17 +2,51 @@

**Low-code LLM** is a novel human-LLM interaction pattern that involves humans in the loop to achieve more controllable and stable responses.

As shown in the following figure:
- A Planning LLM generates a workflow for complex tasks. The workflow is highly structured and supports easy editing by dragging and dropping.
- Users can edit the workflow with six pre-defined low-code operations, all supported by clicking, dragging, or text editing.
- The reviewed workflow then guides the Executing LLM to generate responses.
- Users can continue refining the workflow until satisfactory results are obtained.
See our paper: [Low-code LLM: Visual Programming over LLMs](https://arxiv.org/abs/2304.08103)

<img src="https://github.com/microsoft/visual-chatgpt/blob/main/assets/low-code-llm.png" width="1000">
In the future, [TaskMatrix.AI](https://arxiv.org/abs/2303.16434) can enhance task automation by breaking down tasks more effectively and utilizing existing foundation models and the APIs of other AI models and systems to accomplish diversified tasks in both digital and physical domains. The low-code human-LLM interaction pattern can also enhance users' experience of controlling the process and expressing their preferences.

## Paper
Low Code LLM [Coming Soon]
## Video Demo

## Codes and System
In progress, coming in the very near future...
https://github.com/microsoft/TaskMatrix/blob/main/assets/low-code-demovideo.mp4

(This is a conceptual demo video demonstrating the complete process.)

## Quick Start
Please note that due to time constraints, the code we provide is only a minimum viable version of the low-code LLM interactive system; it demonstrates only the core concept of low-code human-LLM interaction. We welcome anyone who is interested in improving our front-end interface.
```
# clone the repo
git clone https://github.com/microsoft/TaskMatrix.git
# go to the directory
cd TaskMatrix/LowCodeLLM
# build and run the Docker image
docker build -t lowcode:latest .
docker run -p 8888:8888 --env OPENAIKEY={Your_Private_Openai_Key} lowcode:latest
# open the webpage (./src/index.html)
```
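Once the container is up, a quick smoke test can confirm the service responds. The sketch below is hedged: the endpoint and field name are taken from `src/app.py` in this diff, and the port assumes the `-p 8888:8888` mapping above.

```python
# Minimal smoke test for the Planning LLM endpoint; assumes the service
# listens on the mapped port 8888.
import requests

resp = requests.post(
    "http://localhost:8888/api/get_workflow",
    json={"task_prompt": "Write a short story about a robot learning to paint."},
)
resp.raise_for_status()
print(resp.text)  # the structured workflow, ready for editing in ./src/index.html
```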

## System Overview

<img src="https://github.com/microsoft/TaskMatrix/blob/main/assets/low-code-llm.png" alt="overview" width="800"/>

As shown in the figure above, the human-LLM interaction is completed by:
- A Planning LLM that generates a highly structured workflow for complex tasks (a sketch of the workflow format follows this list).
- Users editing the workflow with pre-defined low-code operations, all supported by clicking, dragging, or text editing.
- An Executing LLM that generates responses following the reviewed workflow.
- Users continuing to refine the workflow until satisfactory results are obtained.
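For reference, the reviewed workflow is rendered for the Executing LLM as a standard operating procedure (SOP) in the format defined in `src/executingLLM.py`; the two steps below are purely illustrative:

```
STEP 1: [gather requirements][ask the user for the essay topic and target length][[[if 'topic is unclear'][Jump to STEP 1]], [[if 'topic is clear'][Jump to STEP 2]]]
STEP 2: [write essay][write an essay that satisfies the confirmed requirements][]
```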

## Six Kinds of Pre-defined Low-code Operations
<img src="https://github.com/microsoft/TaskMatrix/blob/main/assets/low-code-operation.png" alt="operations" width="800"/>

## Advantages

- **Controllable Generation.** Complicated tasks are decomposed into structured execution plans and presented to users as workflows. Users can steer the LLMs' execution through low-code operations to achieve more controllable responses. Responses generated by following the customized workflow are better aligned with the user's requirements.
- **Friendly Interaction.** The intuitive workflow enables users to swiftly comprehend the LLMs' execution logic, and the low-code operations offered through a graphical user interface let users conveniently modify the workflow. In this way, time-consuming prompt engineering is mitigated, allowing users to efficiently turn their ideas into detailed instructions and achieve high-quality results.
- **Wide Applicability.** The proposed framework can be applied to a wide range of complex tasks across various domains, especially in situations where human intelligence or preferences are indispensable.


## Acknowledgement
Part of this paper has been collaboratively crafted through interactions with the proposed Low-code LLM. The process began with GPT-4 outlining the framework, followed by the authors supplementing it with innovative ideas and refining the structure of the workflow. Ultimately, GPT-4 took charge of generating cohesive and compelling text.
59 changes: 59 additions & 0 deletions LowCodeLLM/src/app.py
@@ -0,0 +1,59 @@
import os
import logging

from flask import Flask, request
from flask.logging import default_handler
from flask_cors import CORS, cross_origin
from lowCodeLLM import lowCodeLLM

app = Flask('lowcode-llm', static_url_path='', template_folder='')
app.debug = True
llm = lowCodeLLM()

# Route application logs through gunicorn's error logger so they surface
# when the app runs under supervisord.
gunicorn_logger = logging.getLogger('gunicorn.error')
app.logger = gunicorn_logger
logging_format = logging.Formatter(
    '%(asctime)s - %(levelname)s - %(filename)s - %(funcName)s - %(lineno)s - %(message)s')
default_handler.setFormatter(logging_format)

@app.route('/api/get_workflow', methods=['POST'])
@cross_origin()
def get_workflow():
    """Generate a structured workflow from a task prompt."""
    try:
        request_content = request.get_json()
        task_prompt = request_content['task_prompt']
        workflow = llm.get_workflow(task_prompt)
        return workflow, 200
    except Exception as e:
        app.logger.error(
            'failed to get_workflow, msg:%s, request data:%s' % (str(e), request.json))
        return {'errmsg': str(e)}, 500

@app.route('/api/extend_workflow', methods=['POST'])
@cross_origin()
def extend_workflow():
    """Expand one step of an existing workflow into a sub-workflow."""
    try:
        request_content = request.get_json()
        task_prompt = request_content['task_prompt']
        current_workflow = request_content['current_workflow']
        step = request_content['step']
        sub_workflow = llm.extend_workflow(task_prompt, current_workflow, step)
        return sub_workflow, 200
    except Exception as e:
        app.logger.error(
            'failed to extend_workflow, msg:%s, request data:%s' % (str(e), request.json))
        return {'errmsg': str(e)}, 500

@app.route('/api/execute', methods=['POST'])
@cross_origin()
def execute():
    """Run the Executing LLM against a user-confirmed workflow."""
    try:
        request_content = request.get_json()
        task_prompt = request_content['task_prompt']
        confirmed_workflow = request_content['confirmed_workflow']
        curr_input = request_content['curr_input']
        history = request_content['history']
        response = llm.execute(task_prompt, confirmed_workflow, history, curr_input)
        return response, 200
    except Exception as e:
        app.logger.error(
            'failed to execute, msg:%s, request data:%s' % (str(e), request.json))
        return {'errmsg': str(e)}, 500
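A hedged sketch of a full round trip against these handlers: the endpoints and JSON field names come from the code above, while the base URL, the payload contents, and the format of the `step` value (handled by `lowCodeLLM.extend_workflow`, not shown in this diff) are illustrative assumptions.

```python
import requests

BASE = "http://localhost:8888/api"  # assumes the Quick Start port mapping
task = "Write a product announcement email."

# 1. Ask the Planning LLM for a workflow.
workflow = requests.post(f"{BASE}/get_workflow", json={"task_prompt": task}).text

# 2. Optionally expand one step into a sub-workflow. The exact format of
#    'step' depends on lowCodeLLM.extend_workflow, which is not shown here.
sub_workflow = requests.post(
    f"{BASE}/extend_workflow",
    json={"task_prompt": task, "current_workflow": workflow, "step": "STEP 1"},
).text

# 3. Execute with the (normally user-edited) workflow and an empty history.
answer = requests.post(
    f"{BASE}/execute",
    json={
        "task_prompt": task,
        "confirmed_workflow": workflow,
        "curr_input": "The product is a solar-powered phone charger.",
        "history": [],
    },
).text
print(answer)
```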
41 changes: 41 additions & 0 deletions LowCodeLLM/src/executingLLM.py
@@ -0,0 +1,41 @@
from openAIWrapper import OpenAIWrapper

EXECUTING_LLM_PREFIX = """Executing LLM is designed to provide outstanding responses.
Executing LLM will be given an overall task as the background of the conversation between the Executing LLM and human.
When providing a response, Executing LLM MUST STRICTLY follow the provided standard operating procedure (SOP).
The SOP is formatted as:
'''
STEP 1: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
STEP 2: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
'''
Here "[[[if 'condition1'][Jump to STEP n]], [[if 'condition2'][Jump to STEP m]], ...]" is judgmental logic. It means that while you are performing this step,
if 'condition1' is satisfied, you will perform STEP n next; if 'condition2' is satisfied, you will perform STEP m next.
Remember:
Executing LLM is facing a real human, who does not know what an SOP is.
So, do not show him/her the SOP steps you are following, or the process and intermediate results of performing the SOP. It would confuse him/her. Just respond with the answer.
"""

EXECUTING_LLM_SUFFIX = """
Remember:
Executing LLM is facing a real human, who does not know what an SOP is.
So, do not show him/her the SOP steps you are following, or the process and intermediate results of performing the SOP. It would confuse him/her. Just respond with the answer.
"""

class executingLLM:
    def __init__(self, temperature) -> None:
        self.prefix = EXECUTING_LLM_PREFIX
        self.suffix = EXECUTING_LLM_SUFFIX
        self.LLM = OpenAIWrapper(temperature)
        # Seed the conversation with the system prompts that pin the SOP-following behavior.
        self.messages = [{"role": "system", "content": "You are a helpful assistant."},
                         {"role": "system", "content": self.prefix}]

    def execute(self, current_prompt, history):
        '''Provide the LLM with the dialogue history and the current prompt to get a response.'''
        messages = self.messages + history
        messages.append({'role': 'user', "content": current_prompt + self.suffix})
        response, status = self.LLM.run(messages)
        if status:
            return response
        else:
            return "OpenAI API error."