diff --git a/LowCodeLLM/Dockerfile b/LowCodeLLM/Dockerfile
new file mode 100644
index 00000000..6633a00a
--- /dev/null
+++ b/LowCodeLLM/Dockerfile
@@ -0,0 +1,15 @@
+FROM ubuntu:22.04
+
+RUN apt-get -y update && apt-get install -y git python3.11 python3-pip supervisor
+RUN pip3 install --upgrade pip setuptools
+RUN ln -s /usr/bin/python3 /usr/bin/python
+COPY src/requirements.txt requirements.txt
+RUN pip3 install -r requirements.txt
+
+COPY src /app/src
+
+WORKDIR /app/src
+ENV WORKERS 2
+CMD supervisord -c supervisord.conf
diff --git a/LowCodeLLM/README.md b/LowCodeLLM/README.md
index e2c87b57..29271ef0 100644
--- a/LowCodeLLM/README.md
+++ b/LowCodeLLM/README.md
@@ -2,17 +2,51 @@
**Low-code LLM** is a novel human-LLM interaction pattern that keeps humans in the loop to achieve more controllable and stable responses.
-As shown in the following figure:
-- A Planning LLM generates a workflow for the complex tasks. The workflow is highly structured and support users to easily edit it with dragging and dropping.
-- Users can edit the workflow in six pre-defined low-code operations, which are all supported by clicking, dragging or text editing.
-- The reviewed workflow will guide the Implement LLM to generate responses.
-- Users can continue refining the workflow until getting a satisfactory results.
+See our paper: [Low-code LLM: Visual Programming over LLMs](https://arxiv.org/abs/2304.08103)
-
+In the future, [TaskMatrix.AI](https://arxiv.org/abs/2303.16434) can enhance task automation by breaking tasks down more effectively and by utilizing existing foundation models and the APIs of other AI models and systems to accomplish diversified tasks in both digital and physical domains. The low-code human-LLM interaction pattern can meanwhile improve users' experience of controlling the process and expressing their preferences.
-## Paper
-Low Code LLM [Coming Soon]
+## Video Demo
-## Codes and System
-In progress, coming in a very near future...
+https://github.com/microsoft/TaskMatrix/blob/main/assets/low-code-demovideo.mp4
+(This is a conceptual demo video that demonstrates the complete process.)
+
+## Quick Start
+Please note that, due to time constraints, the provided code is only a minimum viable version of the Low-code LLM interactive system, i.e., it demonstrates only the core concept of the Low-code LLM human-LLM interaction pattern. We welcome anyone interested in improving our front-end interface.
+```
+# clone the repo
+git clone https://github.com/microsoft/TaskMatrix.git
+
+# go to the directory
+cd LowCodeLLM
+
+# build and run docker
+docker build -t lowcode:latest .
+docker run -p 8888:8888 --env OPENAIKEY={Your_Private_Openai_Key} lowcode:latest
+
+# open the webpage (./src/index.html) in your browser
+```
+
+## System Overview
+
+![](../assets/low-code-llm.png)
+
+As shown in the figure above, the human-LLM interaction proceeds as follows:
+- A Planning LLM that generates a highly structured workflow for complex tasks.
+- Users editing the workflow with predefined low-code operations, which are all supported by clicking, dragging, or text editing.
+- An Executing LLM that generates responses with the reviewed workflow.
+- Users continuing to refine the workflow until satisfactory results are obtained.
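+
+For reference, the workflow passed between the front end and the two LLMs is a JSON list of steps. A representative step is sketched below using the field names from `src/lowCodeLLM.py`; the concrete values are illustrative:
+
+```python
+# one step of a generated workflow; "jumpLogic" encodes the conditional jumps
+step = {
+    "stepId": "STEP 2",
+    "stepName": "Research",
+    "stepDescription": "Gather information and organize it into the outline",
+    "jumpLogic": [{"Condition": "if 'lack of ideas'", "Target": "STEP 1"}],
+    "extension": [],  # filled with sub-steps when the user extends this step
+}
+```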
+
+## Six Kinds of Pre-defined Low-code Operations
+![](../assets/low-code-operation.png)
+
+## Advantages
+
+- **Controllable Generation.** Complicated tasks are decomposed into structured plans and presented to users as workflows. Users can control the LLMs' execution through low-code operations to obtain more controllable responses. Responses generated by following the customized workflow are better aligned with the user's requirements.
+- **Friendly Interaction.** The intuitive workflow enables users to swiftly comprehend the LLMs' execution logic, and the low-code operation through a graphical user interface empowers users to conveniently modify the workflow in a user-friendly manner. In this way, time-consuming prompt engineering is mitigated, allowing users to efficiently implement their ideas into detailed instructions to achieve high-quality results.
+- **Wide Applicability.** The proposed framework can be applied to a wide range of complex tasks across various domains, especially in situations where human intelligence or preferences are indispensable.
+
+
+## Acknowledgement
+Part of this paper has been collaboratively crafted through interactions with the proposed Low-code LLM. The process began with GPT-4 outlining the framework, followed by the authors supplementing it with innovative ideas and refining the structure of the workflow. Ultimately, GPT-4 took charge of generating cohesive and compelling text.
diff --git a/LowCodeLLM/src/app.py b/LowCodeLLM/src/app.py
new file mode 100644
index 00000000..bfba4675
--- /dev/null
+++ b/LowCodeLLM/src/app.py
@@ -0,0 +1,59 @@
+import os
+from flask import Flask, request
+from flask_cors import CORS, cross_origin
+from lowCodeLLM import lowCodeLLM
+from flask.logging import default_handler
+import logging
+
+app = Flask('lowcode-llm', static_url_path='', template_folder='')
+app.debug = True
+llm = lowCodeLLM()
+gunicorn_logger = logging.getLogger('gunicorn.error')
+app.logger = gunicorn_logger
+logging_format = logging.Formatter(
+    '%(asctime)s - %(levelname)s - %(filename)s - %(funcName)s - %(lineno)s - %(message)s')
+default_handler.setFormatter(logging_format)
+
+@app.route('/api/get_workflow', methods=['POST'])
+@cross_origin()
+def get_workflow():
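+    # expects a JSON body: {"task_prompt": "<description of the overall task>"}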
+    try:
+        request_content = request.get_json()
+        task_prompt = request_content['task_prompt']
+        workflow = llm.get_workflow(task_prompt)
+        return workflow, 200
+    except Exception as e:
+        app.logger.error(
+            'failed to get_workflow, msg:%s, request data:%s' % (str(e), request.json))
+        return {'errmsg': str(e)}, 500
+
+@app.route('/api/extend_workflow', methods=['POST'])
+@cross_origin()
+def extend_workflow():
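+    # expects a JSON body: {"task_prompt": ..., "current_workflow": <workflow JSON string>, "step": <step id, e.g. "STEP 2">}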
+    try:
+        request_content = request.get_json()
+        task_prompt = request_content['task_prompt']
+        current_workflow = request_content['current_workflow']
+        step = request_content['step']
+        sub_workflow = llm.extend_workflow(task_prompt, current_workflow, step)
+        return sub_workflow, 200
+    except Exception as e:
+        app.logger.error(
+            'failed to extend_workflow, msg:%s, request data:%s' % (str(e), request.json))
+        return {'errmsg': str(e)}, 500
+
+@app.route('/api/execute', methods=['POST'])
+@cross_origin()
+def execute():
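+    # expects a JSON body: {"task_prompt": ..., "confirmed_workflow": <workflow JSON string>, "curr_input": <latest user message>, "history": <prior chat messages>}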
+    try:
+        request_content = request.get_json()
+        task_prompt = request_content['task_prompt']
+        confirmed_workflow = request_content['confirmed_workflow']
+        curr_input = request_content['curr_input']
+        history = request_content['history']
+        response = llm.execute(task_prompt, confirmed_workflow, history, curr_input)
+        return response, 200
+    except Exception as e:
+        app.logger.error(
+            'failed to execute, msg:%s, request data:%s' % (str(e), request.json))
+        return {'errmsg': str(e)}, 500
\ No newline at end of file
diff --git a/LowCodeLLM/src/executingLLM.py b/LowCodeLLM/src/executingLLM.py
new file mode 100644
index 00000000..2fa11d15
--- /dev/null
+++ b/LowCodeLLM/src/executingLLM.py
@@ -0,0 +1,41 @@
+from openAIWrapper import OpenAIWrapper
+
+EXECUTING_LLM_PREFIX = """Executing LLM is designed to provide outstanding responses.
+Executing LLM will be given a overall task as the background of the conversation between the Executing LLM and human.
+When providing response, Executing LLM MUST STICTLY follow the provided standard operating procedure (SOP).
+the SOP is formatted as:
+'''
+STEP 1: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
+STEP 2: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
+'''
+here "[[[if 'condition1'][Jump to STEP n]], [[if 'condition2'][Jump to STEP m]], ...]" is judgmental logic. It means when you're performing this step,
+and if 'condition1' is satisfied, you will perform STEP n next. If 'condition2' is satisfied, you will perform STEP m next.
+
+Remember:
+Executing LLM is facing a real human, who does not know what an SOP is.
+So, do NOT show him/her the SOP steps you are following, or the process and intermediate results of performing the SOP. That would confuse him/her. Just respond with the answer.
+"""
+
+EXECUTING_LLM_SUFFIX = """
+Remember:
+Executing LLM is facing a real human, who does not know what an SOP is.
+So, do NOT show him/her the SOP steps you are following, or the process and intermediate results of performing the SOP. That would confuse him/her. Just respond with the answer.
+"""
+
+class executingLLM:
+    def __init__(self, temperature) -> None:
+        self.prefix = EXECUTING_LLM_PREFIX
+        self.suffix = EXECUTING_LLM_SUFFIX
+        self.LLM = OpenAIWrapper(temperature)
+        self.messages = [{"role": "system", "content": "You are a helpful assistant."},
+                         {"role": "system", "content": self.prefix}]
+
+    def execute(self, current_prompt, history):
+        ''' provide LLM the dialogue history and the current prompt to get response '''
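+        # history already carries the task description and the SOP as a leading
+        # system message (prepended in lowCodeLLM.execute); the suffix repeats the
+        # "don't reveal the SOP" reminder on every turn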
+        messages = self.messages + history
+        messages.append({'role': 'user', "content": current_prompt + self.suffix})
+        response, status = self.LLM.run(messages)
+        if status:
+            return response
+        else:
+            return "OpenAI API error."
\ No newline at end of file
diff --git a/LowCodeLLM/src/index.html b/LowCodeLLM/src/index.html
new file mode 100644
index 00000000..0f869cb4
--- /dev/null
+++ b/LowCodeLLM/src/index.html
@@ -0,0 +1,529 @@
+<!-- index.html: "Tutorial Demo" page (heading "Low Code Demo", task input form); full markup of this 529-line front end elided -->
\ No newline at end of file
diff --git a/LowCodeLLM/src/lowCodeLLM.py b/LowCodeLLM/src/lowCodeLLM.py
new file mode 100644
index 00000000..4466fd46
--- /dev/null
+++ b/LowCodeLLM/src/lowCodeLLM.py
@@ -0,0 +1,44 @@
+from planningLLM import planningLLM
+from executingLLM import executingLLM
+import json
+
+class lowCodeLLM:
+    def __init__(self, PLLM_temperature=0.4, ELLM_temperature=0):
+        self.PLLM = planningLLM(PLLM_temperature)
+        self.ELLM = executingLLM(ELLM_temperature)
+
+    def get_workflow(self, task_prompt):
+        return self.PLLM.get_workflow(task_prompt)
+
+    def extend_workflow(self, task_prompt, current_workflow, step=''):
+        ''' generate a sub-workflow for one of the steps
+        - input: the current workflow and the step to extend
+        - output: sub-workflow '''
+        workflow = self._json2txt(current_workflow)
+        return self.PLLM.extend_workflow(task_prompt, workflow, step)
+
+    def execute(self, task_prompt, confirmed_workflow, history, curr_input):
+        ''' chat with the workflow-equipped low-code LLM '''
+        prompt = [{'role': 'system', "content": 'The overall task you are facing is: '+task_prompt+
+                   '\nThe standard operating procedure (SOP) is:\n'+self._json2txt(confirmed_workflow)}]
+        history = prompt + history
+        response = self.ELLM.execute(curr_input, history)
+        return response
+
+    def _json2txt(self, workflow_json):
+        ''' convert the json workflow to text '''
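+        # each step renders to one SOP line, e.g.:
+        #   STEP 1: [Research][Gather information][[[if 'lack of ideas'][STEP 1]],]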
+        def json2text_step(step):
+            step_res = ""
+            step_res += step["stepId"] + ": [" + step["stepName"] + "]"
+            step_res += "[" + step["stepDescription"] + "]["
+            for jump in step["jumpLogic"]:
+                step_res += "[[" + jump["Condition"] + "][" + jump["Target"] + "]],"
+            step_res += "]\n"
+            return step_res
+
+        workflow_txt = ""
+        for step in json.loads(workflow_json):
+            workflow_txt += json2text_step(step)
+            for substep in step['extension']:
+                workflow_txt += json2text_step(substep)
+        return workflow_txt
\ No newline at end of file
diff --git a/LowCodeLLM/src/openAIWrapper.py b/LowCodeLLM/src/openAIWrapper.py
new file mode 100644
index 00000000..298cfe4e
--- /dev/null
+++ b/LowCodeLLM/src/openAIWrapper.py
@@ -0,0 +1,30 @@
+import os
+import openai
+
+class OpenAIWrapper:
+    def __init__(self, temperature):
+        self.key = os.environ.get("OPENAIKEY")
+        self.chat_model_id = "gpt-3.5-turbo"
+        self.temperature = temperature
+        self.max_tokens = 2048
+        self.top_p = 1
+        self.time_out = 7
+
+    def run(self, prompt):
+        return self._post_request_chat(prompt)
+
+    def _post_request_chat(self, messages):
+        try:
+            openai.api_key = self.key
+            response = openai.ChatCompletion.create(
+                model=self.chat_model_id,
+                messages=messages,
+                temperature=self.temperature,
+                max_tokens=self.max_tokens,
+                top_p=self.top_p,
+                request_timeout=self.time_out,
+                frequency_penalty=0,
+                presence_penalty=0
+            )
+            res = response['choices'][0]['message']['content']
+            return res, True
+        except Exception:
+            # swallow API errors; callers check the boolean status flag
+            return "", False
diff --git a/LowCodeLLM/src/planningLLM.py b/LowCodeLLM/src/planningLLM.py
new file mode 100644
index 00000000..7c5f7865
--- /dev/null
+++ b/LowCodeLLM/src/planningLLM.py
@@ -0,0 +1,109 @@
+import re
+import json
+from openAIWrapper import OpenAIWrapper
+
+PLANNING_LLM_PREFIX = """Planning LLM is designed to provide a standard operating procedure so that an abstract and difficult task will be broken down into several steps, and the task will be easily solved by following these steps.
+Planning LLM is a powerful problem-solving assistant, so it only needs to analyze the task and provide standard operating procedure as guidance, but does not need actually to solve the problem.
+Sometimes there exists some unknown or undetermined situation, thus judgmental logic is needed: some "conditions" are listed, and the next step that should be carried out if a "condition" is satisfied is also listed. The judgmental logics are not necessary, so the jump actions are provided only when needed.
+Planning LLM MUST only provide standard operating procedure in the following format without any other words:
+'''
+STEP 1: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
+STEP 2: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...]
+...
+'''
+
+For example:
+'''
+STEP 1: [Brainstorming][Choose a topic or prompt, and generate ideas and organize them into an outline][]
+STEP 2: [Research][Gather information, take notes and organize them into the outline][[[lack of ideas][Jump to STEP 1]]]
+...
+'''
+"""
+
+EXTEND_PREFIX = """
+\nSome steps of the SOP provided by Planning LLM may be too rough, so Planning LLM can also provide a detailed sub-SOP for a given step.
+Remember, Planning LLM MUST take the overall SOP into consideration: the sub-SOP MUST be consistent with the rest of the steps, and there MUST be no duplicated content between the extension and the original SOP.
+Besides, the extension MUST be logically consistent with the given step.
+
+For example:
+If the overall SOP is:
+'''
+STEP 1: [Brainstorming][Choose a topic or prompt, and generate ideas and organize them into an outline][]
+STEP 2: [Research][Gather information from credible sources, and take notes and organize them into the outline][[[if lack of ideas][Jump to STEP 1]]]
+STEP 3: [Write][write the text][]
+'''
+If STEP 3, "write the text", is too rough and needs to be extended, then the response could be:
+'''
+STEP 3.1: [Write the title][write the title of the essay][]
+STEP 3.2: [Write the body][write the body of the essay][[[if lack of materials][Jump to STEP 2]]]
+STEP 3.3: [Write the conclusion][write the conclusion of the essay][]
+'''
+
+Remember:
+1. The extension focuses on the step descriptions, not on the judgmental logic;
+2. Planning LLM ONLY needs to respond with the extension.
+"""
+
+PLANNING_LLM_SUFFIX = """\nRemember: Planning LLM is very strict to the format and NEVER reply any word other than the standard operating procedure. The reply MUST start with "STEP".
+"""
+
+class planningLLM:
+    def __init__(self, temperature) -> None:
+        self.prefix = PLANNING_LLM_PREFIX
+        self.suffix = PLANNING_LLM_SUFFIX
+        self.LLM = OpenAIWrapper(temperature)
+        self.messages = [{"role": "system", "content": "You are a helpful assistant."}]
+
+    def get_workflow(self, task_prompt):
+        '''
+        - input: task_prompt
+        - output: workflow (json)
+        '''
+        messages = self.messages + [{'role': 'user', "content": PLANNING_LLM_PREFIX+'\nThe task is:\n'+task_prompt+PLANNING_LLM_SUFFIX}]
+        response, status = self.LLM.run(messages)
+        if status:
+            return self._txt2json(response)
+        else:
+            return "OpenAI API error."
+
+    def extend_workflow(self, task_prompt, current_workflow, step):
+        messages = self.messages + [{'role': 'user', "content": PLANNING_LLM_PREFIX+'\nThe task is:\n'+task_prompt+PLANNING_LLM_SUFFIX}]
+        messages.append({'role': 'user', "content": EXTEND_PREFIX+
+                         'The current SOP is:\n'+current_workflow+
+                         '\nThe step that needs to be extended is:\n'+step+
+                         PLANNING_LLM_SUFFIX})
+        response, status = self.LLM.run(messages)
+        if status:
+            return self._txt2json(response)
+        else:
+            return "OpenAI API error."
+
+    def _txt2json(self, workflow_txt):
+        ''' convert the workflow in natural language to json format '''
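+        # each SOP line has the shape:
+        #   STEP n: [name][description][[[if 'cond'][Jump to STEP m]], ...]
+        # fields are sliced out by the positions of '[' and ']'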
+        workflow = []
+        try:
+            steps = workflow_txt.split('\n')
+            for step in steps:
+                if step[0:4] != "STEP":
+                    continue
+                left_indices = [_.start() for _ in re.finditer(r"\[", step)]
+                right_indices = [_.start() for _ in re.finditer(r"\]", step)]
+                step_id = step[: left_indices[0]-2]
+                step_name = step[left_indices[0]+1: right_indices[0]]
+                step_description = step[left_indices[1]+1: right_indices[1]]
+                jump_str = step[left_indices[2]+1: right_indices[-1]]
+                if re.findall(re.compile(r'[A-Za-z]', re.S), jump_str) == []:
+                    workflow.append({"stepId": step_id, "stepName": step_name, "stepDescription": step_description, "jumpLogic": [], "extension": []})
+                    continue
+                jump_logic = []
+                left_indices = [_.start() for _ in re.finditer(r'\[', jump_str)]
+                right_indices = [_.start() for _ in re.finditer(r'\]', jump_str)]
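+                # each jump contributes three '[' brackets (outer, condition, target),
+                # so the parser reads one jump and then advances the bracket index by 3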
+                i = 1
+                while i < len(left_indices):
+                    jump = {"Condition": jump_str[left_indices[i]+1: right_indices[i-1]], "Target": re.search(r'STEP\s\d+', jump_str[left_indices[i+1]+1: right_indices[i]]).group(0)}
+                    jump_logic.append(jump)
+                    i += 3
+                workflow.append({"stepId": step_id, "stepName": step_name, "stepDescription": step_description, "jumpLogic": jump_logic, "extension": []})
+            return json.dumps(workflow)
+        except Exception as e:
+            raise ValueError("Format error, please try again.") from e
\ No newline at end of file
diff --git a/LowCodeLLM/src/requirements.txt b/LowCodeLLM/src/requirements.txt
new file mode 100644
index 00000000..055892d5
--- /dev/null
+++ b/LowCodeLLM/src/requirements.txt
@@ -0,0 +1,5 @@
+Flask==2.2.3
+Flask_Cors==3.0.10
+openai==0.27.2
+gunicorn==20.1.0
+gevent==21.8.0
\ No newline at end of file
diff --git a/LowCodeLLM/src/supervisord.conf b/LowCodeLLM/src/supervisord.conf
new file mode 100644
index 00000000..6c2e6ad8
--- /dev/null
+++ b/LowCodeLLM/src/supervisord.conf
@@ -0,0 +1,12 @@
+[supervisord]
+nodaemon=true
+loglevel=info
+
+[program:flask]
+command=gunicorn --timeout 300 --bind "0.0.0.0:8888" "app:app" --log-level debug --capture-output --worker-class gevent
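+; the gevent worker class lets the gunicorn worker serve concurrent requests cooperatively while it waits on the OpenAI API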
+priority=999
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stdout
+stderr_logfile_maxbytes=0
+autorestart=true
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/test_execute.py b/LowCodeLLM/src/test/test_execute.py
new file mode 100644
index 00000000..affa3dad
--- /dev/null
+++ b/LowCodeLLM/src/test/test_execute.py
@@ -0,0 +1,19 @@
+import json
+import sys
+import os
+import time
+sys.path.append(os.getcwd())
+
+def test_execute():
+    from lowCodeLLM import lowCodeLLM
+    cases = json.load(open("./test/testcases/execute_test_cases.json", "r"))
+    llm = lowCodeLLM(0.5, 0)
+    for c in cases:
+        task_prompt = c["task_prompt"]
+        confirmed_workflow = c["confirmed_workflow"]
+        history = c["history"]
+        curr_input = c["curr_input"]
+        result = llm.execute(task_prompt, confirmed_workflow, history, curr_input)
+        time.sleep(5)
+        assert isinstance(result, str)
+        assert len(result) > 0
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/test_extend_workflow.py b/LowCodeLLM/src/test/test_extend_workflow.py
new file mode 100644
index 00000000..8f9a9a38
--- /dev/null
+++ b/LowCodeLLM/src/test/test_extend_workflow.py
@@ -0,0 +1,17 @@
+import json
+import sys
+import os
+import time
+sys.path.append(os.getcwd())
+
+def test_extend_workflow():
+    from lowCodeLLM import lowCodeLLM
+    cases = json.load(open("./test/testcases/extend_workflow_test_cases.json", "r"))
+    llm = lowCodeLLM(0.5, 0)
+    for c in cases:
+        task_prompt = c["task_prompt"]
+        current_workflow = c["current_workflow"]
+        step = c["step"]
+        result = llm.extend_workflow(task_prompt, current_workflow, step)
+        time.sleep(5)
+        assert len(json.loads(result)) >= 1
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/test_get_workflow.py b/LowCodeLLM/src/test/test_get_workflow.py
new file mode 100644
index 00000000..5d029a45
--- /dev/null
+++ b/LowCodeLLM/src/test/test_get_workflow.py
@@ -0,0 +1,13 @@
+import json
+import sys
+import os
+sys.path.append(os.getcwd())
+
+def test_get_workflow():
+    from lowCodeLLM import lowCodeLLM
+    cases = json.load(open("./test/testcases/get_workflow_test_cases.json", "r"))
+    llm = lowCodeLLM(0.5, 0)
+    for c in cases:
+        task_prompt = c["task_prompt"]
+        result = llm.get_workflow(task_prompt)
+        assert len(json.loads(result)) >= 1
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/testcases/execute_test_cases.json b/LowCodeLLM/src/test/testcases/execute_test_cases.json
new file mode 100644
index 00000000..a2d62b8e
--- /dev/null
+++ b/LowCodeLLM/src/test/testcases/execute_test_cases.json
@@ -0,0 +1,20 @@
+[
+ {
+ "task_prompt": "Write an essay about drunk driving issue.",
+ "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather statistics and information about drunk driving issue\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Identify the causes\", \"stepDescription\": \"Analyze the reasons behind drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Examine the consequences\", \"stepDescription\": \"Investigate the outcomes of drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Develop a prevention plan\", \"stepDescription\": \"Create a plan to prevent drunk driving\", \"jumpLogic\": [{\"Condition\": \"if 'lack of information'\", \"Target\": \"STEP 1\"}, {\"Condition\": \"if 'unclear causes'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'incomplete analysis'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'unrealistic plan'\", \"Target\": \"STEP 4\"}], \"extension\": []}]",
+ "history": [],
+ "curr_input": "write the essay and show me."
+ },
+ {
+        "task_prompt": "Write a blog about Microsoft.",
+ "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather information about Microsoft's history, products, and services\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Analyze\", \"stepDescription\": \"Organize the gathered information into categories and identify key points\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Outline\", \"stepDescription\": \"Create an outline based on the key points and categories\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Write\", \"stepDescription\": \"Write a draft of the blog post using the outline as a guide\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Edit\", \"stepDescription\": \"Review and revise the draft for clarity and accuracy\", \"jumpLogic\": [{\"Condition\": \"need for further editing\", \"Target\": \"STEP 4\"}], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Publish\", \"stepDescription\": \"Post the final version of the blog post on a suitable platform\", \"jumpLogic\": [], \"extension\": []}]",
+ "history": [],
+ "curr_input": "write the blog and show me."
+ },
+ {
+ "task_prompt": "I want to write a two-player battle game.",
+ "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Game Concept\", \"stepDescription\": \"Decide on the game concept and mechanics\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Game Design\", \"stepDescription\": \"Create a rough sketch of the game, including the game board, characters, and rules\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Programming\", \"stepDescription\": \"Write the code for the game\", \"jumpLogic\": [{\"Condition\": \"if 'game mechanics are too complex'\", \"Target\": \"STEP 1\"}], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Testing\", \"stepDescription\": \"Test the game for bugs and glitches\", \"jumpLogic\": [{\"Condition\": \"if 'gameplay is not balanced'\", \"Target\": \"STEP 2\"}], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Polishing\", \"stepDescription\": \"Add finishing touches to the game, including graphics and sound effects\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Release\", \"stepDescription\": \"Publish the game for players to enjoy\", \"jumpLogic\": [], \"extension\": []}]",
+        "history": [{"role": "assistant", "content": "sure, I can write it for you, do you want me to show you the code?"}],
+ "curr_input": "Sure."
+ }
+]
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/testcases/extend_workflow_test_cases.json b/LowCodeLLM/src/test/testcases/extend_workflow_test_cases.json
new file mode 100644
index 00000000..57aad472
--- /dev/null
+++ b/LowCodeLLM/src/test/testcases/extend_workflow_test_cases.json
@@ -0,0 +1,17 @@
+[
+ {
+ "task_prompt": "Write an essay about drunk driving issue.",
+ "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather statistics and information about drunk driving issue\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Identify the causes\", \"stepDescription\": \"Analyze the reasons behind drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Examine the consequences\", \"stepDescription\": \"Investigate the outcomes of drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Develop a prevention plan\", \"stepDescription\": \"Create a plan to prevent drunk driving\", \"jumpLogic\": [{\"Condition\": \"if 'lack of information'\", \"Target\": \"STEP 1\"}, {\"Condition\": \"if 'unclear causes'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'incomplete analysis'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'unrealistic plan'\", \"Target\": \"STEP 4\"}], \"extension\": []}]",
+ "step": "STEP 1"
+ },
+ {
+        "task_prompt": "Write a blog about Microsoft.",
+ "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather information about Microsoft's history, products, and services\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Analyze\", \"stepDescription\": \"Organize the gathered information into categories and identify key points\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Outline\", \"stepDescription\": \"Create an outline based on the key points and categories\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Write\", \"stepDescription\": \"Write a draft of the blog post using the outline as a guide\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Edit\", \"stepDescription\": \"Review and revise the draft for clarity and accuracy\", \"jumpLogic\": [{\"Condition\": \"need for further editing\", \"Target\": \"STEP 4\"}], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Publish\", \"stepDescription\": \"Post the final version of the blog post on a suitable platform\", \"jumpLogic\": [], \"extension\": []}]",
+ "step": "STEP 2"
+ },
+ {
+ "task_prompt": "I want to write a two-player battle game.",
+ "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Game Concept\", \"stepDescription\": \"Decide on the game concept and mechanics\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Game Design\", \"stepDescription\": \"Create a rough sketch of the game, including the game board, characters, and rules\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Programming\", \"stepDescription\": \"Write the code for the game\", \"jumpLogic\": [{\"Condition\": \"if 'game mechanics are too complex'\", \"Target\": \"STEP 1\"}], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Testing\", \"stepDescription\": \"Test the game for bugs and glitches\", \"jumpLogic\": [{\"Condition\": \"if 'gameplay is not balanced'\", \"Target\": \"STEP 2\"}], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Polishing\", \"stepDescription\": \"Add finishing touches to the game, including graphics and sound effects\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Release\", \"stepDescription\": \"Publish the game for players to enjoy\", \"jumpLogic\": [], \"extension\": []}]",
+ "step": "STEP 3"
+ }
+]
\ No newline at end of file
diff --git a/LowCodeLLM/src/test/testcases/get_workflow_test_cases.json b/LowCodeLLM/src/test/testcases/get_workflow_test_cases.json
new file mode 100644
index 00000000..eac7cb3e
--- /dev/null
+++ b/LowCodeLLM/src/test/testcases/get_workflow_test_cases.json
@@ -0,0 +1,11 @@
+[
+ {
+ "task_prompt": "Write an essay about drunk driving issue."
+ },
+ {
+        "task_prompt": "Write a blog about Microsoft."
+ },
+ {
+ "task_prompt": "I want to write a two-player battle game."
+ }
+]
\ No newline at end of file
diff --git a/assets/low-code-demovideo.mp4 b/assets/low-code-demovideo.mp4
new file mode 100644
index 00000000..e2d0b7d0
Binary files /dev/null and b/assets/low-code-demovideo.mp4 differ
diff --git a/assets/low-code-llm.png b/assets/low-code-llm.png
index 0202a747..e9729cbc 100644
Binary files a/assets/low-code-llm.png and b/assets/low-code-llm.png differ
diff --git a/assets/low-code-operation.png b/assets/low-code-operation.png
new file mode 100644
index 00000000..7d59f9d6
Binary files /dev/null and b/assets/low-code-operation.png differ