update docs: development
PengJiazhen408 committed Jul 22, 2024
1 parent 8c8afc8 commit f59404a
Showing 2 changed files with 56 additions and 79 deletions.
87 changes: 40 additions & 47 deletions README.md
@@ -100,53 +100,46 @@ Once the extension is installed:
}
```

## How to develop

This extension project is divided into two parts:

- **Frontend** is responsible for the user interaction interface, communication with the backend service, and executing the API calls returned by the backend service.

- **Backend** utilizes large language models (LLMs) to orchestrate the optimal API calls to fulfill user requirements based on [App-Controller](https://github.com/alibaba/app-controller) framework.

If you only need to develop the frontend of the extension, you can install the frontend from source code and start your own backend service from a container for testing and development.

### Install the extension frontend from source code
- Before you start, ensure that you have `Node.js` and `npm` installed on your system.
- Clone the [repository](https://github.com/alibaba/smart-vscode-extension.git) to your local machine
- Install the `Yarn` package manager by running `npm install --global yarn`
- In the root directory, run the `yarn` command to install the dependencies listed in `package.json`
- Within VS Code, run the project by pressing `F5` (a command-line sketch of these steps follows this list)
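
A minimal shell sketch of the frontend setup steps above, assuming `git`, `Node.js`, and `npm` are already installed (`code .` simply opens the project so you can press `F5`):
```bash
# Clone the frontend and enter the project directory
git clone https://github.com/alibaba/smart-vscode-extension.git
cd smart-vscode-extension

# Install the Yarn package manager, then the dependencies listed in package.json
npm install --global yarn
yarn

# Open the project in VS Code, then press F5 to launch the extension for development
code .
```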

### Start your own backend service from a container
- Download the `Dockerfile` by running:
```bash
wget https://raw.githubusercontent.com/alibaba/pilotscope/master/Dockerfile
```
- Build a Docker image named `llm4api` locally by running:
```bash
docker build -t llm4api:latest \
--build-arg OPENAI_BASE_URL='https://api.openai.com' \
--build-arg OPENAI_API_KEY='your_api_key' \
  --build-arg SERVER_PORT='your_server_port_number_in_container' \
.
```
  Replace `your_api_key` and `your_server_port_number_in_container` with your own settings; a filled-in example follows the argument descriptions below.

  - `OPENAI_BASE_URL` is the base URL for the OpenAI API and specifies the endpoint for API calls. The default value is `'https://api.openai.com'`; modify it only if you are using a different endpoint. If you customize it, keep the same format, i.e. starting with `https://` and without a trailing slash. The completions endpoint suffix is appended internally, e.g. `${apiBaseUrl}/v1/completions`.
- `OPENAI_API_KEY` is your unique API key for authenticating requests to the OpenAI API. Replace `your_api_key` with your actual API key obtained from [OpenAI](https://beta.openai.com/account/api-keys).
- `SERVER_PORT` (optional) specifies the port number inside the container where the service will start. The default setting is 5000. You can modify it based on your preference or requirements.
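
For example, a filled-in build command might look like the following (a sketch only: the API key is a placeholder and the container port keeps the default of 5000):
```bash
# Example build: default OpenAI endpoint, placeholder API key, container port 5000
docker build -t llm4api:latest \
  --build-arg OPENAI_BASE_URL='https://api.openai.com' \
  --build-arg OPENAI_API_KEY='sk-your-placeholder-key' \
  --build-arg SERVER_PORT='5000' \
  .
```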

- Start a Docker container by running:
```bash
docker run -it --name llm4api_test --shm-size 5gb --cap-add sys_ptrace -p your_server_port_number_in_host:your_server_port_number_in_container -p 5022:22 -d llm4api /bin/bash
```
  This command boots up a container named `llm4api_test`. Replace `your_server_port_number_in_host` and `your_server_port_number_in_container` with your own settings. The port mappings are configured as follows (a concrete example follows this list):

  - The container’s port `your_server_port_number_in_container` is mapped to the host machine’s port `your_server_port_number_in_host` (5000 is recommended), so the backend service can be reached from the host via port `your_server_port_number_in_host`.
  - (Recommended) The container’s port 22 (SSH) is mapped to the host machine’s port 5022, enabling SSH access to the container from the host via port 5022.
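
For instance, with the container's port 5000 mapped to host port 5000, the command and a quick sanity check could look like this (a sketch; `docker exec` simply opens a shell inside the running container):
```bash
# Start the container, mapping the service port and the SSH port to the host
docker run -it --name llm4api_test --shm-size 5gb --cap-add sys_ptrace \
  -p 5000:5000 -p 5022:22 -d llm4api /bin/bash

# Confirm the container is up
docker ps --filter name=llm4api_test

# Open a shell inside the container if you need to inspect it manually
docker exec -it llm4api_test /bin/bash
```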

### Configure Frontend to Communicate with Backend
- Ensure that the frontend can correctly access the backend service: set the backend service URL by modifying `llm4apisServiceBaseUrl` in the `src/Common/Config.ts` file of the frontend project. The `IP address` and the `port number` must correspond to the **host machine** where the service is deployed (a quick reachability check is sketched below).
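
As an optional check before launching the frontend, you can verify that something is listening at the configured address; any HTTP status code (even `404`) means the port is reachable. This assumes the backend was mapped to host port 5000, as recommended above:
```bash
# Replace 127.0.0.1 with the host machine's IP if the backend runs on another machine
curl -s -o /dev/null -w 'HTTP status: %{http_code}\n' http://127.0.0.1:5000/
```
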
## Supported Tasks
Here we provide an overview of the tasks supported by SmartVscode.


| Task | Input Example |
| --------------------- | ----------------------------------------------- |
| **Settings** | |
| Theme | Switch to dark theme |
| Font | Set font size to 14 |
| Keybinding | I want to set a shortcut key for saving as |
| **Editor** | |
| Format | Format this file |
| Format | Format the selected code |
| Replace | Replace "var" with "let" |
| Comment | Comment the selected code |
| Comment | Uncomment the selected code |
| Duplicate | Duplicate the current line |
| Duplicate | Duplicate the selected code |
| File | Open the file named "main.py" |
| Navigate | Go to line 20 |
| Navigate | Jump to the function "greet" |
| Navigate | Navigate back to the previous location |
| Fold | Collapse all sections in the current JSON file |
| Fold | Unfold all sections in the current JSON file |
| **View** | |
| Workspace | Open a workspace folder in a new window |
| Workspace | Close current workspace folder |
| Sidebar | Close the sidebar on the left |
| **Execution** | |
| Breakpoint | Set a breakpoint at line 50 |
| Debug | Start debugging |
| Debug | Run the file named "main.py" |
| **Remote Connection** | |
| Config | Open the remote configuration file |
| Connection | Create a remote ssh server connection in vscode |
| **Extension** | |
| Install | Install the extension named "python" |

## Documentation
[Documentation] provides comprehensive information on the supported tasks and on how to develop SmartVscode. You can refer to it for an improved experience with SmartVscode.

## License
SmartVscode is released under Apache License 2.0.
48 changes: 16 additions & 32 deletions docs/sphinx_doc/en/source/tutorial/development.md
@@ -8,42 +8,26 @@ This extension project is divided into two parts:

- **Backend** utilizes large language models (LLMs) to orchestrate the optimal API calls to fulfill user requirements based on [App-Controller](https://github.com/alibaba/app-controller) framework.

When you only need to develop the frontend of the extension, you can install the frontend from source code and start your own backend service from a container for testing and development.
When you need to develop the frontend of the extension, you can install the frontend from source code and start your own backend service for testing and development.

## Install the extension frontend from source code
## Step 1: Install and run the extension frontend from source code
- Before you start, ensure that you have `Node.js` and `npm` installed on your system.
- Clone the [repository](https://github.com/alibaba/smart-vscode-extension.git) to your local machine
- Install the `Yarn` package manager by running `npm install --global yarn`
- In the root directory, run the `yarn` command to install the dependencies listed in `package.json`
- Configure the frontend to communicate with the backend by modifying `llm4apisServiceBaseUrl` in the `src/Common/Config.ts` file. Ensure that `llm4apisServiceBaseUrl` corresponds to the service to be deployed.
- Within VS Code, run the project by pressing `F5`

## Start your own backend service from a container
- Download the `Dockerfile` by running:
```bash
wget https://raw.githubusercontent.com/alibaba/pilotscope/master/Dockerfile
```
- Build a Docker image named `llm4api` locally by running:
```bash
docker build -t llm4api:latest \
--build-arg OPENAI_BASE_URL='https://api.openai.com' \
--build-arg OPENAI_API_KEY='your_api_key' \
  --build-arg SERVER_PORT='your_server_port_number_in_container' \
.
```
  Replace `your_api_key` and `your_server_port_number_in_container` with your own settings.

  - `OPENAI_BASE_URL` is the base URL for the OpenAI API and specifies the endpoint for API calls. The default value is `'https://api.openai.com'`; modify it only if you are using a different endpoint. If you customize it, keep the same format, i.e. starting with `https://` and without a trailing slash. The completions endpoint suffix is appended internally, e.g. `${apiBaseUrl}/v1/completions`.
- `OPENAI_API_KEY` is your unique API key for authenticating requests to the OpenAI API. Replace `your_api_key` with your actual API key obtained from [OpenAI](https://beta.openai.com/account/api-keys).
- `SERVER_PORT` (optional) specifies the port number inside the container where the service will start. The default setting is 5000. You can modify it based on your preference or requirements.

- Start a Docker container by running:
```bash
docker run -it --name llm4api_test --shm-size 5gb --cap-add sys_ptrace -p your_server_port_number_in_host:your_server_port_number_in_container -p 5022:22 -d llm4api /bin/bash
```
  This command boots up a container named `llm4api_test`. Replace `your_server_port_number_in_host` and `your_server_port_number_in_container` with your own settings. The port mappings are configured as follows:

  - The container’s port `your_server_port_number_in_container` is mapped to the host machine’s port `your_server_port_number_in_host` (5000 is recommended), so the backend service can be reached from the host via port `your_server_port_number_in_host`.
  - (Recommended) The container’s port 22 (SSH) is mapped to the host machine’s port 5022, enabling SSH access to the container from the host via port 5022.

## Configure Frontend to Communicate with Backend
- Ensure that the frontend can correctly access the backend service: set the backend service URL by modifying `llm4apisServiceBaseUrl` in the `src/Common/Config.ts` file of the frontend project. The `IP address` and the `port number` must correspond to the **host machine** where the service is deployed.
## Step 2: Start your own backend service from source code
- Install the backend (i.e. App-Controller) by following the [docs](https://alibaba.github.io/app-controller/en/tutorial/installation.html).

- Configure the service by following the [docs](https://alibaba.github.io/app-controller/en/tutorial/deploy.html#step3-configuration-your-app-controller).

- Start your own backend service by following the [docs](https://alibaba.github.io/app-controller/en/tutorial/deploy.html#step4-start-the-service).

## Step 3: Start to develop

- You can expand the functionality of SmartVscode by providing more API knowledge, specifically referring to: [Data Preparation](https://alibaba.github.io/app-controller/en/tutorial/deploy.html#step1-data-preparation).

- You can add more interactions between SmartVscode (frontend) and App-Controller (backend) by referring to:
[Communication Interface Implementation](https://alibaba.github.io/app-controller/en/tutorial/deploy.html#step2-communication-interface-implementation).
