diff --git a/docs/core/api.md b/docs/core/api.md index b570624a..b443f483 100644 --- a/docs/core/api.md +++ b/docs/core/api.md @@ -1,9 +1,6 @@ # Setting Up the GHOSTS API -???+ info "GHOSTS Source Code" - The [GHOSTS Source Code Repository](https://github.com/cmu-sei/GHOSTS) is hosted on GitHub. - -*Updated on July 24, 2024* +_Updated on October 30, 2024_ The GHOSTS API enables the control and orchestration of non-player characters (NPCs) within a deployment. It supports logging, reporting, and managing individual, groups of, or entire deployments of client installs. @@ -11,10 +8,10 @@ The GHOSTS API consists of three components: the API itself for configuring and Steps to set up the GHOSTS API: - 1. Choose where to host the API - 2. Install Docker and Docker Compose - 3. Build the GHOSTS containers - 4. Test the API +1. Choose where to host the API +2. Install Docker and Docker Compose +3. Build the GHOSTS containers +4. Test the API ## Step 1 — Choose Where to Host the API @@ -42,62 +39,53 @@ Once you have confirmed that Docker and Docker Compose are installed, you can bu Create a directory where you want to store the build files and containers. -``` -$ mkdir ghosts-project -$ cd ghosts-project +```shell +mkdir ghosts-project +cd ghosts-project ``` Download the docker compose file for GHOSTS. -``` -$ curl https://raw.githubusercontent.com/cmu-sei/GHOSTS/master/src/Ghosts.Api/docker-compose.yml -o docker-compose.yml +```shell +curl https://raw.githubusercontent.com/cmu-sei/GHOSTS/master/src/Ghosts.Api/docker-compose.yml -o docker-compose.yml ``` Build all of the containers at once using docker-compose. ``` -$ docker-compose up -d +docker-compose up -d ``` Check for the running containers. ``` -$ docker ps -a +docker ps -a ``` -If everything succeeds you should see the three new containers for the API, Grafana, and Postgres. +If everything succeeds you should see four new containers for the API, UI, Grafana, and Postgres. 
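As a quick check from the command line, you can also probe the API itself. A minimal sketch, assuming the API is listening on its default port 5000 (adjust the host and port if you changed the compose file):

```shell
# Ask the API for its home document; a healthy install returns
# version information. The || branch keeps the check readable
# when the service is down.
curl -s http://localhost:5000/api/home || echo "API is not reachable on port 5000"
```

If the call fails, confirm the containers are up with `docker ps -a` before digging into logs.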
-![Running Containers](../../images/api/installing-the-api-running-containers.png) +![Running Containers](../images/installing-the-api-running-containers.png) ## Step 4 — Testing the API By default, the API is hosted on port 5000. You should be able to reach the API from [http://localhost:5000](http://localhost:5000). If you open this page in your browser, you should see the initial API page outlining the version of the install, and a few test machine entries. If this page renders, your API is up, running, and available. -![Success!](../../images/api/installing-the-api-success.png) +![Success!](../images/installing-the-api-success.png) ## Troubleshooting ### Problem: The API home page has an error -![API Home Page Error](../../images/api/installing-the-api-error.png) +![API Home Page Error](../images/installing-the-api-error.png) Answer: Make sure the docker container for Postgres is running using Docker Desktop or the command `docker ps -a` -![Running Containers](../../images/api/installing-the-api-running-containers.png) +![Running Containers](../images/installing-the-api-running-containers.png) You can check the logs with the command `docker logs ghosts-postgres` to look for container errors. ### Problem: The social graph link has an error -![API Social Graph Page Error](../../images/api/installing-the-api-social-error.png) +![API Social Graph Page Error](../images/installing-the-api-social-error.png) Answer: You haven't created a social network yet, this is normal. - -### Problem: Is the API up and running? - -- Go to `/api/home` in the browser, it should return the current API version and the number of machines and groups under management. If it says relationship not found, restart the API application and it should create the database automatically. -- Run `docker ps --all` and see that all containers are running normally. If one or more is not running, look at the logs for that machine via `docker logs [machine name]`. 
-
-> The ClientId, ClientResults, and other Client* endpoints are failing.
-
-The Client* endpoints are for the Clients to use only. There are specific header values set by the client in the request that is used to authenticate the request. If you are not using the client, you will not have these headers set, and these endpoints will fail.
diff --git a/docs/core/grafana.md b/docs/core/grafana.md
index f2279d2d..dfd0e40c 100644
--- a/docs/core/grafana.md
+++ b/docs/core/grafana.md
@@ -1,12 +1,12 @@
 # Configuring Grafana
 
-*Updated on July 25, 2024*
+_Updated on October 30, 2024_
 
-[Grafana](https://grafana.com/) for GHOSTS allows the simulation administrator to visualize all activities carried out by the NPCs across the simulation in dashboard with pretty colors and charts. 
+[Grafana](https://grafana.com/) for GHOSTS allows the simulation administrator to visualize all activities carried out by the NPCs across the simulation in a dashboard with pretty colors and charts.
 
 ## Prerequisites
 
-- Your GHOSTS API should be installed and running (see [Setting Up the GHOSTS API](installing-the-api.md)) 
+- Your GHOSTS API should be installed and running (see [Setting Up the GHOSTS API](installing-the-api.md))
 
 ## Step 1 — Container Set Up
 
 The Grafana docker container will be installed during the process of [Setting Up the GHOSTS API](./installing-the-api.md) and should already be present.
 
 You can check its status with the docker command
 
 ```
-$ docker ps -a
+docker ps -a
 ```
 
 If the container is continuously restarting, Grafana does not have the permissions it needs. 
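You can also ask Docker directly whether the container is stuck in a restart loop. A minimal sketch, assuming the container is named `ghosts-grafana` as in the compose file:

```shell
# A steadily climbing RestartCount indicates a crash/restart loop.
docker inspect --format '{{.RestartCount}} restarts, status: {{.State.Status}}' ghosts-grafana || echo "container ghosts-grafana not found"
```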
-![Grafana Restarting](../../images/grafana/configuring-grafana-restarting.png) +![Grafana Restarting](../images/configuring-grafana-restarting.png) You can also check the docker logs: ``` -$ docker logs ghosts-grafana +docker logs ghosts-grafana ``` -![Grafana Permission Denied](../../images/grafana/configuring-grafana-permission-denied.png) +![Grafana Permission Denied](../images/configuring-grafana-permission-denied.png) If you don't see this issue, you can continue to Step 2. If you do see this issue you will need to grant permissions on the `_g` directory of the ghosts-api folder (which stores the Grafana data for the GHOSTS API). -``` -$ cd ghosts-api -$ chmod 777 _g +```shell +cd ghosts-api +chmod 777 _g ``` Ensure the container is running with `docker ps`. @@ -43,7 +43,7 @@ Ensure the container is running with `docker ps`. Once the container is running you can access its front end by default at [localhost:3000](http://localhost:3000) -![Grafana Front end](../../images/grafana/configuring-grafana-front-end.png) +![Grafana Front end](../images/configuring-grafana-front-end.png) The default login is: @@ -57,19 +57,22 @@ Continue through the setup prompts. Now you need to tell Grafana where it will be getting its data. -1. Open the "Connections" from the left menu. -2. Click on "Add new data source" in the top right corner +1. From the "Connections" drop down menu on the left side, choose the "Data Sources" option +2. Click the "Add new data source" button 3. Search for "Postgres" and choose the PostgreSQL option 4. Name the datasource "ghosts" and leave it as the default 5. Under the "Connection" section of the config, set -- host url to "ghosts-postgres:5432" -- database name to "ghosts" + +- host url to "ghosts-postgres:5432" +- database name to "ghosts" + 6. Under the "Authentication" section of the config, set - - username to "ghosts" - - password to "scotty@1" - - TLS/SSL Mode to "disable" -7. 
Leave everything else at its default and click the "Save and test" button at the bottom of the page +- username to "ghosts" +- password to "scotty@1" +- TLS/SSL Mode to "disable" + +7. Leave everything else at its default and click the "Save and test" button at the bottom of the page ### Step 4 — Choosing a Dashboard @@ -77,21 +80,21 @@ Grafana dashboards are very flexible and can be configured to show any statistic GHOSTS comes with some premade dashboards to get you started. You can download those here: https://github.com/cmu-sei/GHOSTS/tree/master/configuration/grafana/dashboards -- GHOSTS-5-default Grafana dashboard — shows status across all machines -- GHOSTS-5-group-default Grafana dashboard — shows status with machines grouped by enclave +- GHOSTS-5-default Grafana dashboard — shows status across all machines +- GHOSTS-5-group-default Grafana dashboard — shows status with machines grouped by enclave #### Loading an existing dashboard Navigate to "Dashboards" in the left menu. There will be a blue "New" button in the top right corner. -![Empty Dashboard](../../images/grafana/configuring-grafana-empty-dashboard.png) +![Empty Dashboard](../images/configuring-grafana-empty-dashboard.png) -Click "New". Then, "import". +Click "New". Then, "import". -You can either upload one of the dashboard json files from the GHOSTS repository or simply copy and paste the json into the "import via dashboard json model" panel. +You can either upload one of the dashboard json files from the [GHOSTS repository](https://github.com/cmu-sei/GHOSTS/tree/master/configuration/grafana/dashboards) or simply copy and paste the json into the "import via dashboard json model" panel. Choose the ghosts datasource you added earlier from the drop down menu and then click "import". -![Ghosts In the Dashboard](../../images/grafana/configuring-grafana-dashboard.png) +![Ghosts In the Dashboard](../images/configuring-grafana-dashboard.png) You are now set up with Grafana! 
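One last sanity check: Grafana exposes a health endpoint that reports whether the service and its internal database are up. A minimal sketch, assuming the default port 3000:

```shell
# /api/health returns a small JSON document reporting database
# status when Grafana is healthy; the || branch covers the
# unreachable case.
curl -s http://localhost:3000/api/health || echo "Grafana is not reachable on port 3000"
```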
diff --git a/docs/images/configuring-grafana-dashboard.png b/docs/images/configuring-grafana-dashboard.png new file mode 100644 index 00000000..25e080de Binary files /dev/null and b/docs/images/configuring-grafana-dashboard.png differ diff --git a/docs/images/configuring-grafana-empty-dashboard.png b/docs/images/configuring-grafana-empty-dashboard.png new file mode 100644 index 00000000..c8ea2560 Binary files /dev/null and b/docs/images/configuring-grafana-empty-dashboard.png differ diff --git a/docs/images/configuring-grafana-front-end.png b/docs/images/configuring-grafana-front-end.png new file mode 100644 index 00000000..d9d900b3 Binary files /dev/null and b/docs/images/configuring-grafana-front-end.png differ diff --git a/docs/images/configuring-grafana-permission-denied.png b/docs/images/configuring-grafana-permission-denied.png new file mode 100644 index 00000000..528abb3e Binary files /dev/null and b/docs/images/configuring-grafana-permission-denied.png differ diff --git a/docs/images/configuring-grafana-restarting.png b/docs/images/configuring-grafana-restarting.png new file mode 100644 index 00000000..a0151c64 Binary files /dev/null and b/docs/images/configuring-grafana-restarting.png differ diff --git a/docs/images/installing-the-api-error.png b/docs/images/installing-the-api-error.png new file mode 100644 index 00000000..167262ee Binary files /dev/null and b/docs/images/installing-the-api-error.png differ diff --git a/docs/images/installing-the-api-running-containers.png b/docs/images/installing-the-api-running-containers.png new file mode 100644 index 00000000..85e6177e Binary files /dev/null and b/docs/images/installing-the-api-running-containers.png differ diff --git a/docs/images/installing-the-api-social-error.png b/docs/images/installing-the-api-social-error.png new file mode 100644 index 00000000..45329af1 Binary files /dev/null and b/docs/images/installing-the-api-social-error.png differ diff --git a/docs/images/installing-the-api-success.png 
b/docs/images/installing-the-api-success.png new file mode 100644 index 00000000..e60d5c5e Binary files /dev/null and b/docs/images/installing-the-api-success.png differ diff --git a/docs/images/setting-up-shadows-containers-running.png b/docs/images/setting-up-shadows-containers-running.png new file mode 100644 index 00000000..93119e1e Binary files /dev/null and b/docs/images/setting-up-shadows-containers-running.png differ diff --git a/docs/images/setting-up-shadows-services-on.png b/docs/images/setting-up-shadows-services-on.png new file mode 100644 index 00000000..5d74832e Binary files /dev/null and b/docs/images/setting-up-shadows-services-on.png differ diff --git a/docs/shadows/index.md b/docs/shadows/index.md index b45c0c51..371ff74e 100644 --- a/docs/shadows/index.md +++ b/docs/shadows/index.md @@ -1,253 +1,230 @@ -# **GHOSTS Shadows** +# Setting Up Shadows -Shadows provides access to a locally-hosted LLM for various GHOSTS agent purposes. It offers multiple interfaces: - -- **A REST API**: For GHOSTS agents. -- **A UI web interface**: For testing and demo purposes. - -## **Default API Endpoints** +_Updated on October 30, 2024_ -- **Activity**: Answers the question of "what should an NPC do next?" -- **Chat**: Provides content for an NPC to chat with a player or other NPC. -- **Excel Content**: Provides content for documents related to spreadsheets. -- **Image Content**: Provides content for documents related to images. -- **Lessons**: Provides content related to educational materials or lessons. -- **Social**: Provides content for an NPC to post on social media systems such as GHOSTS Socializer. -- **Web Content**: Provides content for documents related to web pages. - -We anticipate that there will be many more endpoints in the future. +Shadows provides access to a locally-hosted LLM for various GHOSTS agent purposes. It offers multiple interfaces: ---- +- A REST API: For GHOSTS agents. +- A UI web interface: For testing and demo purposes. 
-## **Running Shadows with Docker**
+The REST API contains endpoints for:
 
-To run Shadows with Ollama in Docker, follow these steps:
+- Activity: Answers the question of "what should an NPC do next?"
+- Chat: Provides content for an NPC to chat with a player or other NPC.
+- Excel Content: Provides content for documents related to spreadsheets.
+- Image Content: Provides content for documents related to images.
+- Lessons: Provides content related to educational materials or lessons.
+- Social: Provides content for an NPC to post on social media systems such as GHOSTS Socializer.
+- Web Content: Provides content for documents related to web pages.
 
-### **1. Set Up and Run Ollama in Docker**
+## Prerequisites
 
-1. **Pull the Ollama Docker Image**:
+If you are just looking to try out Shadows, you do not need any prerequisites other than Docker Compose (see Step 2 of [Setting Up the GHOSTS API](installing-the-api.md)).
 
-   ```bash
-   docker pull ollama/ollama:latest
-   ```
+If you plan to use Shadows with Non-Player Character (NPC) clients, you should complete the [GHOSTS API installation](installing-the-api.md) and [Grafana configuration](./configuring-grafana.md) before moving on to Step 1 of this tutorial.
 
-2. **Run Ollama Container**: Start Ollama in a Docker container and bind it to port 11434.
+The following steps assume you are using Docker for Ollama. Review the Notes at the end of this tutorial for setting up Ollama for Shadows without Docker.
 
-   ```bash
-   docker run -d --name ollama \
-     -p 11434:11434 \
-     ollama/ollama:latest \
-     ollama serve --port 11434
-   ```
+## Step 1 — Ollama Container Setup
 
-   - **-p 11434:11434**: Maps port 11434 on your host to port 11434 in the container (Ollama's API port).
+Pull (download) the latest Ollama Docker image
 
-### **2. Run Shadows in Docker**
+_Be aware this is a very large container (~800MB)_
 
-1. 
**Export the Environment Variable**: Define the `GHOSTS_OLLAMA_URL` environment variable to point Shadows to the Ollama container.
+```shell
+docker pull ollama/ollama:latest
+```
 
-   ```bash
-   export GHOSTS_OLLAMA_URL=http://localhost:11434
-   ```
+Run the container
 
-2. **Run Shadows Container**: Start Shadows in Docker and connect it to the running Ollama instance.
+```shell
+docker run -d --name ollama \
+  -p 11434:11434 \
+  ollama/ollama:latest
+```
 
-   ```bash
-   docker run -d --name shadows \
-     -p 5900:5900 \
-     -p 7860:7860 \
-     -e GHOSTS_OLLAMA_URL=http://localhost:11434 \
-     dustinupdyke/ghosts-shadows
-   ```
+**Explanation:**
 
-   - **-p 5900:5900**: Maps port 5900 on your host to port 5900 in the container (Shadows API).
-   - **-p 7860:7860**: Maps port 7860 on your host to port 7860 in the container (Shadows UI).
-   - **-e GHOSTS_OLLAMA_URL=http://localhost:11434**: Passes the Ollama URL to Shadows.
+- `-p 11434:11434`: Maps port 11434 on your host to port 11434 in the container (Ollama's API port).
+- The image's default command, `ollama serve`, starts the Ollama API inside the container on port 11434.
 
-3. **Access Shadows**:
-   - **API**: Available at `http://localhost:5900`.
-   - **UI**: Available at `http://localhost:7860` for testing and demos.
+## Step 2 — Shadows Container Setup
 
-### **Additional Notes**
+Pull (download) the latest Shadows image
 
-- **Network Configuration**: Ensure that the Docker containers for Ollama and Shadows are on the same network. By default, Docker containers on the same host can communicate using `localhost`, but you can create a Docker network if needed.
+_Be aware this is a very large container (~3GB)_ - ```bash - docker network create ghosts-network - docker run -d --name ollama --network ghosts-network -p 11434:11434 ollama/ollama:latest ollama serve --port 11434 - docker run -d --name shadows --network ghosts-network -p 5900:5900 -p 7860:7860 -e GHOSTS_OLLAMA_URL=http://ollama:11434 dustinupdyke/ghosts-shadows - ``` +```shell +docker pull dustinupdyke/ghosts-shadows:latest +``` - In this setup: - - Replace `http://localhost:11434` with `http://ollama:11434` to refer to the Ollama container by name within the Docker network. +Set the environment variable for Shadows to use the Ollama API you configured in Step 1. -- **Troubleshooting**: If you face issues, check the logs of each container: +```shell +export GHOSTS_OLLAMA_URL=http://localhost:11434 +``` - ```bash - docker logs ollama - docker logs shadows - ``` +Run the container -- **Port Conflicts**: Ensure that ports 11434, 5900, and 7860 are not in use by other applications. +```shell +docker run -d --name shadows \ + -p 5900:5900 \ + -p 7860:7860 \ + -e GHOSTS_OLLAMA_URL=http://localhost:11434 \ + dustinupdyke/ghosts-shadows +``` ---- +**Explanation:** -## **Using Docker Compose** +- -p 5900:5900: Maps port 5900 on your host to port 5900 in the container (Shadows API). +- -p 7860:7860: Maps port 7860 on your host to port 7860 in the container (Shadows UI). +- -e GHOSTS_OLLAMA_URL=http://localhost:11434: Passes the Ollama URL to Shadows. -Docker Compose simplifies managing multiple Docker containers. Here’s how to use Docker Compose to run both Ollama and Shadows: +## Step 3 — Accessing and Testing the Services -### **1. 
Create a Docker Compose File**
+Check that both the Ollama and Shadows containers are running:
 
-Create a file named `docker-compose.yml` in your project directory with the following content:
+```shell
+docker ps -a
+```
 
-```yaml
-version: '3.8'
+![Containers running](../images/setting-up-shadows-containers-running.png)
 
-services:
-  ollama:
-    image: ollama/ollama:latest
-    container_name: ollama
-    ports:
-      - "11434:11434"
-    command: serve
-    networks:
-      - ghosts-network
-
-  shadows:
-    image: dustinupdyke/ghosts-shadows
-    container_name: shadows
-    ports:
-      - "5900:5900"
-      - "7860:7860"
-    environment:
-      - GHOSTS_OLLAMA_URL=http://ollama:11434
-    networks:
-      - ghosts-network
-    depends_on:
-      - ollama
+If either container is not running, review its logs with docker
 
-networks:
-  ghosts-network:
-    driver: bridge
+```shell
+docker logs ollama
+docker logs shadows
 ```
 
-### **2. Explanation of the Compose File**
+Access each service with
 
-- **version**: Specifies the version of the Docker Compose file format.
-- **services**: Defines the different containers.
-  - **ollama**:
-    - **image**: Docker image for Ollama.
-    - **container_name**: Name for the Ollama container.
-    - **ports**: Maps port 11434.
-    - **command**: Command to start Ollama.
-    - **networks**: Connects to the specified network.
-  - **shadows**:
-    - **image**: Docker image for Shadows.
-    - **container_name**: Name for the Shadows container.
-    - **ports**: Maps ports 5900 and 7860.
-    - **environment**: Sets environment variable for Ollama URL.
-    - **networks**: Connects to the specified network.
-    - **depends_on**: Ensures Ollama starts before Shadows.
+- Ollama: http://localhost:11434
+- Shadows API: http://localhost:5900 for GHOSTS NPC access
+- Shadows UI: http://localhost:7860 for testing and demos
 
-- **networks**: Defines a custom network for communication.
+![Services working](../images/setting-up-shadows-services-on.png)
 
-### **3. 
Start the Services**
+### Troubleshooting
 
-In your project directory, run:
+**Network Configuration**: Ensure that the Docker containers for Ollama and Shadows are on the same network. By default, Docker containers on the same host can communicate using localhost, but you can create a Docker network if needed.
 
-```bash
-docker-compose up -d
+```shell
+docker network create ghosts-network
+docker run -d --name ollama --network ghosts-network -p 11434:11434 ollama/ollama:latest
+docker run -d --name shadows --network ghosts-network -p 5900:5900 -p 7860:7860 -e GHOSTS_OLLAMA_URL=http://ollama:11434 dustinupdyke/ghosts-shadows
 ```
 
-- **-d**: Runs containers in detached mode.
-
-### **4. Access the Services**
+In this setup, replace `http://localhost:11434` with `http://ollama:11434` to refer to the Ollama container by name within the Docker network.
 
-- **Ollama**: Accessible at `http://localhost:11434`.
-- **Shadows**: Accessible at `http://localhost:5900` (API) and `http://localhost:7860` (UI).
+**Port Conflicts**: Ensure that the ports specified above (5900, 7860, and 11434) are not in use by other applications.
 
-### **5. Manage the Containers**
+## Step 4 — Configuring the Docker Compose File
 
-- **Stop the services**:
+Rather than building the containers individually, you can add them to a Docker Compose file to run everything you need for simulations with GHOSTS all at once.
 
-   ```bash
-   docker-compose down
-   ```
+You can create a new docker-compose file for starting just these two containers using the following configuration, or you can add the two service configurations to the existing docker-compose.yml file you downloaded as part of [installing the GHOSTS API](./installing-the-api.md).
-
+```yml
+# Filename: docker-compose.yml
+services:
+  ollama:
+    image: ollama/ollama:latest
+    container_name: ollama
+    ports:
+      - "11434:11434"
+    command: serve
+    networks:
+      - ghosts-network
+
+  shadows:
+    image: dustinupdyke/ghosts-shadows
+    container_name: shadows
+    ports:
+      - "5900:5900"
+      - "7860:7860"
+    environment:
+      - GHOSTS_OLLAMA_URL=http://ollama:11434
+    networks:
+      - ghosts-network
+    depends_on:
+      - ollama
+
+networks:
+  ghosts-network:
+    driver: bridge
+```
 
-   ```bash
-   docker-compose logs
-   ```
 
-- **Rebuild the services**:
+Note: you will need to delete the existing containers you created in Steps 1 and 2 in order to use the same names with docker-compose.
 
-   ```bash
-   docker-compose up -d --build
-   ```
+```shell
+docker rm ollama
+docker rm shadows
+```
 
-### **6. Troubleshooting**
+You can then use docker-compose to build the new containers all at once.
 
-- **Check Container Status**:
+```shell
+docker-compose up -d
+```
 
-   ```bash
-   docker-compose ps
-   ```
+You should at this point be able to again access the three APIs listed in Step 3.
 
-- **Inspect Logs**:
+### Docker-Compose Commands
 
-   ```bash
-   docker-compose logs ollama
-   docker-compose logs shadows
-   ```
+Some useful docker-compose commands are
 
----
+- `docker-compose down` - stops the services
+- `docker-compose up -d --build` - rebuilds the services
+- `docker-compose ps` - check container status
+- `docker-compose logs ollama` - show logs for the ollama container
+- `docker-compose logs shadows` - show logs for the shadows container
 
-## **Running Shadows on Bare Metal**
+## Notes: Running Shadows Without Docker
 
 If you prefer to run Shadows on bare metal, follow these steps:
 
-### **1. Get Ollama Up and Running**
+### Get Ollama Up and Running
 
 In separate terminal windows, execute the following commands:
 
-1. 
**Create and Run Models**: +Create and Run Models: - ```bash - cd content-models/activity - ollama create activity +```shell +cd content-models/activity +ollama create activity - cd ../chat - ollama create chat +cd ../chat +ollama create chat - cd ../excel_content - ollama create excel_content +cd ../excel_content +ollama create excel_content - cd ../img_content - ollama create img_content +cd ../img_content +ollama create img_content - cd ../lessons - ollama create lessons +cd ../lessons +ollama create lessons - cd ../social - ollama create social +cd ../social +ollama create social - cd ../web_content - ollama create web_content - ``` +cd ../web_content +ollama create web_content +``` -2. **Run the API and UI Servers**: +Run the API and UI Servers: - ```bash - python api.py - python ui.py - ``` +```shell +python api.py +python ui.py +``` -### **2. Run Multiple Models** +### Run Multiple Models Eventually, Ollama will serve multiple models concurrently. Use the following loop to set up and start models: -```bash +```shell cd content-models/activity ollama create activity @@ -272,11 +249,11 @@ ollama create web_content ollama serve ``` -### **3. Expose Ollama Beyond Localhost** +### Expose Ollama Beyond Localhost If you want Ollama to be available beyond localhost, use: -```bash +```shell OLLAMA_HOST=0.0.0.0:11434 ollama serve ```
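However Ollama is exposed, you can confirm it is actually serving requests over HTTP. A minimal sketch, assuming the default port 11434:

```shell
# The root path returns a short liveness message, and /api/tags
# lists the models Ollama currently has available locally.
curl -s http://localhost:11434/ || echo "Ollama is not reachable on port 11434"
curl -s http://localhost:11434/api/tags || echo "Ollama API is not reachable"
```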