diff --git a/integrations/lambda/README.md b/integrations/lambda/README.md
new file mode 100644
index 00000000..d3df857e
--- /dev/null
+++ b/integrations/lambda/README.md
@@ -0,0 +1,148 @@
+# Timestream for LiveAnalytics Lambda Sample Application
+
+## Overview
+
+This sample application demonstrates how [time series data](https://docs.aws.amazon.com/timestream/latest/developerguide/concepts.html) can be ingested into [Timestream for LiveAnalytics](https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html) using an [AWS Lambda function](https://aws.amazon.com/lambda/) and [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html).
+
+This sample application consists of three files:
+- `demo.ipynb`: A [Jupyter notebook](https://jupyter.org/) that:
+ - Generates simulated time series data from a selection of predefined scenarios or a user-defined scenario.
+ - Deploys a Lambda function that receives the data and ingests the data into Timestream for LiveAnalytics.
+ - Sends the generated time series data to the Lambda's URL using SigV4 authentication.
+ - Creates an Amazon Managed Grafana workspace.
+ - Generates and uploads a dashboard to the Amazon Managed Grafana workspace.
+- `requirements.txt`: A file containing required packages for the Jupyter notebook, for quick environment setup.
+- `environment.yml`: A Conda environment file that specifies the environment name, channel, and dependencies.
+
+The following diagram depicts the deployed Lambda function receiving the generated data and ingesting it into Timestream for LiveAnalytics, which is then queried and visualized in [Amazon Managed Grafana](https://aws.amazon.com/grafana/).
+
+
+
+## Prerequisites
+
+1. [Configure AWS credentials for use with boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).
+2. [Install Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html).
+3. On Linux and macOS, run the following command to enable `conda`, replacing `<SHELL>` with your shell, whether that be `zsh`, `bash`, or `fish`:
+ ```shell
+    conda init <SHELL>
+ ```
+4. On Linux and macOS, restart your shell or `source` your shell configuration file (`.zshrc`, `.bashrc`, etc.).
+5. Create a Conda environment named `sample_app_env` with the required packages by running:
+ ```shell
+ conda env create -f environment.yml
+ ```
+6. Activate the environment by running:
+ ```shell
+ conda activate sample_app_env
+ ```
+7. Install an application to run the `demo.ipynb` file. We recommend [Visual Studio Code](https://code.visualstudio.com/) with the [Jupyter extension](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter).
+
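+To confirm the credentials from step 1 are picked up by `boto3`, you can optionally run the following one-liner, which should print your AWS account ID:
+
+```shell
+python -c "import boto3; print(boto3.client('sts').get_caller_identity()['Account'])"
+```
+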
+## Using the Jupyter Notebook Locally
+
+These steps assume you have followed the prerequisites and are using Visual Studio Code with the Jupyter extension.
+
+To run the notebook locally:
+
+1. After configuring the Conda environment, open `demo.ipynb` in Visual Studio Code.
+2. Click the search bar at the top of the window and select **Show and Run Commands** from the dropdown menu.
+3. Search for "Python: Select Interpreter".
+4. Select the Python "sample_app_env" environment from the dropdown menu.
+5. Scroll through the steps of the Jupyter notebook, adjusting values in the "Define Timestream for LiveAnalytics Settings" and "Generate Data" sections as desired. Default values have been set for all steps.
+6. Once the kernel is running, press the **Run All** button.
+7. When all cells in the notebook have finished executing, records will have been ingested into the `sample_app_table` table in the `sample_app_database` database in Timestream for LiveAnalytics.
+
+## Using the Jupyter Notebook in Amazon SageMaker
+
+### IAM Configuration
+
+When deployed in Amazon SageMaker, the instance hosting the Jupyter notebook must use an IAM role with the following permissions, replacing `<REGION>` with your desired AWS region name and `<ACCOUNT_ID>` with your AWS account ID:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "lambda:GetFunction",
+ "lambda:GetFunctionUrlConfig",
+ "lambda:InvokeFunctionUrl",
+ "lambda:UpdateFunctionUrlConfig",
+ "lambda:CreateFunction",
+ "lambda:CreateFunctionUrlConfig",
+ "lambda:AddPermission"
+ ],
+      "Resource": "arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:TimestreamSampleLambda"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "iam:CreateRole",
+ "iam:GetRole",
+ "iam:GetRolePolicy",
+ "iam:CreatePolicy",
+ "iam:CreatePolicyVersion",
+ "iam:UpdateAssumeRolePolicy",
+ "iam:GetPolicy",
+ "iam:AttachRolePolicy",
+ "iam:AttachGroupPolicy",
+ "iam:PutRolePolicy",
+ "iam:PutGroupPolicy",
+ "iam:PassRole"
+ ],
+ "Resource": [
+        "arn:aws:iam::<ACCOUNT_ID>:role/TimestreamLambdaRole",
+        "arn:aws:iam::<ACCOUNT_ID>:role/GrafanaWorkspaceRole"
+ ]
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "sso:DescribeRegisteredRegions",
+ "sso:CreateManagedApplicationInstance"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "grafana:DescribeWorkspace",
+ "grafana:CreateWorkspace",
+ "grafana:ListWorkspaces",
+ "grafana:CreateWorkspaceServiceAccount",
+ "grafana:CreateWorkspaceServiceAccountToken",
+ "grafana:DeleteWorkspaceServiceAccountToken",
+ "grafana:DescribeWorkspaceConfiguration",
+ "grafana:UpdateWorkspaceConfiguration",
+ "grafana:ListWorkspaceServiceAccounts",
+ "grafana:ListWorkspaceServiceAccountTokens"
+ ],
+      "Resource": "arn:aws:grafana:<REGION>:<ACCOUNT_ID>:/workspaces*"
+ }
+ ]
+}
+```
+
+The Lambda function name `TimestreamSampleLambda`, the IAM role name `TimestreamLambdaRole`, and the Grafana workspace role name `GrafanaWorkspaceRole` are the default names used in the Jupyter notebook.
+
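+If you create this role yourself, also ensure its trust relationship allows SageMaker to assume it; otherwise the notebook instance cannot use the role. A minimal trust policy for reference, using the standard SageMaker service principal:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "sagemaker.amazonaws.com"
+      },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+```
+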
+### SageMaker Configuration
+
+To host the Jupyter notebook in SageMaker and run the notebook:
+
+1. Go to the Amazon SageMaker console.
+2. In the navigation panel, choose **Notebooks**.
+3. Choose **Create notebook instance**.
+4. For IAM role, select the role created in the above [**IAM Configuration**](#iam-configuration) section.
+5. After configuring the rest of the notebook settings to your liking, choose **Create notebook instance**.
+6. Once the notebook's status is **InService**, choose the notebook's **Open Jupyter** link.
+7. Choose **Upload** and select `demo.ipynb`.
+8. Choose the uploaded notebook.
+9. In the **Kernel not found** popup window, select `conda_python3` from the dropdown menu and choose **Set Kernel**.
+10. Once the kernel has started, choose **Kernel** > **Restart & Run All**.
+11. When all cells in the notebook have finished executing, records will have been ingested into the `sample_app_table` table in the `sample_app_database` database in Timestream for LiveAnalytics.
+
+## Viewing Data in Amazon Managed Grafana
+
+The notebook will create an Amazon Managed Grafana workspace and upload a dashboard to it.
+
+Before accessing the dashboard, an IAM Identity Center user must be created and added to the workspace manually. The last two steps of the notebook provide instructions for doing this and for accessing the dashboard. The "Generate and Upload Grafana Dashboard" cell outputs the login URL for the workspace.
diff --git a/integrations/lambda/demo.ipynb b/integrations/lambda/demo.ipynb
new file mode 100644
index 00000000..29545f07
--- /dev/null
+++ b/integrations/lambda/demo.ipynb
@@ -0,0 +1,2116 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "6efcc07e",
+ "metadata": {},
+ "source": [
+ "# Timestream Lambda Function Sample Application"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dfb5f3c3",
+ "metadata": {},
+ "source": [
+    "This notebook demonstrates generating time series data according to a user-defined schema, deploying an AWS Lambda function that ingests the data into Timestream for LiveAnalytics, and visualizing the data using Amazon Managed Grafana."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "318886b7",
+ "metadata": {},
+ "source": [
+ "## Step 1: Generate Data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "26c9edae",
+ "metadata": {},
+ "source": [
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "59510d93",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import random\n",
+ "from datetime import datetime, timedelta, timezone\n",
+ "import json\n",
+ "import math"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "abd60adf",
+ "metadata": {},
+ "source": [
+ "### Data Generator Base Class Definition\n",
+ "\n",
+    "Defines the `DataGenerator` class, a base class for generating time series data. Data scenarios are defined by subclasses, which set the `measure_templates` and `dimension_templates` values that define the possible values and restrictions for record measures and dimensions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "de3bda06",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class DataGenerator:\n",
+ " # A list of dicts used to define the format of dimensions.\n",
+ " #\n",
+ " # Format:\n",
+ " # \"name\": str. Required. The name of the dimension.\n",
+ " # \"value_length\": int. Optional. The length of the dimension value, when it is randomly generated. If neither this nor\n",
+    "    #     \"random_options\" nor \"unique_options\" is provided, defaults to 20.\n",
+ " # \"random_options\": [str]. Optional. An array of strings to pick at random as options for the dimension value.\n",
+ " # Has precedence over \"value_length\". Values will be reused.\n",
+ " # \"unique_options\": [str]. Optional. An array of strings to pick serially for the dimension value. Each value in\n",
+ " # this array will be used once.\n",
+ " #\n",
+ " # Example:\n",
+ " # [\n",
+ " # {\n",
+ " # \"name\": \"device_id\",\n",
+ " # \"value_length\": 9\n",
+ " # },\n",
+ " # {\n",
+ " # \"name\": \"region\",\n",
+ " # \"random_options\": [\"us-east-1\", \"us-west-2\"]\n",
+ " # }\n",
+ " # ]\n",
+ " dimension_templates: list\n",
+ "\n",
+ " # A list of dicts used to define the format of measures.\n",
+ " #\n",
+ " # Format:\n",
+ " # \"name\": str. Required. The name of the measure.\n",
+ " # \"type\": str. Optional. The Timestream for LiveAnalytics data type of the measure. Valid options are \"DOUBLE\",\n",
+ " # \"BIGINT\", \"BOOLEAN\", and \"VARCHAR\". Defaults to \"DOUBLE\".\n",
+    "    # \"max_variation\": Optional. The maximum amount a measure value can change over time, positively or negatively.\n",
+ " # For example, with a value of 5.0, measure values will increment by a max of 5.0 and a min of -5.0. Defaults to 1.5.\n",
+ " # \"max\": Optional. The maximum measure value. Defaults to 100.0.\n",
+ " # \"min\": Optional. The minimum measure value. Defaults to 0.0.\n",
+ " # \"random_options\": Optional. A list of values the measure value can have. All elements of the list should be the same data type and\n",
+ " # match the data type specified by the \"type\" field. This field overrides \"max_variation\", \"max\", and \"min\".\n",
+    "    #     Values may be reused, in the same way as the dimension_templates \"random_options\" field.\n",
+ " #\n",
+ " # Example:\n",
+ " # [\n",
+ " # {\n",
+ " # \"name\": \"temperature_celsius\",\n",
+ " # \"type\": \"DOUBLE\",\n",
+ " # \"max_variation\": 2.0,\n",
+ " # \"max\": 40.0,\n",
+ " # \"min\": 30.0\n",
+ " # },\n",
+ " # {\n",
+ " # \"name\": \"symptoms\",\n",
+ " # \"type\": \"VARCHAR\",\n",
+ " # \"random_options\": [\"none\", \"headache\", \"shortness of breath\", \"fatigue\", \"nausea\"]\n",
+ " # }\n",
+ " # ]\n",
+ " measure_templates: list\n",
+ "\n",
+ " def __init__(self):\n",
+ " # All subclasses need to do is provide values for measure_templates and dimension_templates\n",
+ " self.measure_templates = []\n",
+ " self.dimension_templates = []\n",
+ "\n",
+ " def generate(self, start_date: datetime, end_date: datetime, reporting_frequency: timedelta,\n",
+ " num_entities: int, precision=\"MILLISECONDS\", generate_unique_options_fallback=False) -> list:\n",
+ " \"\"\"\n",
+ " Generates time series data.\n",
+ "\n",
+ " :param start_date: The start date to use when generating records. This cannot be older in hours\n",
+ " than the memory retention period in hours value for the table.\n",
+ " :param end_date: The end date to use when generating records. The maximum end date\n",
+ " Timestream for LiveAnalytics allows is 15 minutes in the future.\n",
+ " :param reporting_frequency: The frequency that records are generated by all entities, for example,\n",
+ " every 2 seconds, every 5 hours, etc.\n",
+ " :param num_entities: The number of entities that will report for each timestamp, for example,\n",
+ " the number of servers or number of stocks.\n",
+ " :param precision: The precision to use for record timestamps. Valid options are \"MILLISECONDS\",\n",
+ " \"SECONDS\", and \"MICROSECONDS\".\n",
+ " :param generate_unique_options_fallback: Whether to generate random strings for dimension values\n",
+ " after all values in a dimension template's \"unique_options\" array have been used.\n",
+ " \"\"\"\n",
+ "\n",
+ " # Construct entities (e.g., servers, weather reporting stations, stocks, etc.)\n",
+ " entities = []\n",
+ " for _ in range(num_entities):\n",
+ " entity = {\"latest_measures\": {}}\n",
+ " for dimension_template in self.dimension_templates:\n",
+ " dimension_value_length = 20\n",
+ "\n",
+ " if \"value_length\" in dimension_template:\n",
+ " dimension_value_length = dimension_template[\"value_length\"]\n",
+ "\n",
+ " # Unique options that should not be reused. Stock symbols are\n",
+ " # an example of this.\n",
+ " if \"unique_options\" in dimension_template:\n",
+ " if len(dimension_template[\"unique_options\"]) > 0:\n",
+ " dimension_value = dimension_template[\"unique_options\"][0]\n",
+ " # Dimensions must be unique. Each time a choice is chosen, remove it from the list.\n",
+ " dimension_template[\"unique_options\"].pop(0)\n",
+ " elif generate_unique_options_fallback:\n",
+ " # Generate a fallback value, since we've run out of options and the user\n",
+ " # has specified that they want more unique values generated.\n",
+ " dimension_value = self._generate_random_string(dimension_value_length)\n",
+ " \n",
+ " # Options that are reused. Server regions are an example of this.\n",
+ " elif \"random_options\" in dimension_template:\n",
+ " dimension_value = random.choice(dimension_template[\"random_options\"])\n",
+ "\n",
+ " elif \"unique_options\" not in dimension_template and \"random_options\" not in dimension_template:\n",
+ " dimension_value = self._generate_random_string(dimension_value_length)\n",
+ "\n",
+ " entity[dimension_template[\"name\"]] = dimension_value\n",
+ " \n",
+ " entities.append(entity)\n",
+ "\n",
+ " records = []\n",
+ " current_date = start_date\n",
+ " while current_date <= end_date:\n",
+ " # Each entity has a record for a single timestamp\n",
+ " for entity in entities:\n",
+ " dimensions = []\n",
+ " for key in entity:\n",
+ " if key != \"latest_measures\":\n",
+ " dimension = {\n",
+ " \"Name\": key,\n",
+ " \"Value\": entity[key],\n",
+ " \"DimensionValueType\": \"VARCHAR\" # Not configurable\n",
+ " }\n",
+ " dimensions.append(dimension)\n",
+ "\n",
+ " measures = []\n",
+ "\n",
+ " for measure_template in self.measure_templates:\n",
+ " if \"name\" not in measure_template:\n",
+ " raise Exception(f\"Measure template was missing name: {measure_template}\")\n",
+ " measure_name = measure_template[\"name\"]\n",
+ "\n",
+ " # Optional template fields\n",
+ " measure_value_type = \"DOUBLE\"\n",
+ " if \"type\" in measure_template:\n",
+ " measure_value_type = str(measure_template[\"type\"]).strip().upper()\n",
+ " max_variation = 1.5\n",
+ " if \"max_variation\" in measure_template:\n",
+ " max_variation = measure_template[\"max_variation\"]\n",
+ " max_value = 100.0\n",
+ " if \"max\" in measure_template:\n",
+ " max_value = measure_template[\"max\"]\n",
+ " min_value = 0.0\n",
+ " if \"min\" in measure_template:\n",
+ " min_value = measure_template[\"min\"]\n",
+ " varchar_length = 10\n",
+ " if \"varchar_length\" in measure_template:\n",
+ " varchar_length = measure_template[\"varchar_length\"]\n",
+ "\n",
+ " measure = {\n",
+ " \"MeasureName\": measure_name,\n",
+ " \"MeasureValueType\": measure_value_type\n",
+ " }\n",
+ "\n",
+ " measure_value = None\n",
+ " if \"random_options\" in measure_template:\n",
+ " measure_value = random.choice(measure_template[\"random_options\"])\n",
+ " else:\n",
+ " if current_date == start_date:\n",
+ " if measure_value_type == \"DOUBLE\":\n",
+ " measure_value = random.uniform(min_value, max_value)\n",
+ " elif measure_value_type == \"VARCHAR\":\n",
+ " measure_value = self._generate_random_string(varchar_length)\n",
+ " elif measure_value_type == \"BIGINT\":\n",
+ " measure_value = int(random.uniform(min_value, max_value))\n",
+ " elif measure_value_type == \"BOOLEAN\":\n",
+    "                                measure_value = random.choice([True, False])\n",
+ " else:\n",
+ " raise Exception(\"Measure value type not recognized\")\n",
+ " else:\n",
+ " if measure_value_type == \"DOUBLE\":\n",
+ " measure_value = max(min_value, min(entity[\"latest_measures\"][measure_name] + random.uniform(-max_variation, max_variation), max_value))\n",
+ " elif measure_value_type == \"VARCHAR\":\n",
+ " measure_value = self._generate_random_string(varchar_length)\n",
+ " elif measure_value_type == \"BIGINT\":\n",
+ " measure_value = int(max(min_value, min(entity[\"latest_measures\"][measure_name] + int(random.uniform(-max_variation, max_variation)), max_value)))\n",
+ " elif measure_value_type == \"BOOLEAN\":\n",
+    "                                measure_value = random.choice([True, False])\n",
+ " else:\n",
+ " raise Exception(\"Measure value type not recognized\")\n",
+ " \n",
+ " # Store the actual value in the entity for future iteration\n",
+ " entity[\"latest_measures\"][measure_name] = measure_value\n",
+ "\n",
+ " # Timestream requires that all data be inserted as a string\n",
+ " measure[\"MeasureValue\"] = str(measure_value)\n",
+ "\n",
+ " measures.append(measure)\n",
+ " \n",
+ " precision = precision.strip().upper()\n",
+ " if precision == \"SECONDS\":\n",
+ " timestamp = str(int(current_date.timestamp()))\n",
+    "                    elif precision == \"MICROSECONDS\":\n",
+ " timestamp = str(int(current_date.timestamp() * 1_000_000))\n",
+ " else:\n",
+ " # Default to millisecond precision\n",
+ " timestamp = str(int(current_date.timestamp() * 1_000))\n",
+ "\n",
+ " record = {\n",
+ " \"Dimensions\": dimensions,\n",
+ " \"Time\": timestamp,\n",
+ " \"Measures\": measures\n",
+ " }\n",
+ " records.append(record)\n",
+ " current_date += reporting_frequency\n",
+ " return records\n",
+ " \n",
+ " def _generate_random_string(self, length: int):\n",
+ " \"\"\"\n",
+ " Generates a random alphanumeric string.\n",
+ "\n",
+ " :param length: The length of the string to generate.\n",
+ " \"\"\"\n",
+ "\n",
+ " letters = \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789\"\n",
+ " return ''.join(random.choice(letters) for _ in range(length))\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0241c11e",
+ "metadata": {},
+ "source": [
+ "### Data Generator Subclass Definitions\n",
+ "\n",
+ "Defines subclasses of DataGenerator that define `measure_templates` and `dimension_templates`.\n",
+ "\n",
+ "The following subclasses are defined:\n",
+ "- `DevOpsDataGenerator`: Generates generic DevOps time series data for servers.\n",
+    "- `IoTDataGenerator`: Generates generic IoT time series data for devices.\n",
+    "- `StockMarketDataGenerator`: Generates time series data simulating stock market prices.\n",
+ "- `WeatherDataGenerator`: Generates time series data simulating weather reporting for different US cities.\n",
+ "- `GamingDataGenerator`: Generates time series data simulating player activity in a competitive online video game.\n",
+ "- `AirQualityDataGenerator`: Generates time series data simulating air quality in different cities around the world.\n",
+ "- `PatientDataGenerator`: Generates time series data simulating the status of healthcare patients.\n",
+ "- `EnergyDataGenerator`: Generates time series data simulating building energy usage.\n",
+ "- `FlightDataGenerator`: Generates time series data simulating different airline flights and the status of in-flight planes.\n",
+ "- `ExchangeRateDataGenerator`: Generates time series data simulating the fluctuating exchange rates of different currency pairs.\n",
+ "- `CustomDataGenerator`: Allows users to define their own `measure_templates` and `dimension_templates` to generate data of their choosing."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4d3deb01",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class DevOpsDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"cpu_usage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.2,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"mem_usage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.3,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"disk_usage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"network_in\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 100,\n",
+ " \"max\": 5000,\n",
+ " \"min\": 0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"network_out\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 20,\n",
+ " \"max\": 2000,\n",
+ " \"min\": 0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"server_id\",\n",
+ " \"value_length\": 14\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"region\",\n",
+ " \"random_options\": [\"ca-central-1\", \"ca-west-1\", \"us-east-1\", \"us-east-2\", \"us-west-1\", \"us-west-2\", \"sa-east-1\", \"eu-central-1\", \"eu-west-1\", \"eu-west-2\", \"eu-south-1\", \"eu-west-3\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "class IoTDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"temperature_celsius\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 60,\n",
+ " \"min\": -30\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"relative_humidity\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.3,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"battery_level\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 1,\n",
+ " \"max\": 100,\n",
+ " \"min\": 1 # All devices have enough battery to report\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"velocity\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 4.2,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"device_id\",\n",
+ " \"value_length\": 14\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "class StockMarketDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"volume\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 30000,\n",
+ " \"max\": 100000000,\n",
+ " \"min\": 1\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"market_cap\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 100,\n",
+ " \"max\": 100000000000,\n",
+ " \"min\": 1000000000,\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"price_change\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 1000.0,\n",
+ " \"min\": 1.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"percentage_change\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 10.0,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"stock_symbol\",\n",
+ " \"unique_options\": [\"AAPL\", \"TSLA\", \"DJIA\", \"SPOT\", \"NFLX\", \"MSFT\", \"MCD\", \"PG\", \"KO\", \"MMM\", \"IBM\", \"AMZN\", \"VZ\", \"JNJ\", \"WMT\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+    "    def generate(self, start_date, end_date, reporting_frequency, num_entities, precision=\"MILLISECONDS\", generate_unique_options_fallback=False):\n",
+ " num_stock_symbols = len(self.dimension_templates[0][\"unique_options\"])\n",
+    "        if num_entities > num_stock_symbols and not generate_unique_options_fallback:\n",
+ " raise Exception(f\"num_entities ({num_entities}) was greater than the number of stock symbols ({num_stock_symbols})\")\n",
+    "        return super().generate(start_date, end_date, reporting_frequency, num_entities, precision=precision, generate_unique_options_fallback=generate_unique_options_fallback)\n",
+ "\n",
+ "class WeatherDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"temperature_celsius\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.5,\n",
+ " \"max\": 65.0,\n",
+ " \"min\": -30\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"relative_humidity\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"wind_speed_kph\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 5.5,\n",
+ " \"max\": 407.164,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"precipitation_mm\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.5,\n",
+ " \"max\": 60.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"cloud_percentage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 10.0,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"pressure_hpa\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 10,\n",
+ " \"max\": 1050,\n",
+ " \"min\": 950\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"visibility_km\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 20,\n",
+ " \"max\": 200,\n",
+ " \"min\": 1\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"location\", \n",
+ " \"unique_options\": [\"San Francisco, CA\", \"Chicago, IL\", \"New York, NY\", \"Miami, FL\", \"Dallas, TX\", \"Gary, IN\", \"Las Vegas, NV\", \"San Diego, CA\", \"Portland, OR\", \"Seattle, WA\", \"New Orleans, LA\", \"Fargo, ND\", \"Albuquerque, NM\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ " def generate(self, start_date, end_date, reporting_frequency, num_entities, precision=\"MILLISECONDS\", generate_unique_options_fallback=False):\n",
+ " num_locations = len(self.dimension_templates[0][\"unique_options\"])\n",
+ " if num_entities > num_locations and not generate_unique_options_fallback:\n",
+ " raise Exception(f\"num_entities ({num_entities}) was greater than the number of locations ({num_locations})\")\n",
+    "        return super().generate(start_date, end_date, reporting_frequency, num_entities, precision=precision, generate_unique_options_fallback=generate_unique_options_fallback)\n",
+ "\n",
+ "class GamingDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"X\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.1,\n",
+ " \"max\": 5000.0,\n",
+ " \"min\": -5000.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Y\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.1,\n",
+ " \"max\": 5000.0,\n",
+ " \"min\": -5000.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Z\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.1,\n",
+ " \"max\": 5000.0,\n",
+ " \"min\": -5000.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"health\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 60,\n",
+ " \"max\": 100,\n",
+ " \"min\": 1\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"ping\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 10,\n",
+ " \"max\": 250,\n",
+ " \"min\": 25\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"current_equip_value\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 150,\n",
+ " \"max\": 1000000,\n",
+ " \"min\": 10\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"flash_duration\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.3,\n",
+ " \"max\": 10.0,\n",
+ " \"min\": 0.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"pitch\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 20.0,\n",
+ " \"max\": 90.0,\n",
+ " \"min\": -90.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"yaw\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.8,\n",
+ " \"max\": 360.0,\n",
+ " \"min\": 0.0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"player_id\",\n",
+ " \"value_length\": 25\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"player_name\",\n",
+ " \"value_length\": 15\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"clan\",\n",
+ " \"random_options\": [\"mosdeff\", \"green_berets\", \"golden_ducks\", \"roberts\", \"club_z\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "class AirQualityDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"PM2.5\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.0,\n",
+ " \"max\": 150.0,\n",
+ " \"min\": 15.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"PM10\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.0,\n",
+ " \"max\": 150.0,\n",
+ " \"min\": 15.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"CO_ppm\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 5.0,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.1\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"NO2_ppb\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.0,\n",
+ " \"max\": 300.0,\n",
+ " \"min\": 0.5\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"O2_percentage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.0,\n",
+ " \"max\": 25.0,\n",
+ " \"min\": 20.8\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"city\",\n",
+ " \"unique_options\": [\"Los Angeles\", \"New York\", \"Vancouver\", \"Sydney\", \"Delhi\", \"Beijing\", \"London\", \"Miami\", \"Toronto\", \"Seattle\", \"Amsterdam\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ " def generate(self, start_date, end_date, reporting_frequency, num_entities, precision=\"MILLISECONDS\", generate_unique_options_fallback=False):\n",
+ " num_cities = len(self.dimension_templates[0][\"unique_options\"])\n",
+ " if num_entities > num_cities and not generate_unique_options_fallback:\n",
+ " raise Exception(f\"num_entities ({num_entities}) was greater than the number of cities ({num_cities})\")\n",
+    "        return super().generate(start_date, end_date, reporting_frequency, num_entities, precision=precision, generate_unique_options_fallback=generate_unique_options_fallback)\n",
+ "\n",
+ "class PatientDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"heart_rate_bpm\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 5.0,\n",
+ " \"max\": 100,\n",
+ " \"min\": 60\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"oxygen_saturation_percentage\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 5.0,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 60.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"temperature_celsius\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.0,\n",
+ " \"max\": 40.0,\n",
+ " \"min\": 30.0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"patient_id\",\n",
+ " \"value_length\": 14\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "class EnergyDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"energy_usage_kWh\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 50,\n",
+ " \"max\": 300,\n",
+ " \"min\": 10\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"occupancy\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 20,\n",
+ " \"max\": 100,\n",
+ " \"min\": 0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"temperature_celsius\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 60,\n",
+ " \"min\": -30\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"relative_humidity\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.3,\n",
+ " \"max\": 100.0,\n",
+ " \"min\": 0.0\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"building_id\",\n",
+ " \"value_length\": 14\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "class FlightDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"fuel_level_gallons\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 40,\n",
+ " \"max\": 8000,\n",
+ " \"min\": 1\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"heading\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 10,\n",
+ " \"max\": 360,\n",
+ " \"min\": 0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"air_temperature_celsius\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 2.0,\n",
+ " \"max\": 40.0,\n",
+ " \"min\": -90.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"lat\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 90.0,\n",
+ " \"min\": -90.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"lon\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 0.5,\n",
+ " \"max\": 180.0,\n",
+ " \"min\": -180.0\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"speed_knots\",\n",
+ " \"type\": \"BIGINT\",\n",
+ " \"max_variation\": 10,\n",
+ " \"max\": 250,\n",
+ " \"min\": 200\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"flight_id\",\n",
+ " \"unique_options\": [\"FL123\", \"FL456\", \"FL890\", \"FL333\", \"FL100\", \"FL650\", \"FL256\", \"FL430\", \"FL211\", \"FL874\"]\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"aircraft_type\",\n",
+ " \"random_options\": [\"Boeing 737\", \"Airbus A320\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+    "    def generate(self, start_date, end_date, reporting_frequency, num_entities, precision=\"MILLISECONDS\", generate_unique_options_fallback=False):\n",
+ " num_flight_ids = len(self.dimension_templates[0][\"unique_options\"])\n",
+ " if num_entities > num_flight_ids and not generate_unique_options_fallback:\n",
+ " raise Exception(f\"num_entities ({num_entities}) was greater than the number of flight IDs ({num_flight_ids})\")\n",
+    "        return super().generate(start_date, end_date, reporting_frequency, num_entities, precision=precision, generate_unique_options_fallback=generate_unique_options_fallback)\n",
+ "\n",
+ "class ExchangeRateDataGenerator(DataGenerator):\n",
+ " def __init__(self):\n",
+ " self.measure_templates = [\n",
+ " {\n",
+ " \"name\": \"exchange_rate\",\n",
+ " \"type\": \"DOUBLE\",\n",
+ " \"max_variation\": 1.0,\n",
+ " \"max\": 90.0,\n",
+ " \"min\": 0.62\n",
+ " }\n",
+ " ]\n",
+ " self.dimension_templates = [\n",
+ " {\n",
+ " \"name\": \"currency_pair\",\n",
+ " \"unique_options\": [\"USD/EUR\", \"USD/CAD\", \"USD/GPP\", \"USD/CNY\", \"GBP/CAD\", \"GBP/JPY\", \"GBP/INR\", \"CHF/INR\", \"XAU/CNY\", \"UYU/CAD\", \"USD/XAU\"]\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ " def generate(self, start_date, end_date, reporting_frequency, num_entities, precision=\"MILLISECONDS\", generate_unique_options_fallback=False):\n",
+ " num_currency_pairs = len(self.dimension_templates[0][\"unique_options\"])\n",
+ " if num_entities > num_currency_pairs and not generate_unique_options_fallback:\n",
+ " raise Exception(f\"num_entities ({num_entities}) was greater than the number of currency pairs ({num_currency_pairs})\")\n",
+    "        return super().generate(start_date, end_date, reporting_frequency, num_entities, precision=precision, generate_unique_options_fallback=generate_unique_options_fallback)\n",
+ "\n",
+ "\n",
+ "class CustomDataGenerator(DataGenerator):\n",
+ " def __init__(self, measure_templates: list, dimension_templates: list):\n",
+ " self.measure_templates = measure_templates\n",
+ " self.dimension_templates = dimension_templates"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "17b8430c",
+ "metadata": {},
+ "source": [
+ "### Define Timestream for LiveAnalytics Settings\n",
+ "\n",
+    "These variables are used later by the Lambda function when creating the table and ingesting records. They are defined here because they are also used to confirm that the desired time range for generated records is acceptable and to calculate metrics for cost estimation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fc5279cf",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "DATABASE_NAME = \"sample_app_database\"\n",
+ "TABLE_NAME = \"sample_app_table\"\n",
+ "\n",
+ "# To be used later, by the Lambda function, to create the Timestream for LiveAnalytics table.\n",
+ "# If you created your table manually, update with the actual values you configured for your table.\n",
+ "# Default values when creating a new table in the AWS console.\n",
+ "MEM_STORE_RETENTION_PERIOD_IN_HOURS = 12\n",
+ "MAG_STORE_RETENTION_PERIOD_IN_DAYS = 3653 # 10 years\n",
+ "\n",
+ "# The number of records to ingest to Timestream for LiveAnalytics at a time.\n",
+ "# Timestream for LiveAnalytics accepts a maximum of 100 records at a time.\n",
+ "BATCH_SIZE = 100\n",
+ "# BATCH_SIZE = 1\n",
+ "\n",
+ "# The precision of the timestamp for each generated record. Valid options are \"MILLISECONDS\", \"SECONDS\", and \"MICROSECONDS\".\n",
+ "# This will also be included as a query parameter in the request sent to the Lambda function.\n",
+ "PRECISION = \"MICROSECONDS\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7932f505",
+ "metadata": {},
+ "source": [
+ "### Generate Data\n",
+ "\n",
+ "The data generator classes use the `generate` function to generate data. The arguments to `generate` are as follows:\n",
+ "- `start_date`: The start date to use when generating records. This cannot be older in hours than the memory retention period in hours value for the table.\n",
+ "- `end_date`: The end date to use when generating records. The maximum end date Timestream for LiveAnalytics allows is 15 minutes in the future.\n",
+ "- `reporting_frequency`: The frequency that records are generated by all entities, for example, every 2 seconds, every 5 hours, etc.\n",
+    "- `num_entities`: The number of entities that will report for each timestamp, for example, the number of servers or number of stocks.\n",
+ "- `precision`: The precision to use for record timestamps. Valid options are `\"MILLISECONDS\"`, `\"SECONDS\"`, and `\"MICROSECONDS\"`.\n",
+ "- `generate_unique_options_fallback`: Whether to generate random strings for dimension values after all values in a dimension template's \"unique_options\" array have been used."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ea01eea0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# All timestamps default to UTC\n",
+ "end_date = datetime.now(timezone.utc)\n",
+ "start_date = end_date - timedelta(hours=2)\n",
+ "reporting_frequency = timedelta(minutes=1)\n",
+ "num_entities = 10\n",
+ "\n",
+ "if end_date > datetime.now(timezone.utc) + timedelta(minutes=15):\n",
+ " raise Exception(\"The end date for data generation cannot be more than 15 minutes in the future\")\n",
+ "if start_date < datetime.now(timezone.utc) - timedelta(hours=MEM_STORE_RETENTION_PERIOD_IN_HOURS):\n",
+ " raise Exception(f\"The start date for data generation cannot be more than {MEM_STORE_RETENTION_PERIOD_IN_HOURS} hours in the past\")\n",
+ "if start_date >= end_date:\n",
+    "    raise Exception(\"The start date for data generation must be before the end date\")\n",
+ "if (end_date - start_date) < reporting_frequency:\n",
+ " raise Exception(\"The reporting frequency is too small for the data generation time range\")\n",
+ "\n",
+ "# By default, generate DevOps data, which simulates reporting from servers\n",
+ "# Define data_generator to help generate Grafana dashboard later\n",
+ "data_generator = DevOpsDataGenerator()\n",
+ "sample_data = data_generator.generate(start_date, end_date, reporting_frequency, num_entities, precision=PRECISION)\n",
+ "\n",
+ "# Custom data\n",
+ "#measure_templates = [\n",
+ "# {\n",
+ "# \"name\": \"exchange_rate\",\n",
+ "# \"type\": \"DOUBLE\",\n",
+ "# \"max_variation\": 1.0,\n",
+ "# \"max\": 90.0,\n",
+ "# \"min\": 0.62\n",
+ "# }\n",
+ "#]\n",
+ "\n",
+ "#dimension_templates = [\n",
+ "# {\n",
+ "# \"name\": \"currency_pair\",\n",
+ "# \"value_length\": 4,\n",
+ "# \"unique_options\": [\"USD/EUR\", \"USD/CAD\", \"USD/GPP\", \"USD/CNY\", \"GBP/CAD\", \"GBP/JPY\", \"GBP/INR\", \"CHF/INR\", \"XAU/CNY\", \"UYU/CAD\"]\n",
+ "# }\n",
+ "#]\n",
+ "\n",
+ "#data_generator = CustomDataGenerator(measure_templates=measure_templates, dimension_templates=dimension_templates)\n",
+ "#sample_data = data_generator.generate(start_date, end_date, reporting_frequency, num_entities, precision=PRECISION)\n",
+ "\n",
+ "# Print generated data\n",
+ "print(json.dumps(sample_data, indent=2))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "af707378",
+ "metadata": {},
+ "source": [
+ "## Step 2: Calculate Cost Metrics\n",
+ "\n",
+ "The following cell provides metrics that can be input into the [AWS pricing calculator](https://calculator.aws/#/) to give an estimate of costs for ingesting data to Timestream for LiveAnalytics.\n",
+ "\n",
+ "The metrics are:\n",
+ "\n",
+ "- Memory store writes.\n",
+ " - This is calculated by determining the number of records that would be ingested within the `MEM_STORE_RETENTION_PERIOD_IN_HOURS` time frame.\n",
+ "\n",
+ "Magnetic store writes are not calculated since Timestream for LiveAnalytics does not allow ingesting records with timestamps outside of the `MEM_STORE_RETENTION_PERIOD_IN_HOURS` time frame. In order for records to be stored in magnetic storage, they need to first be stored in memory then be moved to magnetic storage once enough time has passed.\n",
+ "\n",
+    "These cost metrics may not be exact, as there may be a delay between generating the data and ingesting it, causing some records to be written to the magnetic store or rejected for being too old."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cb5d1e1f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The AWS pricing calculator only allows per second, per minute, per hour, per day, and per month.\n",
+ "\n",
+ "num_records = len(sample_data)\n",
+ "time_diff = end_date - start_date\n",
+ "\n",
+ "if time_diff <= timedelta(seconds=1):\n",
+ " unit = \"second\"\n",
+ " scaled_count = num_records\n",
+ "elif time_diff <= timedelta(minutes=1):\n",
+ " unit = \"minute\"\n",
+ " scaled_count = num_records\n",
+ "elif time_diff <= timedelta(hours=1):\n",
+ " unit = \"hour\"\n",
+ " scaled_count = num_records\n",
+ "elif time_diff <= timedelta(days=1):\n",
+ " unit = \"day\"\n",
+ " scaled_count = num_records\n",
+ "# 30 days in a month is standard for billing\n",
+ "elif time_diff <= timedelta(days=30):\n",
+ " unit = \"month\"\n",
+ " scaled_count = num_records\n",
+ "else:\n",
+ " unit = \"month\"\n",
+ " print(time_diff.days)\n",
+ " # Round up, since the AWS pricing calculator does not accept decimal numbers\n",
+ " scaled_count = math.ceil(num_records / (time_diff.days / 30))\n",
+ "\n",
+ "print(f\"Total records: {num_records}\")\n",
+ "print(f\"Memory store writes: {scaled_count} per {unit}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a37a6cd5",
+ "metadata": {},
+ "source": [
+ "## Step 3: Deploy AWS Lambda Function"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3368d6f6",
+ "metadata": {},
+ "source": [
+    "The following code will construct and deploy a Lambda function that ingests data into Timestream for LiveAnalytics."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "86546694",
+ "metadata": {},
+ "source": [
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3c08c93e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import boto3\n",
+ "import zipfile\n",
+ "import os\n",
+ "import json"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "92213f98",
+ "metadata": {},
+ "source": [
+ "### Generate and Deploy Lambda Function"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f87489d2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "REGION_NAME='us-west-2'\n",
+ "\n",
+ "# Initialize clients\n",
+ "\n",
+ "iam_client = boto3.client('iam', region_name=REGION_NAME)\n",
+ "lambda_client = boto3.client('lambda', region_name=REGION_NAME)\n",
+ "sts_client = boto3.client('sts', region_name=REGION_NAME)\n",
+ "account_id = sts_client.get_caller_identity()['Account']\n",
+ "\n",
+ "lambda_name = \"TimestreamSampleLambda\"\n",
+ "\n",
+ "# Create IAM Role for Lambda\n",
+ "role_name = \"TimestreamLambdaRole\"\n",
+ "assume_role_policy = {\n",
+ " \"Version\": \"2012-10-17\",\n",
+ " \"Statement\": [\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Principal\": {\"Service\": \"lambda.amazonaws.com\"},\n",
+ " \"Action\": \"sts:AssumeRole\"\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "role_arn = \"\"\n",
+ "\n",
+ "try:\n",
+ " create_role_response = iam_client.create_role(\n",
+ " RoleName=role_name,\n",
+ " AssumeRolePolicyDocument=json.dumps(assume_role_policy),\n",
+ " Description=\"Role for Lambda to write to Timestream\"\n",
+ " )\n",
+ " print(f\"Created IAM Role: {role_name}\")\n",
+ " role_arn = create_role_response['Role']['Arn']\n",
+ "except iam_client.exceptions.EntityAlreadyExistsException:\n",
+ " print(f\"IAM Role {role_name} already exists\")\n",
+ " try:\n",
+ " role_arn = iam_client.get_role(RoleName=role_name)['Role']['Arn']\n",
+ " except iam_client.exceptions.NoSuchEntityException:\n",
+ " print(\"IAM Role could not be found\")\n",
+ " raise\n",
+ "\n",
+ "# CloudWatch logs policy to be added to the role\n",
+ "cloudwatch_logs_policy = {\n",
+ " \"Version\": \"2012-10-17\",\n",
+ " \"Statement\": [\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Action\": [\n",
+ " \"logs:CreateLogGroup\",\n",
+ " \"logs:CreateLogStream\",\n",
+ " \"logs:PutLogEvents\"\n",
+ " ],\n",
+ " \"Resource\": f\"arn:aws:logs:{REGION_NAME}:{account_id}:log-group:/aws/lambda/{lambda_name}*\"\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "# Add the CloudWatch logs policy to the role\n",
+ "try:\n",
+ " iam_client.put_role_policy(\n",
+ " RoleName=role_name,\n",
+ " PolicyName='CloudWatchLogsPolicy',\n",
+ " PolicyDocument=json.dumps(cloudwatch_logs_policy)\n",
+ " )\n",
+ " print(f\"Attached CloudWatch logs policy to role: {role_name}\")\n",
+ "except Exception as e:\n",
+ " print(f\"Error attaching CloudWatch logs policy: {e}\")\n",
+ "\n",
+ "# Attach Policy to the IAM Role\n",
+ "policy_arn = \"arn:aws:iam::aws:policy/AmazonTimestreamFullAccess\"\n",
+ "iam_client.attach_role_policy(\n",
+ " RoleName=role_name,\n",
+ " PolicyArn=policy_arn\n",
+ ")\n",
+ "\n",
+ "print(f\"Attached Timestream write policy to {role_name}\")\n",
+ "\n",
+ "# Create Lambda function code\n",
+ "lambda_function_code = '''\n",
+ "import json\n",
+ "import os\n",
+ "import boto3\n",
+ "from botocore.exceptions import ClientError\n",
+ "\n",
+ "REGION_NAME = os.environ['REGION_NAME']\n",
+ "DATABASE_NAME = os.environ['DATABASE_NAME']\n",
+ "TABLE_NAME = os.environ['TABLE_NAME']\n",
+ "BATCH_SIZE = int(os.environ['BATCH_SIZE'])\n",
+ "MEM_STORE_RETENTION_PERIOD_IN_HOURS = int(os.environ['MEM_STORE_RETENTION_PERIOD_IN_HOURS'])\n",
+ "MAG_STORE_RETENTION_PERIOD_IN_DAYS = int(os.environ['MAG_STORE_RETENTION_PERIOD_IN_DAYS'])\n",
+ "\n",
+ "# Initialize the Timestream client\n",
+ "timestream_client = boto3.client('timestream-write', REGION_NAME)\n",
+ "\n",
+ "# Define your table retention properties\n",
+ "RETENTION_PROPERTIES = {\n",
+    "    'MemoryStoreRetentionPeriodInHours': MEM_STORE_RETENTION_PERIOD_IN_HOURS,\n",
+    "    'MagneticStoreRetentionPeriodInDays': MAG_STORE_RETENTION_PERIOD_IN_DAYS\n",
+ "}\n",
+ "\n",
+ "def create_timestream_database_and_table():\n",
+ " \"\"\"\n",
+ " Create Timestream database and table if they do not exist.\n",
+ " \"\"\"\n",
+ " try:\n",
+ " # Create database if it does not exist\n",
+ " timestream_client.create_database(DatabaseName=DATABASE_NAME)\n",
+ " print(f\"Database '{DATABASE_NAME}' created successfully.\")\n",
+ " except ClientError as e:\n",
+ " if e.response['Error']['Code'] == 'ConflictException':\n",
+ " print(f\"Database '{DATABASE_NAME}' already exists.\")\n",
+ " else:\n",
+ " raise e # Raise if it's a different error\n",
+ "\n",
+ " try:\n",
+ " # Create table if it does not exist\n",
+ " timestream_client.create_table(\n",
+ " DatabaseName=DATABASE_NAME,\n",
+ " TableName=TABLE_NAME,\n",
+ " RetentionProperties=RETENTION_PROPERTIES\n",
+ " )\n",
+ " print(f\"Table '{TABLE_NAME}' created successfully in database '{DATABASE_NAME}'.\")\n",
+ " except ClientError as e:\n",
+ " if e.response['Error']['Code'] == 'ConflictException':\n",
+ " print(f\"Table '{TABLE_NAME}' already exists in database '{DATABASE_NAME}'.\")\n",
+ " else:\n",
+ " raise e # Raise if it's a different error\n",
+ "\n",
+ "def lambda_handler(event, context):\n",
+ " \"\"\"\n",
+ " Lambda function to process the request and ingest records into Timestream.\n",
+ " The function accepts a list of records, handles MULTI measure types, \n",
+ " and sends data in batches to Timestream.\n",
+ " \"\"\"\n",
+ "\n",
+ " print(event)\n",
+ " query_params = event.get('queryStringParameters', {})\n",
+ " precision = query_params.get('precision', 'MILLISECONDS')\n",
+ "\n",
+ " # Create the database and table if they do not exist\n",
+ " create_timestream_database_and_table()\n",
+ "\n",
+ " try:\n",
+ " # Extract the records from the event\n",
+ " body = event.get('body', '{}')\n",
+ " parsed_body = json.loads(body)\n",
+ " records = parsed_body.get('records', [])\n",
+ " if not records:\n",
+ " return {\n",
+ " \"statusCode\": 400,\n",
+ " \"body\": json.dumps(\"No records found in the request.\")\n",
+ " }\n",
+ "\n",
+ " # Process records in batches\n",
+ " for i in range(0, len(records), BATCH_SIZE):\n",
+    "            records_batch = records[i:i + BATCH_SIZE]\n",
+ " # Prepare the records for Timestream\n",
+ " prepared_records = []\n",
+ "\n",
+ " for record in records_batch:\n",
+ " dimensions = record.get(\"Dimensions\", [])\n",
+ " time_value = record.get(\"Time\")\n",
+ " measures = record.get(\"Measures\", [])\n",
+ "\n",
+ " # Check if there are multiple measures, in which case we'll use MULTI\n",
+ " if len(measures) > 1:\n",
+ " measure_value_type = 'MULTI'\n",
+ " multi_value_measure = {\n",
+ " 'MeasureName': 'metrics', # General measure name, used for any multi-measure dataset\n",
+ " 'MeasureValues': [\n",
+ " {\n",
+ " 'Name': m['MeasureName'],\n",
+ " 'Value': m['MeasureValue'],\n",
+ " 'Type': m['MeasureValueType']\n",
+ " }\n",
+ " for m in measures\n",
+ " ]\n",
+ " }\n",
+ " prepared_record = {\n",
+ " 'Dimensions': dimensions,\n",
+ " 'Time': time_value,\n",
+ " 'TimeUnit': precision,\n",
+ " 'MeasureName': multi_value_measure['MeasureName'],\n",
+ " 'MeasureValueType': measure_value_type,\n",
+ " 'MeasureValues': multi_value_measure['MeasureValues']\n",
+ " }\n",
+ " else:\n",
+ " # Handle the case where there is only one measure\n",
+ " measure = measures[0]\n",
+ " prepared_record = {\n",
+ " 'Dimensions': dimensions,\n",
+ " 'Time': time_value,\n",
+ " 'TimeUnit': precision,\n",
+ " 'MeasureName': measure['MeasureName'],\n",
+ " 'MeasureValue': measure['MeasureValue'],\n",
+ " 'MeasureValueType': measure['MeasureValueType']\n",
+ " }\n",
+ "\n",
+ " prepared_records.append(prepared_record)\n",
+ "\n",
+ " # Write to Timestream using the `write_records` API\n",
+ " response = timestream_client.write_records(\n",
+ " DatabaseName=DATABASE_NAME,\n",
+ " TableName=TABLE_NAME,\n",
+ " Records=prepared_records\n",
+ " )\n",
+ " print(f\"Batch write successful for records {i} to {i + len(records_batch) - 1}: {response}\")\n",
+ "\n",
+ " return {\n",
+ " \"statusCode\": 200,\n",
+ " \"body\": json.dumps(f\"Successfully ingested {len(records)} records into Timestream.\")\n",
+ " }\n",
+ " \n",
+ " except ClientError as e:\n",
+ " print(f\"Failed to write to Timestream: {e}\")\n",
+ " return {\n",
+ " \"statusCode\": 500,\n",
+ " \"body\": json.dumps(f\"Error writing to Timestream: {str(e)}\")\n",
+ " }\n",
+ "'''\n",
+ "\n",
+ "# Save the Lambda function code to a file\n",
+ "lambda_function_file = \"lambda_function.py\"\n",
+ "with open(lambda_function_file, 'w') as f:\n",
+ " f.write(lambda_function_code)\n",
+ "\n",
+ "# Create a deployment package (zip file)\n",
+ "lambda_zip = \"lambda_function.zip\"\n",
+ "with zipfile.ZipFile(lambda_zip, 'w') as zipf:\n",
+ " zipf.write(lambda_function_file)\n",
+ "\n",
+ "try:\n",
+ " with open(lambda_zip, 'rb') as f:\n",
+ " lambda_client.create_function(\n",
+ " FunctionName=lambda_name,\n",
+ " Runtime='python3.12',\n",
+ " Role=role_arn,\n",
+ " Handler='lambda_function.lambda_handler',\n",
+ " Architectures=['arm64'],\n",
+ " Code={'ZipFile': f.read()},\n",
+ " Environment={\n",
+ " 'Variables': {\n",
+ " 'REGION_NAME': REGION_NAME,\n",
+ " 'DATABASE_NAME': DATABASE_NAME,\n",
+ " 'TABLE_NAME': TABLE_NAME,\n",
+ " 'MEM_STORE_RETENTION_PERIOD_IN_HOURS': str(MEM_STORE_RETENTION_PERIOD_IN_HOURS),\n",
+ " 'MAG_STORE_RETENTION_PERIOD_IN_DAYS': str(MAG_STORE_RETENTION_PERIOD_IN_DAYS),\n",
+ " 'BATCH_SIZE': str(BATCH_SIZE)\n",
+ " }\n",
+ " },\n",
+ " Timeout=30,\n",
+ " MemorySize=128\n",
+ " )\n",
+ " print(f\"Lambda function {lambda_name} created successfully.\")\n",
+ "except lambda_client.exceptions.ResourceConflictException:\n",
+ " print(f\"Lambda function {lambda_name} already exists.\")\n",
+ "\n",
+ "# Clean up the files\n",
+ "os.remove(lambda_function_file)\n",
+ "os.remove(lambda_zip)\n",
+ "\n",
+ "# Add a resource policy to allow invocation via the function URL\n",
+ "\n",
+ "try:\n",
+ " lambda_client.add_permission(\n",
+ " FunctionName=lambda_name,\n",
+ " StatementId='FunctionURLAllowInvoke',\n",
+ " Action='lambda:InvokeFunctionUrl',\n",
+ " Principal=role_arn,\n",
+ " FunctionUrlAuthType='AWS_IAM'\n",
+ " )\n",
+ " print(f\"Added resource policy to allow function URL invocation for {lambda_name}.\")\n",
+ "except lambda_client.exceptions.ResourceConflictException:\n",
+ " print(f\"Resource policy for {lambda_name} already exists.\")\n",
+ "\n",
+ "# Create or get the Lambda Function URL\n",
+ "try:\n",
+ " response = lambda_client.create_function_url_config(\n",
+ " FunctionName=lambda_name,\n",
+ " AuthType='AWS_IAM'\n",
+ " )\n",
+ " function_url = response['FunctionUrl']\n",
+ " print(f\"Lambda Function URL: {function_url}\")\n",
+ "except lambda_client.exceptions.ResourceConflictException:\n",
+ " # If the URL configuration already exists, retrieve it\n",
+ " response = lambda_client.get_function_url_config(FunctionName=lambda_name)\n",
+ " function_url = response['FunctionUrl']\n",
+ " print(f\"Lambda Function URL (existing): {function_url}\")\n"
+ ]
+ },
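+  {
+   "cell_type": "markdown",
+   "id": "9f1c2a7b",
+   "metadata": {},
+   "source": [
+    "### Wait for the Function to Become Active (Optional)\n",
+    "\n",
+    "A newly created Lambda function can briefly report a `Pending` state before it can be invoked. The following optional cell is a small sketch (not part of the deployment itself) that uses boto3's `function_active_v2` waiter to block until the function is ready."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "8d2e3b4c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional: wait until the Lambda function is fully provisioned before sending data to it\n",
+    "waiter = lambda_client.get_waiter('function_active_v2')\n",
+    "waiter.wait(FunctionName=lambda_name)\n",
+    "print(f\"{lambda_name} is active and ready to be invoked\")"
+   ]
+  },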
+ {
+ "cell_type": "markdown",
+ "id": "cef5a36c",
+ "metadata": {},
+ "source": [
+ "## Step 4: Send Data to the Lambda Function"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1066096a",
+ "metadata": {},
+ "source": [
+ "The following code will send the generated sample data to the Lambda function's URL with SigV4 authenticated requests, ensuring requests do not exceed Lambda's limit of 6 MB."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "63d4070a",
+ "metadata": {},
+ "source": [
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "960ad90c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import requests\n",
+ "import boto3\n",
+ "from botocore.auth import SigV4Auth\n",
+ "from botocore.awsrequest import AWSRequest"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "44bb87a9",
+ "metadata": {},
+ "source": [
+ "### Send Generated Data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e08f884b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "MAX_REQUEST_SIZE = 6 * 1024 * 1024 # 6 MB in bytes\n",
+ "\n",
+ "def send_data_to_lambda(data, precision=\"MILLISECONDS\"):\n",
+ " \"\"\"Sends generated data to the Lambda function in chunks.\"\"\"\n",
+ " region = REGION_NAME\n",
+ " method = \"POST\"\n",
+ "\n",
+ " session = boto3.Session(region_name=region)\n",
+ "\n",
+ " # Calculate the size of the entire data payload\n",
+ " data_payload = json.dumps({'records': data})\n",
+ " total_size = len(data_payload.encode('utf-8'))\n",
+ "\n",
+ " # Check if the total size exceeds the maximum request size\n",
+ " if total_size <= MAX_REQUEST_SIZE:\n",
+ " send_request(session, method, data_payload, precision)\n",
+ " else:\n",
+    "        # Chunk the data if it's too large: estimate how many records fit per request\n",
+    "        # from the average serialized record size, leaving headroom for the JSON envelope\n",
+    "        avg_record_size = total_size / len(data)\n",
+    "        records_per_chunk = max(1, int((MAX_REQUEST_SIZE * 0.9) / avg_record_size))\n",
+    "        chunks = [data[i:i + records_per_chunk] for i in range(0, len(data), records_per_chunk)]\n",
+ " for chunk in chunks:\n",
+ " chunk_payload = json.dumps({'records': chunk})\n",
+ " send_request(session, method, chunk_payload, precision)\n",
+ "\n",
+ "def send_request(session, method, payload, precision=\"MILLISECONDS\"):\n",
+ " \"\"\"Sends the request to the Lambda function.\"\"\"\n",
+ " request = AWSRequest(\n",
+ " method=method,\n",
+ " url=function_url,\n",
+ " params={'precision': precision},\n",
+ " headers={'Content-Type': 'application/json'},\n",
+ " data=payload\n",
+ " )\n",
+ "\n",
+ " SigV4Auth(session.get_credentials(), 'lambda', REGION_NAME).add_auth(request)\n",
+ "\n",
+ " try:\n",
+ " response = requests.request(method, function_url, params={\"precision\": precision}, headers=dict(request.headers), data=payload, timeout=30)\n",
+ " response.raise_for_status()\n",
+ " print(f'Response Status: {response.status_code}')\n",
+ " print(f'Response Body: {response.content.decode(\"utf-8\")}')\n",
+ " except Exception as e:\n",
+ " print(f'Error: {e}')\n",
+ "\n",
+ "# Send sample data to the Lambda function\n",
+ "send_data_to_lambda(sample_data, PRECISION)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dc25f96b",
+ "metadata": {},
+ "source": [
+ "## Step 5: Configure Grafana"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8f9d9ce0",
+ "metadata": {},
+ "source": [
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a279643c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import boto3\n",
+ "from botocore.exceptions import ClientError\n",
+ "import json\n",
+ "import time\n",
+ "import requests"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f10bfbc6",
+ "metadata": {},
+ "source": [
+ "### Create Role for Workspace"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c5fcf4ef",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize IAM and Managed Grafana clients\n",
+ "iam_client = boto3.client('iam', region_name=REGION_NAME)\n",
+ "grafana_client = boto3.client('grafana', region_name=REGION_NAME)\n",
+ "\n",
+ "workspace_role_arn = \"\"\n",
+ "\n",
+ "# Define the trust policy for Amazon Managed Grafana\n",
+ "trust_policy = {\n",
+ " \"Version\": \"2012-10-17\",\n",
+ " \"Statement\": [\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Principal\": {\n",
+ " \"Service\": \"grafana.amazonaws.com\"\n",
+ " },\n",
+ " \"Action\": \"sts:AssumeRole\"\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "workspace_role_name = 'GrafanaWorkspaceRole'\n",
+ "try:\n",
+ " # Create the IAM role\n",
+ " create_role_response = iam_client.create_role(\n",
+ " RoleName=workspace_role_name,\n",
+ " AssumeRolePolicyDocument=json.dumps(trust_policy),\n",
+ " Description=\"Role for Amazon Managed Grafana to access AWS resources\"\n",
+ " )\n",
+ " workspace_role_arn = create_role_response['Role']['Arn']\n",
+ "except iam_client.exceptions.EntityAlreadyExistsException:\n",
+    "    print(f\"Workspace IAM role {workspace_role_name} already exists\")\n",
+ " try:\n",
+ " workspace_role_arn = iam_client.get_role(RoleName=workspace_role_name)['Role']['Arn']\n",
+ " except iam_client.exceptions.NoSuchEntityException:\n",
+ " print(\"Workspace IAM role could not be found\")\n",
+ " raise\n",
+ " \n",
+    "print(f\"Using workspace role with ARN: {workspace_role_arn}\")\n",
+ "\n",
+ "# Define an inline policy for Timestream and CloudWatch read access\n",
+ "inline_policy = {\n",
+ " \"Version\": \"2012-10-17\",\n",
+ " \"Statement\": [\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Action\": [\n",
+ " \"timestream:DescribeEndpoints\",\n",
+ " \"timestream:ListDatabases\"\n",
+ " ],\n",
+ " \"Resource\": \"*\"\n",
+ " },\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Action\": [\n",
+ " \"timestream:Select\"\n",
+ " ],\n",
+ " \"Resource\": f\"arn:aws:timestream:{REGION_NAME}:{account_id}:database/{DATABASE_NAME}/table/{TABLE_NAME}\"\n",
+ " },\n",
+ " {\n",
+ " \"Effect\": \"Allow\",\n",
+ " \"Action\": [\n",
+ " \"timestream:ListTables\"\n",
+ " ],\n",
+ " \"Resource\": f\"arn:aws:timestream:{REGION_NAME}:{account_id}:database/{DATABASE_NAME}\"\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "try:\n",
+ " # Attach the inline policy\n",
+ " iam_client.put_role_policy(\n",
+ " RoleName=workspace_role_name,\n",
+ " PolicyName='GrafanaWorkspaceAccessPolicy',\n",
+ " PolicyDocument=json.dumps(inline_policy)\n",
+ " )\n",
+ " print(f\"Attached inline policy to role {workspace_role_name}\")\n",
+ "except Exception as err:\n",
+    "    print(f\"Failed to attach policy to workspace role: {err}\")\n",
+ " raise"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "112cf6a6",
+ "metadata": {},
+ "source": [
+ "### Create Workspace or Use Existing Workspace"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3b1d7609",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "WORKSPACE_NAME = \"sample_app_workspace\"\n",
+ "MAX_WAIT_SECONDS = 900 # 15 minutes\n",
+ "WAIT_PERIOD_SECONDS = 15\n",
+ "\n",
+ "workspace_id = \"\"\n",
+ "grafana_endpoint_url = \"\"\n",
+ "try:\n",
+ " list_workspaces_response = grafana_client.list_workspaces()\n",
+ " for workspace in list_workspaces_response['workspaces']:\n",
+ " if workspace['name'] == WORKSPACE_NAME:\n",
+ " print(f\"Workspace '{WORKSPACE_NAME}' already exists with ID: {workspace['id']}\")\n",
+ " workspace_id = workspace['id']\n",
+ " current_wait_seconds = 0\n",
+ " if workspace['status'] != 'ACTIVE':\n",
+ " status = \"\"\n",
+ " while current_wait_seconds < MAX_WAIT_SECONDS:\n",
+ " status = grafana_client.describe_workspace(workspaceId=workspace_id)['workspace']['status']\n",
+ " print(f\"Workspace status: {status}\")\n",
+ " if status == 'ACTIVE':\n",
+ " break\n",
+ " time.sleep(WAIT_PERIOD_SECONDS)\n",
+ " current_wait_seconds += WAIT_PERIOD_SECONDS\n",
+ " if current_wait_seconds >= MAX_WAIT_SECONDS and status != 'ACTIVE':\n",
+ " raise Exception(\"Timed out while waiting for workspace to become active\")\n",
+ "\n",
+ "except ClientError as e:\n",
+ " raise Exception(f\"Error checking for workspace: {e}\")\n",
+ "\n",
+ "if not workspace_id:\n",
+ " try:\n",
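+    "        # pluginAdminEnabled lets workspace admins manage plugins, which is needed to install the Timestream plugin later\n",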
+ " configuration = {\n",
+ " \"plugins\": {\n",
+ " \"pluginAdminEnabled\": True,\n",
+ " }\n",
+ " }\n",
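+    "        # The AWS_SSO authentication provider requires IAM Identity Center to be enabled\n",
+    "        # in the account; users are assigned to the workspace in a later step\n",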
+ " create_workspace_response = grafana_client.create_workspace(\n",
+ " accountAccessType='CURRENT_ACCOUNT',\n",
+ " authenticationProviders=['AWS_SSO'],\n",
+ " permissionType='CUSTOMER_MANAGED',\n",
+ " workspaceName=WORKSPACE_NAME,\n",
+ " workspaceRoleArn=workspace_role_arn,\n",
+ " configuration=json.dumps(configuration)\n",
+ " )\n",
+ " except Exception as err:\n",
+    "        print(f\"Failed to create workspace: {err}\")\n",
+    "        raise\n",
+ " workspace_id = create_workspace_response['workspace']['id']\n",
+ " print(f\"Workspace '{WORKSPACE_NAME}' created with ID: {workspace_id}\")\n",
+ "\n",
+ " # Wait until the workspace is active\n",
+ " current_wait_seconds = 0\n",
+ " while current_wait_seconds < MAX_WAIT_SECONDS:\n",
+ " status = grafana_client.describe_workspace(workspaceId=workspace_id)['workspace']['status']\n",
+ " print(f\"Workspace status: {status}\")\n",
+ " if status == 'ACTIVE':\n",
+ " break\n",
+ " time.sleep(WAIT_PERIOD_SECONDS)\n",
+ " current_wait_seconds += WAIT_PERIOD_SECONDS\n",
+ "\n",
+ " if current_wait_seconds >= MAX_WAIT_SECONDS and status != 'ACTIVE':\n",
+ " raise Exception(\"Timed out while waiting for workspace to become active\")\n",
+ "\n",
+ "grafana_workspace = grafana_client.describe_workspace(workspaceId=workspace_id)\n",
+ "grafana_endpoint_url = grafana_workspace['workspace']['endpoint']"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "314e1a4e",
+ "metadata": {},
+ "source": [
+    "### Create Grafana Service Account Token\n",
+    "\n",
+    "A Grafana workspace service account token is required in order to make requests to Grafana to add the Timestream plugin, create the Timestream data source, and upload the dashboard. The following cell creates a service account (if needed) and issues a token for it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2b1d0960",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "workspace_service_account_name = 'admin'\n",
+ "workspace_service_account_id = ''\n",
+ "try:\n",
+    "    create_service_account_response = grafana_client.create_workspace_service_account(grafanaRole='ADMIN', name=workspace_service_account_name, workspaceId=workspace_id)\n",
+ " workspace_service_account_id = create_service_account_response['id']\n",
+ "except grafana_client.exceptions.ConflictException:\n",
+ " print(\"Using existing workspace service account\")\n",
+ " list_workspace_services_accounts_response = grafana_client.list_workspace_service_accounts(\n",
+ " maxResults=200,\n",
+ " workspaceId=workspace_id\n",
+ " )\n",
+ " next_token = list_workspace_services_accounts_response.get('nextToken', '')\n",
+ " for service_account in list_workspace_services_accounts_response['serviceAccounts']:\n",
+ " if service_account['name'] == workspace_service_account_name:\n",
+ " workspace_service_account_id = service_account['id']\n",
+ " if not workspace_service_account_id:\n",
+ " while next_token:\n",
+ " list_workspace_services_accounts_response = grafana_client.list_workspace_service_accounts(\n",
+ " maxResults=200,\n",
+ " workspaceId=workspace_id,\n",
+ " nextToken=next_token\n",
+ " )\n",
+ " for service_account in list_workspace_services_accounts_response['serviceAccounts']:\n",
+ " if service_account['name'] == workspace_service_account_name:\n",
+ " workspace_service_account_id = service_account['id']\n",
+ " if workspace_service_account_id:\n",
+ " break\n",
+ " next_token = list_workspace_services_accounts_response.get('nextToken', '')\n",
+ " if not workspace_service_account_id:\n",
+ " raise Exception(f\"Existing workspace service account with name {workspace_service_account_name} could not be found\")\n",
+ "except Exception as err:\n",
+ " print(f\"An unexpected exception occurred when creating workspace service account: {err}\")\n",
+ " raise\n",
+ "\n",
+ "service_account_token_name = 'admin_token'\n",
+ "# If the token already exists, it must be deleted and recreated. list_workspace_service_account_tokens\n",
+ "# will not return its key.\n",
+ "try:\n",
+ " service_account_token_id = ''\n",
+ " list_service_account_tokens_response = grafana_client.list_workspace_service_account_tokens(\n",
+ " maxResults=200,\n",
+ " workspaceId=workspace_id,\n",
+ " serviceAccountId=workspace_service_account_id\n",
+ " )\n",
+ " next_token = list_service_account_tokens_response.get(\"nextToken\", '')\n",
+ " for service_account_token in list_service_account_tokens_response['serviceAccountTokens']:\n",
+ " if service_account_token['name'] == service_account_token_name:\n",
+ " service_account_token_id = service_account_token['id']\n",
+    "    if not service_account_token_id:\n",
+ " while next_token:\n",
+ " list_service_account_tokens_response = grafana_client.list_workspace_service_account_tokens(\n",
+ " maxResults=200,\n",
+ " workspaceId=workspace_id,\n",
+ " nextToken=next_token,\n",
+ " serviceAccountId=workspace_service_account_id\n",
+ " )\n",
+ " for service_account_token in list_service_account_tokens_response['serviceAccountTokens']:\n",
+ " if service_account_token['name'] == service_account_token_name:\n",
+ " service_account_token_id = service_account_token['id']\n",
+ " if service_account_token_id:\n",
+ " break\n",
+ " next_token = list_service_account_tokens_response.get('nextToken', '')\n",
+ " if service_account_token_id:\n",
+ " grafana_client.delete_workspace_service_account_token(\n",
+ " serviceAccountId=workspace_service_account_id,\n",
+ " tokenId=service_account_token_id,\n",
+ " workspaceId=workspace_id\n",
+ " )\n",
+ "except Exception as err:\n",
+    "    print(f\"An unexpected exception occurred when checking for existing service tokens: {err}\")\n",
+ " raise\n",
+ "\n",
+ "try:\n",
+ " create_token_response = grafana_client.create_workspace_service_account_token(\n",
+ " name=service_account_token_name,\n",
+ " secondsToLive=86400, # 1 day\n",
+ " serviceAccountId=workspace_service_account_id,\n",
+ " workspaceId=workspace_id\n",
+ " )\n",
+ " service_account_token = create_token_response['serviceAccountToken']['key']\n",
+ "except Exception as err:\n",
+ " print(f\"An exception occurred when trying to create a new service token: {err}\")\n",
+ " raise"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6a59e1de",
+ "metadata": {},
+ "source": [
+ "### Add Timestream Plugin to Workspace"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "339901c7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "headers = {\n",
+ " \"Authorization\": f\"Bearer {service_account_token}\",\n",
+ " \"Accept\": \"application/json\",\n",
+ " \"Content-Type\": \"application/json\"\n",
+ "}\n",
+ "\n",
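+    "# Ask the workspace to install the Amazon Timestream data source plugin via Grafana's plugin-management HTTP API\n",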
+ "install_timestream_plugin_response = requests.post(\n",
+ " f\"https://{grafana_endpoint_url}/api/plugins/grafana-timestream-datasource/install\",\n",
+ " headers=headers,\n",
+ ")\n",
+ "\n",
+ "if install_timestream_plugin_response.status_code == 409:\n",
+ " print(\"Amazon Timestream plugin already installed\")\n",
+ "elif not install_timestream_plugin_response.ok:\n",
+ " raise Exception(f\"Failed to install Amazon Timestream plugin for workspace: {install_timestream_plugin_response.content}\")\n",
+ "\n",
+ "current_wait_seconds = 0\n",
+ "while current_wait_seconds < MAX_WAIT_SECONDS:\n",
+ " installed_plugins_response = requests.get(\n",
+ " f\"https://{grafana_endpoint_url}/api/plugins\",\n",
+ " headers=headers\n",
+ " )\n",
+ " if installed_plugins_response.ok:\n",
+ " installed_plugins_response_json = installed_plugins_response.json()\n",
+ " if any(installed_plugin.get('id') == 'grafana-timestream-datasource' for installed_plugin in installed_plugins_response_json):\n",
+ " # Grafana will report the plugin as installed but needs more time\n",
+ " # for the installation to truly finish\n",
+ " time.sleep(WAIT_PERIOD_SECONDS)\n",
+ " print(\"Amazon Timestream plugin installed\")\n",
+ " break\n",
+ " else:\n",
+ " raise Exception(\"Failed to check currently installed plugins\")\n",
+ " print(\"Waiting for the Amazon Timestream plugin to finish installing . . .\")\n",
+ " time.sleep(WAIT_PERIOD_SECONDS)\n",
+ " current_wait_seconds += WAIT_PERIOD_SECONDS\n",
+ "\n",
+ "enable_plugin_payload = {\n",
+ " \"enabled\": True,\n",
+ " \"pinned\": True,\n",
+ " \"json\": None\n",
+ "}\n",
+ "\n",
+ "# Post request to add the data source\n",
+ "add_timestream_plugin_response = requests.post(\n",
+ " f\"https://{grafana_endpoint_url}/api/plugins/grafana-timestream-datasource/settings\",\n",
+ " headers=headers,\n",
+ " data=json.dumps(enable_plugin_payload)\n",
+ ")\n",
+ "\n",
+ "if add_timestream_plugin_response.ok:\n",
+ " print(\"Amazon Timestream plugin enabled\")\n",
+ "elif add_timestream_plugin_response.status_code == 409:\n",
+ " print(\"Amazon Timestream plugin already enabled\")\n",
+ "else:\n",
+ " raise Exception(f\"Failed to enable Amazon Timestream plugin for workspace: {add_timestream_plugin_response.content}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "06f5d5e3",
+ "metadata": {},
+ "source": [
+ "### Add Timestream Grafana Data Source"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "362e8e86",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "headers = {\n",
+ " \"Authorization\": f\"Bearer {service_account_token}\",\n",
+ " \"Accept\": \"application/json\",\n",
+ " \"Content-Type\": \"application/json\"\n",
+ "}\n",
+ "\n",
+ "grafana_data_source_name = \"Amazon Timestream for LiveAnalytics Sample Data Source\"\n",
+ "\n",
+ "# Timestream data source payload\n",
+ "data_source_payload = {\n",
+ " \"name\": grafana_data_source_name,\n",
+ " \"type\": \"grafana-timestream-datasource\",\n",
+ " \"access\": \"proxy\",\n",
+ " \"jsonData\": {\n",
+ " \"defaultRegion\": REGION_NAME,\n",
+ " \"database\": DATABASE_NAME,\n",
+ " \"table\": TABLE_NAME,\n",
+ " \"authenticationType\": \"AWS_IAM\"\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Post request to add the data source\n",
+ "create_data_source_response = requests.post(\n",
+ " f\"https://{grafana_endpoint_url}/api/datasources\",\n",
+ " headers=headers,\n",
+ " data=json.dumps(data_source_payload)\n",
+ ")\n",
+ "\n",
+ "data_source_id = \"\"\n",
+ "if create_data_source_response.ok:\n",
+ " print(\"Amazon Timestream for LiveAnalytics data source added successfully.\")\n",
+ " data_source_id = create_data_source_response.json()['id']\n",
+ "elif create_data_source_response.status_code == 409: # Conflict - Data source already exists\n",
+ " print(\"Amazon Timestream for LiveAnalytics data source already exists.\")\n",
+ " data_source_id_response = requests.get(f\"https://{grafana_endpoint_url}/api/datasources/name/{grafana_data_source_name}\", headers=headers)\n",
+ " if data_source_id_response.ok:\n",
+ " data_source_id = data_source_id_response.json().get(\"id\")\n",
+ " else:\n",
+ " raise Exception(f\"Failed to get ID of existing {grafana_data_source_name} data source\")\n",
+ "else:\n",
+ " raise Exception(f\"Failed to add Timestream data source: {create_data_source_response.content}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d3293d5",
+ "metadata": {},
+ "source": [
+ "### Generate and Upload Grafana Dashboard"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "603364f0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "headers = {\n",
+ " \"Authorization\": f\"Bearer {service_account_token}\",\n",
+ " \"Accept\": \"application/json\",\n",
+ " \"Content-Type\": \"application/json\"\n",
+ "}\n",
+ "\n",
+ "# Each measure will have a panel\n",
+ "panels = []\n",
+ "for i, measure_template in enumerate(data_generator.measure_templates):\n",
+ " measure_name = measure_template['name']\n",
+ " query = \"\"\n",
+ " if len(data_generator.measure_templates) > 1:\n",
+ " query = f\"SELECT time, {measure_name}, {', '.join(dimension_template['name'] for dimension_template in data_generator.dimension_templates)} FROM \\\"{DATABASE_NAME}\\\".\\\"{TABLE_NAME}\\\" ORDER BY time ASC\"\n",
+ " elif len(data_generator.measure_templates) == 1:\n",
+ " query = f\"SELECT * FROM \\\"{DATABASE_NAME}\\\".\\\"{TABLE_NAME}\\\" ORDER BY time ASC\"\n",
+ " \n",
+ " panel = {\n",
+ " \"datasource\": grafana_data_source_name,\n",
+ " \"fieldConfig\": {\n",
+ " \"defaults\": {\n",
+ " \"color\": {\n",
+ " \"mode\": \"palette-classic\"\n",
+ " },\n",
+ " \"custom\": {\n",
+ " \"axisBorderShow\": False,\n",
+ " \"axisCenteredZero\": False,\n",
+ " \"axisColorMode\": \"text\",\n",
+ " \"axisLabel\": \"\",\n",
+ " \"axisPlacement\": \"auto\",\n",
+ " \"barAlignment\": 0,\n",
+ " \"barWidthFactor\": 0.6,\n",
+ " \"drawStyle\": \"line\",\n",
+ " \"fillOpacity\": 0,\n",
+ " \"gradientMode\": \"none\",\n",
+ " \"hideFrom\": {\n",
+ " \"legend\": False,\n",
+ " \"tooltip\": False,\n",
+ " \"viz\": False\n",
+ " },\n",
+ " \"insertNulls\": False,\n",
+ " \"lineInterpolation\": \"linear\",\n",
+ " \"lineWidth\": 1,\n",
+ " \"pointSize\": 5,\n",
+ " \"scaleDistribution\": {\n",
+ " \"type\": \"linear\"\n",
+ " },\n",
+ " \"showPoints\": \"auto\",\n",
+ " \"spanNulls\": False,\n",
+ " \"stacking\": {\n",
+ " \"group\": \"A\",\n",
+ " \"mode\": \"none\"\n",
+ " },\n",
+ " \"thresholdsStyle\": {\n",
+ " \"mode\": \"off\"\n",
+ " }\n",
+ " },\n",
+ " \"mappings\": [],\n",
+ " \"thresholds\": {\n",
+ " \"mode\": \"absolute\",\n",
+ " \"steps\": [\n",
+ " {\n",
+ " \"color\": \"green\",\n",
+ " \"value\": None\n",
+ " },\n",
+ " {\n",
+ " \"color\": \"red\",\n",
+ " \"value\": 80\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ " },\n",
+ " \"overrides\": []\n",
+ " },\n",
+ " \"gridPos\": {\n",
+ " \"h\": 22,\n",
+ " \"w\": 20,\n",
+ " \"x\": 0,\n",
+ " \"y\": 0\n",
+ " },\n",
+ " \"id\": i + 1,\n",
+ " \"options\": {\n",
+ " \"legend\": {\n",
+ " \"calcs\": [],\n",
+ " \"displayMode\": \"list\",\n",
+ " \"placement\": \"bottom\",\n",
+ " \"showLegend\": True\n",
+ " },\n",
+ " \"tooltip\": {\n",
+ " \"mode\": \"single\",\n",
+ " \"sort\": \"none\"\n",
+ " }\n",
+ " },\n",
+ " \"targets\": [\n",
+ " {\n",
+ " \"datasource\": grafana_data_source_name,\n",
+ " \"format\": 1,\n",
+ " \"hide\": False,\n",
+ " \"measure\": \"\",\n",
+ " \"rawQuery\": query,\n",
+ " \"refId\": \"A\"\n",
+ " }\n",
+ " ],\n",
+ " \"title\": f\"{measure_name}\",\n",
+ " \"type\": \"timeseries\"\n",
+ " }\n",
+ " panels.append(panel)\n",
+ "\n",
+ "\n",
+ "dashboard = {\n",
+ " \"annotations\": {\n",
+ " \"list\": [\n",
+ " {\n",
+ " \"builtIn\": 1,\n",
+ " \"datasource\": {\n",
+ " \"type\": \"grafana\",\n",
+ " \"uid\": \"-- Grafana --\"\n",
+ " },\n",
+ " \"enable\": True,\n",
+ " \"hide\": True,\n",
+ " \"iconColor\": \"rgba(0, 211, 255, 1)\",\n",
+ " \"name\": \"Annotations & Alerts\",\n",
+ " \"type\": \"dashboard\"\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " \"editable\": True,\n",
+ " \"fiscalYearStartMonth\": 0,\n",
+ " \"graphTooltip\": 0,\n",
+ " \"links\": [],\n",
+ " \"panels\": panels,\n",
+ " \"schemaVersion\": 39,\n",
+ " \"tags\": [],\n",
+ " \"templating\": {\n",
+ " \"list\": []\n",
+ " },\n",
+ " \"time\": {\n",
+ " \"from\": \"now-15m\",\n",
+ " \"to\": \"now\"\n",
+ " },\n",
+ " \"timepicker\": {},\n",
+ " \"timezone\": \"\",\n",
+ " \"title\": \"Amazon Timestream for LiveAnalytics Sample Dashboard\",\n",
+ " \"uid\": \"de0yzhhg7xo8wd\",\n",
+ " \"version\": 2,\n",
+ " \"weekStart\": \"\"\n",
+ "}\n",
+ "\n",
+ "# Write the dashboard JSON to a local file in order to\n",
+ "# upload manually\n",
+ "#with open('sample_app_dashboard.json', 'w') as f:\n",
+ "# json.dump(dashboard, f)\n",
+ "\n",
+ "dashboard_payload = {\n",
+ " \"dashboard\": dashboard,\n",
+ " \"overwrite\": True, # Ensures replacement if it exists\n",
+ " \"id\": None,\n",
+ " \"uid\": None\n",
+ "}\n",
+ "\n",
+ "create_dashboard_response = requests.post(\n",
+ " f\"https://{grafana_endpoint_url}/api/dashboards/db\",\n",
+ " headers=headers,\n",
+ " data=json.dumps(dashboard_payload)\n",
+ ")\n",
+ "\n",
+ "if create_dashboard_response.ok:\n",
+ " print(f\"Dashboard deployed successfully\")\n",
+ " print(f\"Workspace login url: https://{grafana_endpoint_url}/login\")\n",
+ "else:\n",
+ " print(f\"Failed to deploy dashboard: {create_dashboard_response.content}\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dc2024be",
+ "metadata": {},
+ "source": [
+ "### Add IAM Identity User to Grafana Workspace\n",
+ "\n",
+    "This step must be done using the AWS Management Console. An IAM Identity Center user must be assigned to the Grafana workspace; only assigned users are able to log in to the workspace.\n",
+ "\n",
+ "#### Create IAM Identity User\n",
+ "\n",
+    "If you already have an IAM Identity Center user you want to use to log in to the workspace, skip to the next section.\n",
+ "\n",
+ "1. [Go to the IAM Identity Center console](https://console.aws.amazon.com/singlesignon/home).\n",
+ "2. In the navigation pane, choose **Users**.\n",
+ "3. Choose **Add user**.\n",
+ "4. Input user details.\n",
+ "5. Choose **Next**.\n",
+ "6. Add the user to a group if you wish.\n",
+ "7. Choose **Next**.\n",
+ "8. Choose **Add user**.\n",
+ "\n",
+    "#### Add IAM Identity User to Workspace\n",
+ "\n",
+    "1. [Go to the Amazon Managed Grafana console](https://console.aws.amazon.com/grafana/home).\n",
+ "2. In the navigation pane, choose **All workspaces**.\n",
+ "3. From the list of workspaces, choose the created workspace. By default, it is named `sample_app_workspace`.\n",
+    "4. In the **Authentication** tab, under **AWS IAM Identity Center (successor to AWS SSO)**, choose **Assign new user or group**.\n",
+    "5. From the list of users, choose the user(s) you want to allow to log in to the workspace and then choose **Assign users and groups**.\n",
+    "6. By default, users are added as a Viewer. If you want to allow your user to manage data sources in Grafana, select your user, then, in the **Action** dropdown menu, select **Make admin**.\n",
+    "7. Go to the login page output by the previous cell and enter your user's username and password to sign in to the workspace.\n",
+    "\n",
+    "If you prefer to script this assignment, an optional sketch using `boto3` follows this cell."
+ ]
+ },
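+  {
+   "cell_type": "markdown",
+   "id": "a1f2e3d4",
+   "metadata": {},
+   "source": [
+    "#### Optional: Assign the User with boto3\n",
+    "\n",
+    "As an optional sketch, the same assignment can be made programmatically with the Amazon Managed Grafana `UpdatePermissions` API instead of the console, assuming you already know the user's IAM Identity Center user ID (shown on the user's detail page in the IAM Identity Center console). The `SSO_USER_ID` and `GRAFANA_ROLE` values in the next cell are placeholders you must replace; the cell reuses `workspace_id` and `REGION_NAME` from earlier cells."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "b5c6d7e8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import boto3\n",
+    "\n",
+    "# Placeholder values: replace with your IAM Identity Center user ID and the\n",
+    "# Grafana role you want to grant ('ADMIN', 'EDITOR', or 'VIEWER')\n",
+    "SSO_USER_ID = \"<your-identity-center-user-id>\"\n",
+    "GRAFANA_ROLE = \"ADMIN\"\n",
+    "\n",
+    "grafana_client = boto3.client('grafana', region_name=REGION_NAME)\n",
+    "\n",
+    "# Assign the user to the workspace created earlier in this notebook\n",
+    "update_permissions_response = grafana_client.update_permissions(\n",
+    "    workspaceId=workspace_id,\n",
+    "    updateInstructionBatch=[\n",
+    "        {\n",
+    "            'action': 'ADD',\n",
+    "            'role': GRAFANA_ROLE,\n",
+    "            'users': [\n",
+    "                {\n",
+    "                    'id': SSO_USER_ID,\n",
+    "                    'type': 'SSO_USER'\n",
+    "                }\n",
+    "            ]\n",
+    "        }\n",
+    "    ]\n",
+    ")\n",
+    "\n",
+    "if update_permissions_response.get('errors'):\n",
+    "    print(f\"Failed to assign user: {update_permissions_response['errors']}\")\n",
+    "else:\n",
+    "    print(f\"Assigned user {SSO_USER_ID} to workspace {workspace_id} as {GRAFANA_ROLE}\")"
+   ]
+  },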
+ {
+ "cell_type": "markdown",
+ "id": "93cd1479",
+ "metadata": {},
+ "source": [
+ "### Viewing the Dashboard\n",
+ "\n",
+ "1. Log in to the Amazon Managed Grafana workspace.\n",
+ "2. In the navigation pane, select **Dashboards**.\n",
+ "3. From the list of dashboards, select the deployed dashboard. By default, it is named `Amazon Timestream for LiveAnalytics Sample Dashboard`.\n",
+ "4. Adjust the time range as needed. By default, all data for the last 15 minutes is displayed. Timestamps are in UTC.\n",
+    "5. Measures are listed below the graph; select measures to display them."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "sample_app_env",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.7"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/integrations/lambda/environment.yml b/integrations/lambda/environment.yml
new file mode 100644
index 00000000..662a694d
--- /dev/null
+++ b/integrations/lambda/environment.yml
@@ -0,0 +1,8 @@
+name: sample_app_env
+channels:
+ - conda-forge
+dependencies:
+ - python=3.12
+ - pip
+ - pip:
+ - -r requirements.txt
diff --git a/integrations/lambda/img/lambda_ingestion_overview.png b/integrations/lambda/img/lambda_ingestion_overview.png
new file mode 100644
index 00000000..96c27570
Binary files /dev/null and b/integrations/lambda/img/lambda_ingestion_overview.png differ
diff --git a/integrations/lambda/requirements.txt b/integrations/lambda/requirements.txt
new file mode 100644
index 00000000..b1968c03
--- /dev/null
+++ b/integrations/lambda/requirements.txt
@@ -0,0 +1,2 @@
+boto3==1.35.51
+requests==2.32.3