Storm is a microservice for managing user status updates, built around an event-driven architecture. The service is written in Go and uses gRPC for its client-facing interface and Kafka for event streaming.

It is designed to be lightweight, scalable, and production-ready, with Kubernetes as its orchestration layer. This documentation covers:
- Local development setup.
- Production deployment approaches:
  - End-to-End Automation (one command).
  - Manual Steps with Automation.
To run or deploy the project, ensure you have the following installed:
For local development:

| Tool | Version | Purpose |
|---|---|---|
| Go | 1.22.7+ | Building and running the service. |
| Docker + Compose | 27.1.1+ | Managing dependencies locally. |
| Protobuf | 28.3+ | Generating gRPC code from `.proto` files. |
For production deployment:

| Tool | Version | Purpose |
|---|---|---|
| Terraform | v1.9.6+ | Automating resource provisioning. |
| Ansible | 2.17.1+ | Setting up Kubernetes clusters. |
| kubectl | v1.31.2+ | Managing Kubernetes clusters. |
| Python 3 + pip | 3.12.4+ | Running auxiliary scripts. |
| Bash | Latest | Executing automation scripts. |
- Clone the Repository:

  ```sh
  git clone https://github.com/0xAFz/storm.git
  cd storm
  ```

- Start Dependencies (Kafka):

  ```sh
  docker compose up -d
  ```

- Set Up Environment Variables:

  ```sh
  cp .env.example .env
  vim .env
  ```

  Update the values in `.env` as needed (e.g., Kafka broker addresses, service ports); see the configuration-loading sketch after this list.

- Generate gRPC Code: compile the `.proto` files into Go code:

  ```sh
  make proto
  ```

- Run the Application:

  ```sh
  make run
  # or: go run main.go
  ```

- Test the Service: use `grpcurl` to list the exposed services:

  ```sh
  grpcurl -plaintext localhost:50051 list
  ```
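How the service reads these values is up to its implementation; as a minimal sketch, assuming the hypothetical variable names `KAFKA_BROKERS` and `GRPC_PORT` and the `github.com/joho/godotenv` package (whether Storm actually uses godotenv is an assumption), configuration loading might look like this:

```go
package config

import (
	"fmt"
	"os"
	"strings"

	"github.com/joho/godotenv" // assumption: Storm may load .env differently
)

// Config holds the settings the service reads at startup.
type Config struct {
	KafkaBrokers []string
	GRPCPort     string
}

// Load reads .env (if present), then the process environment.
// KAFKA_BROKERS and GRPC_PORT are hypothetical variable names.
func Load() (*Config, error) {
	_ = godotenv.Load() // ignore error: .env is optional in production

	brokers := os.Getenv("KAFKA_BROKERS")
	port := os.Getenv("GRPC_PORT")
	if brokers == "" || port == "" {
		return nil, fmt.Errorf("KAFKA_BROKERS and GRPC_PORT must be set")
	}
	return &Config{
		KafkaBrokers: strings.Split(brokers, ","),
		GRPCPort:     port,
	}, nil
}
```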
The event flow is simple:

- A client sends a status update, which is received via the gRPC interface.
- A Kafka event is published to the specified topic.
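A minimal sketch of this flow is shown below. The generated stub package (`statusv1`), the message fields, and the Kafka client (`github.com/segmentio/kafka-go`) are assumptions modeled on the `grpcurl` example later in this document, not necessarily Storm's actual implementation:

```go
package server

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/segmentio/kafka-go"

	statusv1 "github.com/0xAFz/storm/gen/status/v1" // hypothetical generated package path
)

// Server implements the status.v1.Status gRPC service.
type Server struct {
	statusv1.UnimplementedStatusServer
	writer *kafka.Writer // writes to the status topic
}

// UpdateStatus receives a status update over gRPC and publishes it to Kafka.
func (s *Server) UpdateStatus(ctx context.Context, req *statusv1.UpdateStatusRequest) (*statusv1.UpdateStatusResponse, error) {
	payload, err := json.Marshal(map[string]any{
		"user_id": req.GetUserId(),
		"status":  req.GetStatus(),
	})
	if err != nil {
		return nil, err
	}
	// Key by user ID so updates for one user stay ordered within a partition.
	msg := kafka.Message{
		Key:   []byte(fmt.Sprintf("%d", req.GetUserId())),
		Value: payload,
	}
	if err := s.writer.WriteMessages(ctx, msg); err != nil {
		return nil, err
	}
	return &statusv1.UpdateStatusResponse{Message: "ok"}, nil
}
```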
The main components:

| Component | Description |
|---|---|
| gRPC Service | Handles incoming user status updates. |
| Kafka | Stores and forwards events for consumers. |
| Configurations | Managed via `.env` files for simplicity. |
| Kubernetes | Ensures the service is scalable and fault-tolerant in production. |
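Downstream consumers live outside this repository; purely as an illustration, a consumer of these events might look like the following sketch (again using `github.com/segmentio/kafka-go`, with the broker address, topic name, and group ID all assumed):

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Broker address, topic, and group ID are assumptions for illustration.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "status-updates",
		GroupID: "status-consumer",
	})
	defer r.Close()

	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		log.Printf("user %s -> %s", m.Key, m.Value)
	}
}
```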
Production deployment supports two approaches: fully automated, or manual with partial automation.

The first approach, end-to-end automation, handles everything from VM provisioning to Kubernetes resource deployment with a single command.
- Navigate to the Production Directory:

  ```sh
  cd prod/
  ```

- Set Up Environment Variables:

  ```sh
  cp .env.example .env
  vim .env
  cp terraform/compute/.env.example terraform/compute/.env
  vim terraform/compute/.env
  cp terraform/storm/.env.example terraform/storm/.env
  vim terraform/storm/.env
  ```

  Update the values for:

  - OpenStack credentials (for Terraform).
  - Cloudflare credentials (for DNS management).
  - GitLab repository credentials (for the container registry).
  - Kubernetes configuration.

- Set Up Ansible Variables:

  ```sh
  vim ansible/inventory/group_vars/...
  ```

  Replace the placeholders with actual values.

- Run the Deployment Script:

  ```sh
  ./deploy.sh up
  ```

- Clean Up All Resources:

  ```sh
  ./deploy.sh down
  ```
Behind the scenes, the deployment script does the following:

- Terraform provisions VMs on OpenStack.
- A Python script generates an Ansible inventory.
- A Python script adds DNS records on Cloudflare.
- Ansible:
  - Installs Kubernetes (K3s).
  - Configures the cluster (e.g., ingress-nginx, cert-manager).
- Terraform deploys:
  - Kafka Cluster: a highly available Kafka setup.
  - Storm Microservice: configuration, deployment, service, and ingress.
The second approach, manual steps with automation, allows more flexibility and supports different cloud providers.
- Manually Create VMs: create VMs with your preferred cloud provider, and ensure DNS records (e.g., `A storm.domain.tld 192.168.1.100`) are configured.

- Generate the Ansible Inventory: update the inventory file with your server details:

  ```sh
  vim ansible/inventory/hosts.yml
  ```

- Set Up Ansible Variables:

  ```sh
  vim ansible/inventory/group_vars/...
  ```

  Replace the placeholders with actual values.

- Run the Ansible Playbook to install Kubernetes on your nodes:

  ```sh
  ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/cluster.yml
  ```

- Set the Config for kubectl: the playbook generates a kubeconfig on your local machine at `~/.kube/storm/config`:

  ```sh
  export KUBECONFIG=~/.kube/storm/config
  ```

- Deploy Kafka using the Strimzi operator:

  ```sh
  # deploy the strimzi operator using helm
  helm repo add strimzi https://strimzi.io/charts/
  helm repo update
  # create a namespace for kafka
  kubectl create namespace kafka
  helm install strimzi-operator strimzi/strimzi-kafka-operator -n kafka
  # apply the kafka cluster manifest
  kubectl apply -f k8s/kafka/kafka-cluster.yml
  ```

- Deploy the Storm Microservice by applying its resources:

  ```sh
  kubectl apply -f k8s/storm/storm.yml
  ```
Alternatively, deploy both with Terraform:

- Deploy Kafka:

  ```sh
  terraform -chdir=terraform/kafka init
  terraform -chdir=terraform/kafka apply
  ```

- Deploy the Storm Microservice:

  ```sh
  terraform -chdir=terraform/storm init
  terraform -chdir=terraform/storm apply
  ```
After deployment, verify that everything is running:

- Verify Kafka is Running:

  ```sh
  kubectl get po -n kafka
  ```

- Check the Storm Service: ensure the service is running:

  ```sh
  kubectl get po -n storm
  ```

- Test Service Endpoints:

  - List gRPC methods:

    ```sh
    grpcurl storm.domain.tld:443 list
    ```

  - Send a request:

    ```sh
    grpcurl -d '{"user_id": 1234, "status": true}' storm.domain.tld:443 status.v1.Status/UpdateStatus
    ```
- Expected Response:

  ```json
  { "message": "ok" }
  ```