
Projet de Fin d'Études (Graduation Project): Microservice Instance Load Balancing in Fog Computing Using DRL


[System architecture diagram: system_archi.drawio]

Fog Computing Setup:

In this research, we built a bare-metal fog environment using k3s (a lightweight Kubernetes distribution) on multiple Raspberry Pi 4 devices connected via Wi-Fi, in a single-server setup with an embedded database. The k3s cluster consists of:

  1. Control plane node (laptop): manages the overall cluster and handles scheduling and scaling of pods. It is also where the load balancer is deployed.

  2. Multiple worker nodes (Raspberry Pi 4 devices): responsible for running the microservice instances. Each worker node runs a set of microservice pods.

Cluster Monitoring & Metrics Collection for the Model

To monitor the performance of the fog computing environment, we integrated Prometheus for data collection and Grafana for visualization.

  1. Prometheus gathered metrics such as CPU and memory usage for nodes and pods, providing real-time insight into resource consumption and system health.

  2. Grafana visualized these metrics through customizable dashboards, allowing real-time monitoring of resource utilization, service performance, and system alerts.
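To give an idea of how such metrics can be fed to the model, the sketch below queries Prometheus' HTTP API (/api/v1/query) for per-pod CPU and memory usage. The Prometheus address and the exact PromQL expressions are illustrative assumptions, not the project's actual queries.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class PrometheusMetricsClient {

    // Assumed in-cluster Prometheus address; adjust to your deployment.
    private static final String PROMETHEUS = "http://prometheus.monitoring.svc:9090";

    private final HttpClient http = HttpClient.newHttpClient();

    /** Runs an instant PromQL query against Prometheus' HTTP API and returns the raw JSON body. */
    public String query(String promql) throws Exception {
        String url = PROMETHEUS + "/api/v1/query?query="
                + URLEncoder.encode(promql, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        PrometheusMetricsClient client = new PrometheusMetricsClient();
        // Per-pod CPU usage (cores) over the last minute, from cAdvisor metrics.
        System.out.println(client.query("sum(rate(container_cpu_usage_seconds_total[1m])) by (pod)"));
        // Per-pod working-set memory in bytes.
        System.out.println(client.query("sum(container_memory_working_set_bytes) by (pod)"));
    }
}
```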

Microservices:

  1. The system consists of three microservices built with Spring Boot, each exposing a common API endpoint at /api/test (a minimal controller sketch is shown after this list).

  2. These microservices are containerized and built for the linux/arm64/v8 architecture supported by the Raspberry Pi 4, and the images are stored in the public Docker Hub repository madd47emz/pfe-microservices so that they are available to all fog nodes.

  3. Each microservice is deployed as a Kubernetes NodePort service with:

externalTrafficPolicy: Local ## prevents forwarding requests to pods on other nodes; this service configuration (.yaml) effectively disables the default Kubernetes load-balancing behaviour.
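For reference, a minimal Spring Boot controller exposing the shared /api/test endpoint could look like the sketch below. The class and package details are illustrative, not the project's actual code; only the endpoint path comes from this README.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class TestController {

    // Common endpoint exposed by all three microservices; only the workload
    // executed inside this handler differs between the ms1, ms2 and ms3 images.
    @GetMapping("/test")
    public String test() {
        return "ok"; // variant-specific workload goes here (see the sketches below)
    }
}
```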

To make the simulation more effective, each microservice performs a different workload depending on its image tag, as described below:

First Microservice

• Tag: madd47emz/pfe-microservices:ms1
• Workload: This microservice performs a light load when the API endpoint is requested. It allocates 50 MB of RAM and executes a bubble sort on 1,000 elements (see the sketch below).
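A minimal sketch of what the ms1 handler body could look like, assuming the 50 MB allocation is a plain byte array and the sort runs over 1,000 random integers; the method name and details are illustrative, not the project's actual code.

```java
/** Light workload for ms1 (illustrative sketch): hold ~50 MB in memory
 *  and bubble-sort 1,000 random integers. */
public String lightWorkload() {
    byte[] memory = new byte[50 * 1024 * 1024];            // allocate ~50 MB
    int[] data = new java.util.Random().ints(1_000).toArray();
    for (int i = 0; i < data.length - 1; i++) {             // classic bubble sort
        for (int j = 0; j < data.length - 1 - i; j++) {
            if (data[j] > data[j + 1]) {
                int tmp = data[j];
                data[j] = data[j + 1];
                data[j + 1] = tmp;
            }
        }
    }
    return "ms1 done (" + memory.length + " bytes held, " + data.length + " elements sorted)";
}
```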

Second Microservice

• Tag: madd47emz/pfe-microservices:ms2
• Workload: This microservice performs a medium load, allocating 200 MB of RAM when processing requests at the API endpoint.

Third Microservice

• Tag: madd47emz/pfe-microservices:ms3
• Workload: This microservice performs a heavy load by allocating 500 MB of RAM and calculating 1,000,000 prime numbers when the API endpoint /api/test is triggered (see the sketch below).
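A minimal sketch of the ms3 workload, assuming "calculating 1,000,000 prime numbers" means finding the first 1,000,000 primes by trial division and that the 500 MB is held as a plain byte array; the actual project implementation may differ.

```java
/** Heavy workload for ms3 (illustrative sketch): hold ~500 MB in memory and
 *  compute the first 1,000,000 primes by trial division. */
public String heavyWorkload() {
    byte[] memory = new byte[500 * 1024 * 1024];   // allocate ~500 MB
    int found = 0;
    long candidate = 1;
    while (found < 1_000_000) {                    // stop after the millionth prime
        candidate++;
        if (isPrime(candidate)) {
            found++;
        }
    }
    return "ms3 done (" + memory.length + " bytes held, last prime = " + candidate + ")";
}

private boolean isPrime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; d++) {
        if (n % d == 0) return false;
    }
    return true;
}
```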
