feat(lb): AWS Load Balancer Controller (#90)
Young-ook authored Jul 31, 2021
1 parent 1895930 commit 64abacd
Showing 14 changed files with 688 additions and 0 deletions.
83 changes: 83 additions & 0 deletions examples/lb/README.md
@@ -0,0 +1,83 @@
# AWS Load Balancer Controller
The AWS Load Balancer Controller helps manage Elastic Load Balancers for a Kubernetes cluster.
- It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
- It satisfies Kubernetes Service resources by provisioning Network Load Balancers.

The controller makes it easy for users to take advantage of managed load balancing. For more details, please visit the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) project.
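
For example, annotating a `Service` of type `LoadBalancer` asks the controller to provision an NLB for it. A minimal sketch, assuming a hypothetical `sample` app (annotation names and values vary by controller version, so check the project documentation):
```sh
# Hypothetical Service; the annotation below asks the AWS Load Balancer
# Controller (v2.x) to provision an NLB with IP targets. Annotation values
# are version-dependent; check the project docs for your release.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sample-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: sample
  ports:
    - port: 80
      targetPort: 8080
EOF
```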

## Download example
Download this example to your workspace:
```sh
git clone https://github.com/Young-ook/terraform-aws-eks
cd terraform-aws-eks/examples/lb
```

## Setup
[This](https://github.com/Young-ook/terraform-aws-eks/blob/main/examples/lb/main.tf) is an example Terraform configuration that creates an EKS cluster and installs the load balancer controller on it.

Run terraform:
```sh
terraform init
terraform apply
```
You can also use the `-var-file` option to pass customized parameters when you run the terraform plan/apply commands.
```sh
terraform plan -var-file tc1.tfvars
terraform apply -var-file tc1.tfvars
```

### Update kubeconfig
Update and download the kubernetes config file to your local machine. After `terraform apply` completes, the output includes a bash command like the one below. Copy and run it to save the kubernetes configuration file to your local workspace, then export the `KUBECONFIG` environment variable so it takes effect in your terminal session.
```sh
bash -e .terraform/modules/eks/script/update-kubeconfig.sh -r ap-northeast-2 -n eks-lbc -k kubeconfig
export KUBECONFIG=kubeconfig
```
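
To confirm the kubeconfig works, list the cluster nodes:
```sh
# Node names and count depend on your node group configuration
kubectl get nodes
```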

## Verify
Once all steps are finished, check that the controller pods are `Ready` in the `kube-system` namespace. Ensure the `aws-load-balancer-controller` deployment is created and running:

```sh
kubectl get deployment -n kube-system aws-load-balancer-controller
```
Output:
```
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 84s
```

For more details, please refer to the [lb-controller](https://github.com/Young-ook/terraform-aws-eks/blob/main/modules/lb-controller) module documentation.

## Application
You can run a sample application on the cluster. Deploy the game 2048 as a sample application to verify that the AWS Load Balancer Controller creates an AWS ALB in response to the Ingress object.
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/2048/2048_full.yaml
```

After a few minutes, verify that the Ingress resource was created with the following command. The output shows the DNS address of the provisioned application load balancer (ALB). Copy the address from the output and open it in a web browser.
```sh
kubectl -n game-2048 get ing
```
Output:
```
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2048 <none> * k8s-game2048-ingress2-9e5ab32c61-1003956951.ap-northeast-2.elb.amazonaws.com 80 29s
```
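
You can also check the endpoint from the terminal; for example, read the ALB hostname from the Ingress status and probe it with curl:
```sh
# The ALB may take a couple of minutes to pass health checks
ADDR=$(kubectl -n game-2048 get ing ingress-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://${ADDR}"
```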

![aws-ec2-lbc-game-2048](../../images/aws-ec2-lbc-game-2048.png)

## Clean up
### Remove Application
Delete the sample application from the cluster:
```sh
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/2048/2048_full.yaml
```

### Remove Infrastructure
Run terraform to destroy infrastructure:
```sh
terraform destroy
```
Don't forget to use the `-var-file` option when you run the terraform destroy command, so that the AWS resources created with extra variable files are also deleted.
```sh
terraform destroy -var-file tc1.tfvars
```
24 changes: 24 additions & 0 deletions examples/lb/default.auto.tfvars
@@ -0,0 +1,24 @@
aws_region = "ap-northeast-2"
azs = ["ap-northeast-2a", "ap-northeast-2b", "ap-northeast-2c"]
cidr = "10.1.0.0/16"
enable_igw = true
enable_ngw = true
single_ngw = true
name = "eks-lbc"
tags = {
env = "dev"
}
kubernetes_version = "1.20"
node_groups = [
{
name = "spot"
min_size = 1
max_size = 3
desired_size = 1
instance_type = "t3.large"
instances_distribution = {
spot_allocation_strategy = "lowest-price"
spot_max_price = "0.036"
}
}
]
50 changes: 50 additions & 0 deletions examples/lb/main.tf
@@ -0,0 +1,50 @@
# Amazon EKS with AWS LoadBalancers

terraform {
required_version = "0.13.5"
}

provider "aws" {
region = var.aws_region
}

# vpc
module "vpc" {
source = "Young-ook/spinnaker/aws//modules/spinnaker-aware-aws-vpc"
name = var.name
tags = merge(var.tags, module.eks.tags.shared)
azs = var.azs
cidr = var.cidr
enable_igw = var.enable_igw
enable_ngw = var.enable_ngw
single_ngw = var.single_ngw
vpc_endpoint_config = []
}

# eks
module "eks" {
source = "Young-ook/eks/aws"
name = var.name
tags = var.tags
subnets = values(module.vpc.subnets["private"])
kubernetes_version = var.kubernetes_version
managed_node_groups = var.managed_node_groups
node_groups = var.node_groups
fargate_profiles = var.fargate_profiles
}

provider "helm" {
kubernetes {
host = module.eks.helmconfig.host
token = module.eks.helmconfig.token
cluster_ca_certificate = base64decode(module.eks.helmconfig.ca)
}
}

module "lb-controller" {
source = "../../modules/lb-controller"
enabled = module.eks.features.managed_node_groups_enabled || module.eks.features.node_groups_enabled
cluster_name = module.eks.cluster.name
oidc = module.eks.oidc
tags = var.tags
}
19 changes: 19 additions & 0 deletions examples/lb/outputs.tf
@@ -0,0 +1,19 @@
output "eks" {
description = "The generated AWS EKS cluster"
value = module.eks.cluster
}

output "role" {
description = "The generated role of the EKS node group"
value = module.eks.role
}

output "kubeconfig" {
description = "Bash script to update the kubeconfig file for the EKS cluster"
value = module.eks.kubeconfig
}

output "features" {
description = "Features configurations of the AWS EKS cluster"
value = module.eks.features
}
21 changes: 21 additions & 0 deletions examples/lb/tc1.tfvars
@@ -0,0 +1,21 @@
aws_region = "ap-northeast-2"
azs = ["ap-northeast-2a", "ap-northeast-2b", "ap-northeast-2c"]
cidr = "10.1.0.0/16"
enable_igw = true
enable_ngw = true
single_ngw = true
name = "eks-lbc-tc1"
tags = {
env = "dev"
test = "tc1"
}
kubernetes_version = "1.20"
node_groups = [
{
name = "default"
min_size = 1
max_size = 3
desired_size = 1
instance_type = "t3.large"
}
]
35 changes: 35 additions & 0 deletions examples/lb/tc2.tfvars
@@ -0,0 +1,35 @@
aws_region = "ap-northeast-1"
azs = ["ap-northeast-1a", "ap-northeast-1c", "ap-northeast-1d"]
cidr = "10.1.0.0/16"
enable_igw = true
enable_ngw = true
single_ngw = true
name = "eks-lbc-tc2"
tags = {
env = "dev"
test = "tc2"
}
kubernetes_version = "1.20"
node_groups = [
{
name = "mixed"
min_size = 1
max_size = 3
desired_size = 3
instance_type = "t3.medium"
instances_distribution = {
on_demand_percentage_above_base_capacity = 50
spot_allocation_strategy = "capacity-optimized"
}
instances_override = [
{
instance_type = "t3.small"
weighted_capacity = 2
},
{
instance_type = "t3.large"
weighted_capacity = 1
}
]
}
]
79 changes: 79 additions & 0 deletions examples/lb/variables.tf
@@ -0,0 +1,79 @@
# Variables for the module fixture code

### network
variable "aws_region" {
description = "The aws region to deploy"
type = string
default = "us-east-1"
}

variable "cidr" {
description = "The vpc CIDR (e.g. 10.0.0.0/16)"
type = string
default = "10.0.0.0/16"
}

variable "azs" {
description = "A list of availability zones for the vpc to deploy resources"
type = list(string)
default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

variable "subnets" {
description = "The list of subnets to deploy an eks cluster"
type = list(string)
default = null
}

variable "enable_igw" {
description = "Should be true if you want to provision Internet Gateway for internet facing communication"
type = bool
default = true
}

variable "enable_ngw" {
description = "Should be true if you want to provision NAT Gateway(s) across all of private networks"
type = bool
default = false
}

variable "single_ngw" {
description = "Should be true if you want to provision a single shared NAT Gateway across all of private networks"
type = bool
default = false
}

### kubernetes cluster
variable "kubernetes_version" {
description = "The target version of kubernetes"
type = string
}

variable "node_groups" {
description = "Node groups definition"
default = []
}

variable "managed_node_groups" {
description = "Amazon managed node groups definition"
default = []
}

variable "fargate_profiles" {
description = "Amazon Fargate for EKS profiles"
default = []
}

### description
variable "name" {
description = "The logical name of the module instance"
type = string
default = "eks"
}

### tags
variable "tags" {
description = "The key-value maps for tagging"
type = map(string)
default = {}
}
Binary file added images/aws-ec2-lbc-game-2048.png
60 changes: 60 additions & 0 deletions modules/lb-controller/README.md
@@ -0,0 +1,60 @@
# AWS Load Balancer Controller
The AWS Load Balancer Controller helps manage Elastic Load Balancers for a Kubernetes cluster.
- It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
- It satisfies Kubernetes Service resources by provisioning Network Load Balancers.

The controller makes it easy for users to take advantage of managed load balancing.

You can load balance application traffic across pods using the AWS Application Load Balancer (ALB). To learn more, see [What is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) in the Application Load Balancers User Guide. You can share an ALB across multiple applications in your Kubernetes cluster using Ingress groups. In the past, you needed to use a separate ALB for each application. The controller automatically provisions AWS ALBs in response to Kubernetes Ingress objects. ALBs can be used with pods deployed to nodes or to AWS Fargate. You can deploy an ALB to public or private subnets.

The [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) (formerly named AWS ALB Ingress Controller) creates ALBs and the necessary supporting AWS resources whenever a Kubernetes Ingress resource is created on the cluster with the `kubernetes.io/ingress.class: alb` annotation. The Ingress resource configures the ALB to route HTTP or HTTPS traffic to different pods within the cluster. To ensure that your Ingress objects use the AWS Load Balancer Controller, add that annotation to your Kubernetes Ingress specification, as in the sketch below. For more information, see the [Ingress specification](https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/spec/) on GitHub.
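
As an illustration, a minimal Ingress targeting the controller might look like the following (resource and service names are hypothetical; the optional `alb.ingress.kubernetes.io/group.name` annotation is one way to share a single ALB across multiple Ingresses):
```sh
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress                              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb                # route through the controller
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: sample    # optional: share one ALB
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-svc                    # hypothetical backend
                port:
                  number: 80
EOF
```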

## Examples
- [Quickstart Example](https://github.com/Young-ook/terraform-aws-eks/blob/main/modules/lb-controller/README.md#quickstart)
- [Kubernetes Ingress with AWS ALB Ingress Controller](https://aws.amazon.com/blogs/opensource/kubernetes-ingress-aws-alb-ingress-controller/)

## Quickstart
### Setup
This is a terraform module to deploy Helm chart for AWS LoadBalancer Controller.
```hcl
module "eks" {
source = "Young-ook/eks/aws"
name = "eks"
}
provider "helm" {
kubernetes {
host = module.eks.helmconfig.host
token = module.eks.helmconfig.token
cluster_ca_certificate = base64decode(module.eks.helmconfig.ca)
}
}
module "lb-controller" {
source = "Young-ook/eks/aws//modules/lb-controller"
cluster_name = module.eks.cluster.name
oidc = module.eks.oidc
tags = { env = "test" }
}
```
Modify the terraform configuration file to deploy the AWS Load Balancer Controller, then run terraform to apply the change to your environment.
```sh
terraform init
terraform apply
```

### Verify
Once all steps are finished, check that the controller pods are `Ready` in the `kube-system` namespace. Ensure the `aws-load-balancer-controller` deployment is created and running:

```sh
kubectl get deployment -n kube-system aws-load-balancer-controller
```
Output:
```
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 84s
```
If the pod is not healthy, check its logs (the pod name suffix will differ in your cluster):
```sh
kubectl -n kube-system logs aws-load-balancer-controller-7dd4ff8cb-wqq58
```
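
Since the pod name suffix changes on every rollout, you can also stream logs through the deployment and let kubectl pick a pod:
```sh
# kubectl resolves the deployment to one of its pods
kubectl -n kube-system logs deployment/aws-load-balancer-controller
```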
17 changes: 17 additions & 0 deletions modules/lb-controller/labels.tf
@@ -0,0 +1,17 @@
resource "random_string" "lbc-suffix" {
count = var.enabled ? 1 : 0
length = 5
upper = false
lower = true
number = false
special = false
}

locals {
suffix = var.petname && var.enabled ? random_string.lbc-suffix.0.result : ""
name = join("-", compact([var.cluster_name, "aws-load-balancer-controller", local.suffix]))
default-tags = merge(
{ "terraform.io" = "managed" },
{ "Name" = local.name },
)
}