From 75db459071564806ed5c22170ddd7d5d03b19c9f Mon Sep 17 00:00:00 2001
From: GitHub Action
Date: Fri, 17 Jan 2025 17:00:24 +0000
Subject: [PATCH] Deployed 4fe03ae with MkDocs version: 1.6.1

---
 UPGRADE-20.0/index.html  | 28 ++++------------------------
 search/search_index.json |  2 +-
 2 files changed, 5 insertions(+), 25 deletions(-)

diff --git a/UPGRADE-20.0/index.html b/UPGRADE-20.0/index.html
index 79f912a821..6432f3da25 100644
--- a/UPGRADE-20.0/index.html
+++ b/UPGRADE-20.0/index.html
@@ -529,7 +520,7 @@

    List of backwards incompatible c
  • The default/fallback value for the preserve argument of cluster_addons is now set to true. This has proven useful for users deprovisioning clusters, avoiding the situation where the CNI is deleted too early and leaves orphaned resources that result in conflicts.
  • The Karpenter sub-module's use of the irsa naming convention has been removed, along with an update to the Karpenter controller IAM policy to align with Karpenter's v1beta1/v0.32 changes. Instead of referring to the role as irsa or pod_identity, it is simply an IAM role used by the Karpenter controller, and it supports use with either IRSA and/or Pod Identity (the default) at this time
  • The aws-auth ConfigMap resources have been moved to a standalone sub-module. This removes the Kubernetes provider requirement from the main module and allows for the aws-auth ConfigMap to be managed independently of the main module. This sub-module will be removed entirely in the next major release.
  • -
  • Support for cluster access management has been added with the default authentication mode set as API_AND_CONFIG_MAP. This is a one way change if applied; if you wish to use CONFIG_MAP, you will need to set authentication_mode = "CONFIG_MAP" explicitly when upgrading.
  • +
  • Support for cluster access management has been added, with the default authentication mode set to API_AND_CONFIG_MAP. CONFIG_MAP on its own is no longer supported; you will need to use at least API_AND_CONFIG_MAP (see the sketch after this list)
  • Karpenter EventBridge rule key spot_interrupt updated to correct misspelling (was spot_interupt). This will cause the rule to be replaced
  • ⚠️ Upcoming Changes Planned in v21.0 ⚠️
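For illustration only - a minimal sketch of what opting into the new access management settings could look like in a v20.x module definition. The access_entries value, role ARN, and access policy shown here are placeholders/assumptions, not values taken from this guide:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "example"
  cluster_version = "1.29"

  # API_AND_CONFIG_MAP is the new default; shown explicitly here for clarity
  authentication_mode = "API_AND_CONFIG_MAP"

  # Hypothetical access entry granting cluster admin to an existing IAM role
  access_entries = {
    example = {
      principal_arn = "arn:aws:iam::111122223333:role/example-admin"

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }

  # ...
}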

    @@ -764,22 +755,11 @@

Terraform State Moves

⚠️ Authentication Mode Changes ⚠️

    Changing the authentication_mode is a one-way decision. See announcement blog for further details:

    -

    Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP. You can then switch from API_AND_CONFIG_MAP to API. You cannot revert these operations in the opposite direction. Meaning you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API. And you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.

    +

    Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP. You can then switch from API_AND_CONFIG_MAP to API. You cannot revert these operations in the opposite direction. Meaning you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API.

    [!IMPORTANT] If migrating to cluster access entries and you will NOT have any entries that remain in the aws-auth ConfigMap, you do not need to remove the configmap from the statefile. You can simply follow the migration guide and once access entries have been created, you can let Terraform remove/delete the aws-auth ConfigMap.

    If you WILL have entries that remain in the aws-auth ConfigMap, then you will need to remove the ConfigMap resources from the statefile to avoid any disruptions. When you add the new aws-auth sub-module and apply the changes, the sub-module will upsert the ConfigMap on the cluster. Provided the necessary entries are defined in that sub-module's definition, it will "re-adopt" the ConfigMap under Terraform's control.
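As a rough sketch only (assuming the v20.x aws-auth sub-module interface; copy the actual role/user entries from your existing module definition - the role ARN below is a placeholder), re-adopting the ConfigMap with the standalone sub-module might look like:

module "eks_aws_auth" {
  source  = "terraform-aws-modules/eks/aws//modules/aws-auth"
  version = "~> 20.0"

  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      # Placeholder - include any node group/Fargate profile roles that must remain in the ConfigMap
      rolearn  = "arn:aws:iam::111122223333:role/example-node-role"
      username = "system:node:{{EC2PrivateDNSName}}"
      groups = [
        "system:bootstrappers",
        "system:nodes",
      ]
    },
  ]
}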

    -

    authentication_mode = "CONFIG_MAP"

    -

    If using authentication_mode = "CONFIG_MAP", before making any changes, you will first need to remove the configmap from the statefile to avoid any disruptions:

    -
    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'
    -terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap
    -
    -

    Once the configmap has been removed from the statefile, you can add the new aws-auth sub-module and copy the relevant definitions from the EKS module over to the new aws-auth sub-module definition (see before after diff above).

    -
    -

[!CAUTION] You will need to add entries to the aws-auth sub-module for any IAM roles used by node groups and/or Fargate profiles - the module no longer handles this in the background on behalf of users.

    -

    When you apply the changes with the new sub-module, the configmap in the cluster will get updated with the contents provided in the sub-module definition, so please be sure all of the necessary entries are added before applying the changes.

    -

    authentication_mode = "API_AND_CONFIG_MAP"

When using authentication_mode = "API_AND_CONFIG_MAP" and there are entries that will remain in the configmap (entries that cannot be replaced by cluster access entries), you will first need to update the authentication_mode on the cluster to "API_AND_CONFIG_MAP". To help make this upgrade process easier, a copy of the changes defined in the v20.0.0 PR has been captured here, but with the aws-auth components still provided in the module. This means you get the equivalent of the v20.0.0 module, but it still includes support for the aws-auth configmap. You can follow the provided README on that interim migration module for the order of execution and return here once the authentication_mode has been updated to "API_AND_CONFIG_MAP". Note - EKS automatically adds access entries for the roles used by EKS managed node groups and Fargate profiles; users do not need to do anything additional for these roles.

    Once the authentication_mode has been updated, next you will need to remove the configmap from the statefile to avoid any disruptions:

    @@ -787,8 +767,8 @@

    authentication_mode = "API_AND_C

    [!NOTE] This is only required if there are entries that will remain in the aws-auth ConfigMap after migrating. Otherwise, you can skip this step and let Terraform destroy the ConfigMap.

    -
    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'
    -terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap
    +
    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'
    +terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap
     

    ℹ️ Terraform 1.7+ users

If you are using Terraform v1.7+, you can utilize the removed block to facilitate the removal of the configmap through code. You can create a fork/clone of the provided migration module, add the removed blocks, and apply those changes before proceeding. We do not want to force users onto the bleeding edge with this module, so we have not included removed block support at this time.
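For reference, a minimal sketch of such removed blocks (Terraform v1.7+ syntax; adjust the addresses to match your configuration):

removed {
  from = module.eks.kubernetes_config_map_v1_data.aws_auth

  lifecycle {
    destroy = false
  }
}

removed {
  # Include only if Terraform created the original configmap
  from = module.eks.kubernetes_config_map.aws_auth

  lifecycle {
    destroy = false
  }
}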

    diff --git a/search/search_index.json b/search/search_index.json index 02509a4ece..51037f4508 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Terraform AWS EKS module","text":"

    Moar content coming soon!

    "},{"location":"UPGRADE-17.0/","title":"How to handle the terraform-aws-eks module upgrade","text":""},{"location":"UPGRADE-17.0/#upgrade-module-to-v1700-for-managed-node-groups","title":"Upgrade module to v17.0.0 for Managed Node Groups","text":"

In this release, we decided to remove the random_pet resources in Managed Node Groups (MNG). They were used to recreate MNGs when something changed, but they were causing a lot of issues. To upgrade the module without recreating your MNGs, you will need to explicitly reuse their previous names by setting them in your MNG name argument.

    1. Run terraform apply with the module version v16.2.0
    2. Get your worker group names
    ~ terraform state show 'module.eks.module.node_groups.aws_eks_node_group.workers[\"example\"]' | grep node_group_name\nnode_group_name = \"test-eks-mwIwsvui-example-sincere-squid\"\n
3. Upgrade your module and configure your node groups to use existing names
    module \"eks\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"17.0.0\"\n\n  cluster_name    = \"test-eks-mwIwsvui\"\n  cluster_version = \"1.20\"\n  # ...\n\n  node_groups = {\n    example = {\n      name = \"test-eks-mwIwsvui-example-sincere-squid\"\n\n      # ...\n    }\n  }\n  # ...\n}\n
4. Run terraform plan; you should see that only the random_pet resources will be destroyed
    Terraform will perform the following actions:\n\n  # module.eks.module.node_groups.random_pet.node_groups[\"example\"] will be destroyed\n  - resource \"random_pet\" \"node_groups\" {\n      - id        = \"sincere-squid\" -> null\n      - keepers   = {\n          - \"ami_type\"                  = \"AL2_x86_64\"\n          - \"capacity_type\"             = \"SPOT\"\n          - \"disk_size\"                 = \"50\"\n          - \"iam_role_arn\"              = \"arn:aws:iam::123456789123:role/test-eks-mwIwsvui20210527220853611600000009\"\n          - \"instance_types\"            = \"t3.large\"\n          - \"key_name\"                  = \"\"\n          - \"node_group_name\"           = \"test-eks-mwIwsvui-example\"\n          - \"source_security_group_ids\" = \"\"\n          - \"subnet_ids\"                = \"subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx\"\n        } -> null\n      - length    = 2 -> null\n      - separator = \"-\" -> null\n    }\n\nPlan: 0 to add, 0 to change, 1 to destroy.\n
5. If everything sounds good to you, run terraform apply

After the first apply, we recommend creating a new node group and letting the module use the node_group_name_prefix (by removing the name argument) to generate names and avoid collisions during node group re-creation if needed, because the lifecycle is create_before_destroy = true.
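For example, a minimal sketch of a replacement node group that omits the name argument so the module generates one (attribute names follow the v17.x inputs shown elsewhere on this page; values are placeholders):

node_groups = {
  example_v2 = {
    # `name` intentionally omitted so the module generates it from the name prefix
    instance_types   = ["t3.large"]
    min_capacity     = 1
    max_capacity     = 10
    desired_capacity = 1
  }
}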

    "},{"location":"UPGRADE-18.0/","title":"Upgrade from v17.x to v18.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744 where users have shared the steps/changes that have worked for their configurations to upgrade. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for all; this has proven to be a useful thread to share collective information from the broader community regarding v18.x upgrades.

    For most users, adding the following to your v17.x configuration will preserve the state of your cluster control plane when upgrading to v18.x:

    prefix_separator                   = \"\"\niam_role_name                      = $CLUSTER_NAME\ncluster_security_group_name        = $CLUSTER_NAME\ncluster_security_group_description = \"EKS cluster security group.\"\n

    This configuration assumes that create_iam_role is set to true, which is the default value.

    As the location of the Terraform state of the IAM role has been changed from 17.x to 18.x, you'll also have to move the state before running terraform apply by calling:

    terraform state mv 'module.eks.aws_iam_role.cluster[0]' 'module.eks.aws_iam_role.this[0]'\n

    See more information here

    "},{"location":"UPGRADE-18.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
    • Launch configuration support has been removed and only launch template is supported going forward. AWS is no longer adding new features back into launch configuration and their docs state We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.
    • Support for managing aws-auth configmap has been removed. This change also removes the dependency on the Kubernetes Terraform provider, the local dependency on aws-iam-authenticator for users, as well as the reliance on the forked http provider to wait and poll on cluster creation. To aid users in this change, an output variable aws_auth_configmap_yaml has been provided which renders the aws-auth configmap necessary to support at least the IAM roles used by the module (additional mapRoles/mapUsers definitions to be provided by users)
• Support for managing kubeconfig and its associated local_file resources has been removed; users can use the AWS CLI-provided aws eks update-kubeconfig --name <cluster_name> to update their local kubeconfig as necessary (see the example after this list)
    • The terminology used in the module has been modified to reflect that used by the AWS documentation.
    • AWS EKS Managed Node Group, eks_managed_node_groups, was previously referred to as simply node group, node_groups
• Self Managed Node Group, self_managed_node_groups, was previously referred to as worker group, worker_groups
    • AWS Fargate Profile, fargate_profiles, remains unchanged in terms of naming and terminology
    • The three different node group types supported by AWS and the module have been refactored into standalone sub-modules that are both used by the root eks module as well as available for individual, standalone consumption if desired.
    • The previous node_groups sub-module is now named eks-managed-node-group and provisions a single AWS EKS Managed Node Group per sub-module definition (previous version utilized for_each to create 0 or more node groups)
      • Additional changes for the eks-managed-node-group sub-module over the previous node_groups module include:
      • Variable name changes defined in section Variable and output changes below
      • Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      • Support for nearly full control of the security group created, or provide the ID of an existing security group, has been added
      • User data has been revamped and all user data logic moved to the _user_data internal sub-module; the local userdata.sh.tpl has been removed entirely
    • The previous fargate sub-module is now named fargate-profile and provisions a single AWS EKS Fargate Profile per sub-module definition (previous version utilized for_each to create 0 or more profiles)
      • Additional changes for the fargate-profile sub-module over the previous fargate module include:
      • Variable name changes defined in section Variable and output changes below
      • Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      • Similar to the eks_managed_node_group_defaults and self_managed_node_group_defaults, a fargate_profile_defaults has been provided to allow users to control the default configurations for the Fargate profiles created
    • A sub-module for self-managed-node-group has been created and provisions a single self managed node group (autoscaling group) per sub-module definition
      • Additional changes for the self-managed-node-group sub-module over the previous node_groups variable include:
      • The underlying autoscaling group and launch template have been updated to more closely match that of the terraform-aws-autoscaling module and the features it offers
      • The previous iteration used a count over a list of node group definitions which was prone to disruptive updates; this is now replaced with a map/for_each to align with that of the EKS managed node group and Fargate profile behaviors/style
• The user data configuration supported across the module has been completely revamped. A new _user_data internal sub-module has been created to consolidate all user data configuration in one location, which provides better support for testability (via the tests/user-data example). The new sub-module supports nearly all possible combinations, including the ability for users to provide their own user data template which will be rendered by the module. See the tests/user-data example project for the full range of example configuration possibilities; more details on the logic of the design can be found in the modules/_user_data directory.
    • Resource name changes may cause issues with existing resources. For example, security groups and IAM roles cannot be renamed, they must be recreated. Recreation of these resources may also trigger a recreation of the cluster. To use the legacy (< 18.x) resource naming convention, set prefix_separator to \"\".
    • Security group usage has been overhauled to provide only the bare minimum network connectivity required to launch a bare bones cluster. See the security group documentation section for more details. Users upgrading to v18.x will want to review the rules they have in place today versus the rules provisioned by the v18.x module and ensure to make any necessary adjustments for their specific workload.
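For example, refreshing a local kubeconfig with the AWS CLI (cluster name and region are placeholders):

aws eks update-kubeconfig --name <cluster_name> --region <region>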
    "},{"location":"UPGRADE-18.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-18.0/#added","title":"Added","text":"
    • Support for AWS EKS Addons has been added
    • Support for AWS EKS Cluster Identity Provider Configuration has been added
    • AWS Terraform provider minimum required version has been updated to 3.64 to support the changes made and additional resources supported
    • An example user_data project has been added to aid in demonstrating, testing, and validating the various methods of configuring user data with the _user_data sub-module as well as the root eks module
    • Template for rendering the aws-auth configmap output - aws_auth_cm.tpl
    • Template for Bottlerocket OS user data bootstrapping - bottlerocket_user_data.tpl
    "},{"location":"UPGRADE-18.0/#modified","title":"Modified","text":"
    • The previous fargate example has been renamed to fargate_profile
    • The previous irsa and instance_refresh examples have been merged into one example irsa_autoscale_refresh
    • The previous managed_node_groups example has been renamed to self_managed_node_group
• The previously hardcoded EKS OIDC root CA thumbprint value and variable have been replaced with a tls_certificate data source that refers to the cluster OIDC issuer url (see the sketch after this list). Thumbprint values should remain unchanged, however
• Individual cluster security group resources have been replaced with a single security group resource that takes a map of rules as input. The default ingress/egress rules have had their scope reduced in order to provide the bare minimum of access to permit successful cluster creation and allow users to opt in to any additional network access as needed for a better security posture. This means the 0.0.0.0/0 egress rule has been removed; TCP/443 and TCP/10250 egress rules to the node group security group are used instead
    • The Linux/bash user data template has been updated to include the bare minimum necessary for bootstrapping AWS EKS Optimized AMI derivative nodes with provisions for providing additional user data and configurations; was named userdata.sh.tpl and is now named linux_user_data.tpl
    • The Windows user data template has been renamed from userdata_windows.tpl to windows_user_data.tpl
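As a rough illustration of the pattern now used (not the module's exact internal code), the thumbprint can be derived from the cluster's OIDC issuer URL roughly as follows:

data "tls_certificate" "this" {
  url = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "oidc_provider" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.this.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.this.identity[0].oidc[0].issuer
}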
    "},{"location":"UPGRADE-18.0/#removed","title":"Removed","text":"
• Miscellaneous documents on how to configure Kubernetes cluster internals have been removed. Documentation related to configuring the AWS EKS cluster and the supporting infrastructure resources provided by the module is supported, while cluster-internal configuration is out of scope for this project
    • The previous bottlerocket example has been removed in favor of demonstrating the use and configuration of Bottlerocket nodes via the respective eks_managed_node_group and self_managed_node_group examples
    • The previous launch_template and launch_templates_with_managed_node_groups examples have been removed; only launch templates are now supported (default) and launch configuration support has been removed
    • The previous secrets_encryption example has been removed; the functionality has been demonstrated in several of the new examples rendering this standalone example redundant
    • The additional, custom IAM role policy for the cluster role has been removed. The permissions are either now provided in the attached managed AWS permission policies used or are no longer required
    • The kubeconfig.tpl template; kubeconfig management is no longer supported under this module
    • The HTTP Terraform provider (forked copy) dependency has been removed
    "},{"location":"UPGRADE-18.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

      • cluster_create_timeout, cluster_update_timeout, and cluster_delete_timeout have been replaced with cluster_timeouts
      • kubeconfig_name
      • kubeconfig_output_path
      • kubeconfig_file_permission
      • kubeconfig_api_version
      • kubeconfig_aws_authenticator_command
      • kubeconfig_aws_authenticator_command_args
      • kubeconfig_aws_authenticator_additional_args
      • kubeconfig_aws_authenticator_env_variables
      • write_kubeconfig
      • default_platform
      • manage_aws_auth
      • aws_auth_additional_labels
      • map_accounts
      • map_roles
      • map_users
      • fargate_subnets
      • worker_groups_launch_template
      • worker_security_group_id
      • worker_ami_name_filter
      • worker_ami_name_filter_windows
      • worker_ami_owner_id
      • worker_ami_owner_id_windows
      • worker_additional_security_group_ids
      • worker_sg_ingress_from_port
      • workers_additional_policies
      • worker_create_security_group
      • worker_create_initial_lifecycle_hooks
      • worker_create_cluster_primary_security_group_rules
      • cluster_create_endpoint_private_access_sg_rule
      • cluster_endpoint_private_access_cidrs
      • cluster_endpoint_private_access_sg
      • manage_worker_iam_resources
      • workers_role_name
      • attach_worker_cni_policy
      • eks_oidc_root_ca_thumbprint
      • create_fargate_pod_execution_role
      • fargate_pod_execution_role_name
      • cluster_egress_cidrs
      • workers_egress_cidrs
      • wait_for_cluster_timeout
      • EKS Managed Node Group sub-module (was node_groups)
      • default_iam_role_arn
      • workers_group_defaults
      • worker_security_group_id
      • node_groups_defaults
      • node_groups
      • ebs_optimized_not_supported
      • Fargate profile sub-module (was fargate)
      • create_eks and create_fargate_pod_execution_role have been replaced with simply create
    2. Renamed variables:

      • create_eks -> create
      • subnets -> subnet_ids
      • cluster_create_security_group -> create_cluster_security_group
      • cluster_log_retention_in_days -> cloudwatch_log_group_retention_in_days
      • cluster_log_kms_key_id -> cloudwatch_log_group_kms_key_id
      • manage_cluster_iam_resources -> create_iam_role
      • cluster_iam_role_name -> iam_role_name
      • permissions_boundary -> iam_role_permissions_boundary
      • iam_path -> iam_role_path
      • pre_userdata -> pre_bootstrap_user_data
      • additional_userdata -> post_bootstrap_user_data
      • worker_groups -> self_managed_node_groups
      • workers_group_defaults -> self_managed_node_group_defaults
      • node_groups -> eks_managed_node_groups
      • node_groups_defaults -> eks_managed_node_group_defaults
      • EKS Managed Node Group sub-module (was node_groups)
      • create_eks -> create
      • worker_additional_security_group_ids -> vpc_security_group_ids
      • Fargate profile sub-module
      • fargate_pod_execution_role_name -> name
      • create_fargate_pod_execution_role -> create_iam_role
      • subnets -> subnet_ids
      • iam_path -> iam_role_path
      • permissions_boundary -> iam_role_permissions_boundary
    3. Added variables:

      • cluster_additional_security_group_ids added to allow users to add additional security groups to the cluster as needed
      • cluster_security_group_name
      • cluster_security_group_use_name_prefix added to allow users to use either the name as specified or default to using the name specified as a prefix
      • cluster_security_group_description
      • cluster_security_group_additional_rules
      • cluster_security_group_tags
      • create_cloudwatch_log_group added in place of the logic that checked if any cluster log types were enabled to allow users to opt in as they see fit
      • create_node_security_group added to create single security group that connects node groups and cluster in central location
      • node_security_group_id
      • node_security_group_name
      • node_security_group_use_name_prefix
      • node_security_group_description
      • node_security_group_additional_rules
      • node_security_group_tags
      • iam_role_arn
      • iam_role_use_name_prefix
      • iam_role_description
      • iam_role_additional_policies
      • iam_role_tags
      • cluster_addons
      • cluster_identity_providers
      • fargate_profile_defaults
      • prefix_separator added to support legacy behavior of not having a prefix separator
      • EKS Managed Node Group sub-module (was node_groups)
      • platform
      • enable_bootstrap_user_data
      • pre_bootstrap_user_data
      • post_bootstrap_user_data
      • bootstrap_extra_args
      • user_data_template_path
      • create_launch_template
      • launch_template_name
      • launch_template_use_name_prefix
      • description
      • ebs_optimized
      • ami_id
      • key_name
      • launch_template_default_version
      • update_launch_template_default_version
      • disable_api_termination
      • kernel_id
      • ram_disk_id
      • block_device_mappings
      • capacity_reservation_specification
      • cpu_options
      • credit_specification
      • elastic_gpu_specifications
      • elastic_inference_accelerator
      • enclave_options
      • instance_market_options
      • license_specifications
      • metadata_options
      • enable_monitoring
      • network_interfaces
      • placement
      • min_size
      • max_size
      • desired_size
      • use_name_prefix
      • ami_type
      • ami_release_version
      • capacity_type
      • disk_size
      • force_update_version
      • instance_types
      • labels
      • cluster_version
      • launch_template_version
      • remote_access
      • taints
      • update_config
      • timeouts
      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • vpc_id
      • security_group_rules
      • cluster_security_group_id
      • security_group_tags
      • create_iam_role
      • iam_role_arn
      • iam_role_name
      • iam_role_use_name_prefix
      • iam_role_path
      • iam_role_description
      • iam_role_permissions_boundary
      • iam_role_additional_policies
      • iam_role_tags
      • Fargate profile sub-module (was fargate)
      • iam_role_arn (for if create_iam_role is false to bring your own externally created role)
      • iam_role_name
      • iam_role_use_name_prefix
      • iam_role_description
      • iam_role_additional_policies
      • iam_role_tags
      • selectors
      • timeouts
    4. Removed outputs:

      • cluster_version
      • kubeconfig
      • kubeconfig_filename
      • workers_asg_arns
      • workers_asg_names
      • workers_user_data
      • workers_default_ami_id
      • workers_default_ami_id_windows
      • workers_launch_template_ids
      • workers_launch_template_arns
      • workers_launch_template_latest_versions
      • worker_security_group_id
      • worker_iam_instance_profile_arns
      • worker_iam_instance_profile_names
      • worker_iam_role_name
      • worker_iam_role_arn
      • fargate_profile_ids
      • fargate_profile_arns
      • fargate_iam_role_name
      • fargate_iam_role_arn
      • node_groups
      • security_group_rule_cluster_https_worker_ingress
      • EKS Managed Node Group sub-module (was node_groups)
      • node_groups
      • aws_auth_roles
      • Fargate profile sub-module (was fargate)
      • aws_auth_roles
    5. Renamed outputs:

      • config_map_aws_auth -> aws_auth_configmap_yaml
      • Fargate profile sub-module (was fargate)
      • fargate_profile_ids -> fargate_profile_id
      • fargate_profile_arns -> fargate_profile_arn
    6. Added outputs:

      • cluster_platform_version
      • cluster_status
      • cluster_security_group_arn
      • cluster_security_group_id
      • node_security_group_arn
      • node_security_group_id
      • cluster_iam_role_unique_id
      • cluster_addons
      • cluster_identity_providers
      • fargate_profiles
      • eks_managed_node_groups
      • self_managed_node_groups
      • EKS Managed Node Group sub-module (was node_groups)
      • launch_template_id
      • launch_template_arn
      • launch_template_latest_version
      • node_group_arn
      • node_group_id
      • node_group_resources
      • node_group_status
      • security_group_arn
      • security_group_id
      • iam_role_name
      • iam_role_arn
      • iam_role_unique_id
      • Fargate profile sub-module (was fargate)
      • iam_role_unique_id
      • fargate_profile_status
    "},{"location":"UPGRADE-18.0/#upgrade-migrations","title":"Upgrade Migrations","text":""},{"location":"UPGRADE-18.0/#before-17x-example","title":"Before 17.x Example","text":"
    module \"eks\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"~> 17.0\"\n\n  cluster_name                    = local.name\n  cluster_version                 = local.cluster_version\n  cluster_endpoint_private_access = true\n  cluster_endpoint_public_access  = true\n\n  vpc_id  = module.vpc.vpc_id\n  subnets = module.vpc.private_subnets\n\n  # Managed Node Groups\n  node_groups_defaults = {\n    ami_type  = \"AL2_x86_64\"\n    disk_size = 50\n  }\n\n  node_groups = {\n    node_group = {\n      min_capacity     = 1\n      max_capacity     = 10\n      desired_capacity = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n\n      update_config = {\n        max_unavailable_percentage = 50\n      }\n\n      k8s_labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = [\n        {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      ]\n\n      additional_tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  # Worker groups\n  worker_additional_security_group_ids = [aws_security_group.additional.id]\n\n  worker_groups_launch_template = [\n    {\n      name                    = \"worker-group\"\n      override_instance_types = [\"m5.large\", \"m5a.large\", \"m5d.large\", \"m5ad.large\"]\n      spot_instance_pools     = 4\n      asg_max_size            = 5\n      asg_desired_capacity    = 2\n      kubelet_extra_args      = \"--node-labels=node.kubernetes.io/lifecycle=spot\"\n      public_ip               = true\n    },\n  ]\n\n  # Fargate\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  tags = {\n    Environment = \"test\"\n    GithubRepo  = \"terraform-aws-eks\"\n    GithubOrg   = \"terraform-aws-modules\"\n  }\n}\n
    "},{"location":"UPGRADE-18.0/#after-18x-example","title":"After 18.x Example","text":"
    module \"cluster_after\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"~> 18.0\"\n\n  cluster_name                    = local.name\n  cluster_version                 = local.cluster_version\n  cluster_endpoint_private_access = true\n  cluster_endpoint_public_access  = true\n\n  vpc_id     = module.vpc.vpc_id\n  subnet_ids = module.vpc.private_subnets\n\n  eks_managed_node_group_defaults = {\n    ami_type  = \"AL2_x86_64\"\n    disk_size = 50\n  }\n\n  eks_managed_node_groups = {\n    node_group = {\n      min_size     = 1\n      max_size     = 10\n      desired_size = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n\n      update_config = {\n        max_unavailable_percentage = 50\n      }\n\n      labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = [\n        {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      ]\n\n      tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  self_managed_node_group_defaults = {\n    vpc_security_group_ids = [aws_security_group.additional.id]\n  }\n\n  self_managed_node_groups = {\n    worker_group = {\n      name = \"worker-group\"\n\n      min_size      = 1\n      max_size      = 5\n      desired_size  = 2\n      instance_type = \"m4.large\"\n\n      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n\n      block_device_mappings = {\n        xvda = {\n          device_name = \"/dev/xvda\"\n          ebs = {\n            delete_on_termination = true\n            encrypted             = false\n            volume_size           = 100\n            volume_type           = \"gp2\"\n          }\n\n        }\n      }\n\n      use_mixed_instances_policy = true\n      mixed_instances_policy = {\n        instances_distribution = {\n          spot_instance_pools = 4\n        }\n\n        override = [\n          { instance_type = \"m5.large\" },\n          { instance_type = \"m5a.large\" },\n          { instance_type = \"m5d.large\" },\n          { instance_type = \"m5ad.large\" },\n        ]\n      }\n    }\n  }\n\n  # Fargate\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  tags = {\n    Environment = \"test\"\n    GithubRepo  = \"terraform-aws-eks\"\n    GithubOrg   = \"terraform-aws-modules\"\n  }\n}\n
    "},{"location":"UPGRADE-18.0/#diff-of-before-after","title":"Diff of before <> after","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 17.0\"\n+  version = \"~> 18.0\"\n\n   cluster_name                    = local.name\n   cluster_version                 = local.cluster_version\n   cluster_endpoint_private_access = true\n   cluster_endpoint_public_access  = true\n\n   vpc_id  = module.vpc.vpc_id\n-  subnets = module.vpc.private_subnets\n+  subnet_ids = module.vpc.private_subnets\n\n-  # Managed Node Groups\n-  node_groups_defaults = {\n+  eks_managed_node_group_defaults = {\n     ami_type  = \"AL2_x86_64\"\n     disk_size = 50\n   }\n\n-  node_groups = {\n+  eks_managed_node_groups = {\n     node_group = {\n-      min_capacity     = 1\n-      max_capacity     = 10\n-      desired_capacity = 1\n+      min_size     = 1\n+      max_size     = 10\n+      desired_size = 1\n\n       instance_types = [\"t3.large\"]\n       capacity_type  = \"SPOT\"\n\n       update_config = {\n         max_unavailable_percentage = 50\n       }\n\n-      k8s_labels = {\n+      labels = {\n         Environment = \"test\"\n         GithubRepo  = \"terraform-aws-eks\"\n         GithubOrg   = \"terraform-aws-modules\"\n       }\n\n       taints = [\n         {\n           key    = \"dedicated\"\n           value  = \"gpuGroup\"\n           effect = \"NO_SCHEDULE\"\n         }\n       ]\n\n-      additional_tags = {\n+      tags = {\n         ExtraTag = \"example\"\n       }\n     }\n   }\n\n-  # Worker groups\n-  worker_additional_security_group_ids = [aws_security_group.additional.id]\n-\n-  worker_groups_launch_template = [\n-    {\n-      name                    = \"worker-group\"\n-      override_instance_types = [\"m5.large\", \"m5a.large\", \"m5d.large\", \"m5ad.large\"]\n-      spot_instance_pools     = 4\n-      asg_max_size            = 5\n-      asg_desired_capacity    = 2\n-      kubelet_extra_args      = \"--node-labels=node.kubernetes.io/lifecycle=spot\"\n-      public_ip               = true\n-    },\n-  ]\n+  self_managed_node_group_defaults = {\n+    vpc_security_group_ids = [aws_security_group.additional.id]\n+  }\n+\n+  self_managed_node_groups = {\n+    worker_group = {\n+      name = \"worker-group\"\n+\n+      min_size      = 1\n+      max_size      = 5\n+      desired_size  = 2\n+      instance_type = \"m4.large\"\n+\n+      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n+\n+      block_device_mappings = {\n+        xvda = {\n+          device_name = \"/dev/xvda\"\n+          ebs = {\n+            delete_on_termination = true\n+            encrypted             = false\n+            volume_size           = 100\n+            volume_type           = \"gp2\"\n+          }\n+\n+        }\n+      }\n+\n+      use_mixed_instances_policy = true\n+      mixed_instances_policy = {\n+        instances_distribution = {\n+          spot_instance_pools = 4\n+        }\n+\n+        override = [\n+          { instance_type = \"m5.large\" },\n+          { instance_type = \"m5a.large\" },\n+          { instance_type = \"m5d.large\" },\n+          { instance_type = \"m5ad.large\" },\n+        ]\n+      }\n+    }\n+  }\n\n   # Fargate\n   fargate_profiles = {\n     default = {\n       name = \"default\"\n       selectors = [\n         {\n           namespace = \"kube-system\"\n           labels = {\n             k8s-app = \"kube-dns\"\n           }\n         },\n         {\n           namespace = \"default\"\n         }\n       ]\n\n       tags = {\n         Owner = \"test\"\n       }\n\n       timeouts = {\n 
        create = \"20m\"\n         delete = \"20m\"\n       }\n     }\n   }\n\n   tags = {\n     Environment = \"test\"\n     GithubRepo  = \"terraform-aws-eks\"\n     GithubOrg   = \"terraform-aws-modules\"\n   }\n }\n
    "},{"location":"UPGRADE-18.0/#attaching-an-iam-role-policy-to-a-fargate-profile","title":"Attaching an IAM role policy to a Fargate profile","text":""},{"location":"UPGRADE-18.0/#before-17x","title":"Before 17.x","text":"
    resource \"aws_iam_role_policy_attachment\" \"default\" {\n  role       = module.eks.fargate_iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n
    "},{"location":"UPGRADE-18.0/#after-18x","title":"After 18.x","text":"
    # Attach the policy to an \"example\" Fargate profile\nresource \"aws_iam_role_policy_attachment\" \"default\" {\n  role       = module.eks.fargate_profiles[\"example\"].iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n

    Or:

    # Attach the policy to all Fargate profiles\nresource \"aws_iam_role_policy_attachment\" \"default\" {\n  for_each = module.eks.fargate_profiles\n\n  role       = each.value.iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n
    "},{"location":"UPGRADE-19.0/","title":"Upgrade from v18.x to v19.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    "},{"location":"UPGRADE-19.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
• The cluster_id output used to output the name of the cluster. This is due to the fact that the cluster name is a unique constraint and is therefore set as the unique identifier within Terraform's state map. However, starting with local EKS clusters created on Outposts, there is now an attribute returned from the aws eks create-cluster API named id. The cluster_id output has been updated to return this value, which means that for current, standard EKS clusters created in the AWS cloud, no value will be returned (at the time of this writing) for cluster_id, and only local EKS clusters on Outposts will return a value that looks like a UUID/GUID. Users should switch all instances of cluster_id to use cluster_name before upgrading to v19. Reference
    • Minimum supported version of Terraform AWS provider updated to v4.45 to support the latest features provided via the resources utilized.
    • Minimum supported version of Terraform updated to v1.0
• The individual security group created per EKS managed node group or self-managed node group has been removed. This configuration went mostly unused and would often cause confusion (\"Why is there an empty security group attached to my nodes?\"). This functionality can easily be replicated by users providing one or more externally created security groups to attach to nodes launched from the node group.
    • Previously, var.iam_role_additional_policies (one for each of the following: cluster IAM role, EKS managed node group IAM role, self-managed node group IAM role, and Fargate Profile IAM role) accepted a list of strings. This worked well for policies that already existed but failed for policies being created at the same time as the cluster due to the well-known issue of unknown values used in a for_each loop. To rectify this issue in v19.x, two changes were made:
• var.iam_role_additional_policies was changed from type list(string) to type map(string) -> this is a breaking change (see the example after this list). More information on managing this change can be found below, under Terraform State Moves
    • The logic used in the root module for this variable was changed to replace the use of try() with lookup(). More details on why can be found here
    • The cluster name has been removed from the Karpenter module event rule names. Due to the use of long cluster names appending to the provided naming scheme, the cluster name has moved to a ClusterName tag and the event rule name is now a prefix. This guarantees that users can have multiple instances of Karpenter with their respective event rules/SQS queue without name collisions, while also still being able to identify which queues and event rules belong to which cluster.
    • The new variable node_security_group_enable_recommended_rules is set to true by default and may conflict with any custom ingress/egress rules. Please ensure that any duplicates from the node_security_group_additional_rules are removed before upgrading, or set node_security_group_enable_recommended_rules to false. Reference
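To illustrate the shape change described above (the map key additional is just an arbitrary example):

# v18.x - list(string)
iam_role_additional_policies = [aws_iam_policy.additional.arn]

# v19.x - map(string); the static key allows use in for_each even when the policy ARN is not yet known
iam_role_additional_policies = {
  additional = aws_iam_policy.additional.arn
}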
    "},{"location":"UPGRADE-19.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-19.0/#added","title":"Added","text":"
• Support for setting preserve as well as most_recent on addons (see the example after this list).
    • preserve indicates if you want to preserve the created resources when deleting the EKS add-on
    • most_recent indicates if you want to use the most recent revision of the add-on or the default version (default)
    • Support for setting default node security group rules for common access patterns required:
    • Egress all for 0.0.0.0/0/::/0
    • Ingress from cluster security group for 8443/TCP and 9443/TCP for common applications such as ALB Ingress Controller, Karpenter, OPA Gatekeeper, etc. These are commonly used as webhook ports for validating and mutating webhooks
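For example, a minimal sketch of per-add-on settings (add-on names and values here are illustrative; see the full before/after diff later on this page):

cluster_addons = {
  coredns = {
    most_recent = true
    preserve    = true
  }
  kube-proxy = {}
  vpc-cni    = {}
}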
    "},{"location":"UPGRADE-19.0/#modified","title":"Modified","text":"
    • cluster_security_group_additional_rules and node_security_group_additional_rules have been modified to use lookup() instead of try() to avoid the well-known issue of unknown values within a for_each loop
• The default cluster security group rules no longer include egress rules for TCP/443 and TCP/10250 to node groups, since the cluster primary security group includes a default rule for ALL to 0.0.0.0/0/::/0
• The default node security group egress rules have been removed, since the default security group settings have an egress rule for ALL to 0.0.0.0/0/::/0
    • block_device_mappings previously required a map of maps but has since changed to an array of maps. Users can remove the outer key for each block device mapping and replace the outermost map {} with an array []. There are no state changes required for this change.
    • create_kms_key previously defaulted to false and now defaults to true. Clusters created with this module now default to enabling secret encryption by default with a customer-managed KMS key created by this module
    • cluster_encryption_config previously used a type of list(any) and now uses a type of any -> users can simply remove the outer [...] brackets on v19.x
    • cluster_encryption_config previously defaulted to [] and now defaults to {resources = [\"secrets\"]} to encrypt secrets by default
    • cluster_endpoint_public_access previously defaulted to true and now defaults to false. Clusters created with this module now default to private-only access to the cluster endpoint
    • cluster_endpoint_private_access previously defaulted to false and now defaults to true
    • The addon configuration now sets \"OVERWRITE\" as the default value for resolve_conflicts to ease add-on upgrade management. Users can opt out of this by instead setting \"NONE\" as the value for resolve_conflicts
    • The kms module used has been updated from v1.0.2 to v1.1.0 - no material changes other than updated to latest
    • The default value for EKS managed node group update_config has been updated to the recommended { max_unavailable_percentage = 33 }
    • The default value for the self-managed node group instance_refresh has been updated to the recommended:
      {\n  strategy = \"Rolling\"\n  preferences = {\n    min_healthy_percentage = 66\n  }\n}\n
    "},{"location":"UPGRADE-19.0/#removed","title":"Removed","text":"
• Removed all references to aws_default_tags to avoid update conflicts; this is the responsibility of the provider and should be handled at the provider level
    • https://github.com/terraform-aws-modules/terraform-aws-eks/issues?q=is%3Aissue+default_tags+is%3Aclosed
    • https://github.com/terraform-aws-modules/terraform-aws-eks/pulls?q=is%3Apr+default_tags+is%3Aclosed
    "},{"location":"UPGRADE-19.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

  • node_security_group_ntp_ipv4_cidr_block - default security group settings have an egress rule for ALL to 0.0.0.0/0/::/0
  • node_security_group_ntp_ipv6_cidr_block - default security group settings have an egress rule for ALL to 0.0.0.0/0/::/0
  • Self-managed node groups:
      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • security_group_rules
      • security_group_tags
      • cluster_security_group_id
      • vpc_id
  • EKS managed node groups:

      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • security_group_rules
      • security_group_tags
      • cluster_security_group_id
      • vpc_id
2. Renamed variables:

  • N/A

3. Added variables:

  • provision_on_outpost for Outposts support
  • outpost_config for Outposts support
  • cluster_addons_timeouts for setting a common set of timeouts for all addons (unless a specific value is provided within the addon configuration)
  • service_ipv6_cidr for setting the IPv6 CIDR block for the Kubernetes service addresses
  • node_security_group_enable_recommended_rules for enabling recommended node security group rules for common access patterns
  • Self-managed node groups:

      • launch_template_id for use when using an existing/externally created launch template (Ref: https://github.com/terraform-aws-modules/terraform-aws-autoscaling/pull/204)
      • maintenance_options
      • private_dns_name_options
      • instance_requirements
      • context
      • default_instance_warmup
      • force_delete_warm_pool
  • EKS managed node groups:
  • use_custom_launch_template was added to better clarify how users can switch between a custom launch template or the default launch template provided by the EKS managed node group (see the example after this list). Previously, to achieve this same functionality of using the default launch template, users needed to set create_launch_template = false and launch_template_name = \"\", which is not very intuitive.
      • launch_template_id for use when using an existing/externally created launch template (Ref: https://github.com/terraform-aws-modules/terraform-aws-autoscaling/pull/204)
      • maintenance_options
  • private_dns_name_options
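For example, to use the EKS-provided default launch template on v19.x:

eks_managed_node_groups = {
  default = {
    # v19.x replacement for create_launch_template = false and launch_template_name = ""
    use_custom_launch_template = false
  }
}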
4. Removed outputs:

  • Self-managed node groups:

      • security_group_arn
      • security_group_id
  • EKS managed node groups:

      • security_group_arn
      • security_group_id
5. Renamed outputs:

  • cluster_id is not renamed but the value it returns is now different. For standard EKS clusters created in the AWS cloud, the value returned at the time of this writing is null/empty. For local EKS clusters created on Outposts, the value returned will look like a UUID/GUID. Users should switch all instances of cluster_id to use cluster_name before upgrading to v19. Reference

6. Added outputs:

  • cluster_name - The cluster_id currently set by the AWS provider is actually the cluster name, but in the future this will change and there will be a distinction between cluster_name and cluster_id. Reference

    "},{"location":"UPGRADE-19.0/#upgrade-migrations","title":"Upgrade Migrations","text":"
1. Before upgrading your module definition to v19.x, please see below for both EKS managed node groups and self-managed node groups, and remove the node group security group(s) prior to upgrading.
    "},{"location":"UPGRADE-19.0/#self-managed-node-groups","title":"Self-Managed Node Groups","text":"

    Self-managed node groups on v18.x by default create a security group that does not specify any rules. In v19.x, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version v18.x of your module definition, remove this security group from your node groups by setting create_security_group = false.

    • If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in vpc_security_group_ids. Once this is in place, you can proceed with the original security group removal.
    • For most users, the security group is not used and can be safely removed. However, deployed instances will have the security group attached to nodes and require the security group to be disassociated before the security group can be deleted. Because instances are deployed via autoscaling groups, we cannot simply remove the security group from the code and have those changes reflected on the instances. Instead, we have to update the code and then trigger the autoscaling groups to cycle the instances deployed so that new instances are provisioned without the security group attached. You can utilize the instance_refresh parameter of Autoscaling groups to force nodes to re-deploy when removing the security group since changes to launch templates automatically trigger an instance refresh. An example configuration is provided below.
    • Add the following to either/or self_managed_node_group_defaults or the individual self-managed node group definitions:
      create_security_group = false\ninstance_refresh = {\n  strategy = \"Rolling\"\n  preferences = {\n    min_healthy_percentage = 66\n  }\n}\n
    • It is recommended to use the aws-node-termination-handler while performing this update. Please refer to the irsa-autoscale-refresh example for usage. This will ensure that pods are safely evicted in a controlled manner to avoid service disruptions.
    • Once the necessary configurations are in place, you can apply the changes which will:
    • Create a new launch template (version) without the self-managed node group security group
    • Replace instances based on the instance_refresh configuration settings
    • New instances will launch without the self-managed node group security group, and prior instances will be terminated
    • Once the self-managed node group has cycled, the security group will be deleted
    "},{"location":"UPGRADE-19.0/#eks-managed-node-groups","title":"EKS Managed Node Groups","text":"

    EKS managed node groups on v18.x by default create a security group that does not specify any rules. In v19.x, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version v18.x of your module definition, remove this security group from your node groups by setting create_security_group = false.
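For example, while still on v18.x, this can be set for all EKS managed node groups via the defaults:

eks_managed_node_group_defaults = {
  create_security_group = false
}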

    • If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in vpc_security_group_ids. Once this is in place, you can proceed with the original security group removal.
• EKS managed node groups roll out changes using a rolling update strategy that can be influenced through update_config. No additional changes are required for removing the security group created by node groups (unlike self-managed node groups, which should utilize the instance_refresh setting of Autoscaling groups).
    • Once create_security_group = false has been set, you can apply the changes which will:
    • Create a new launch template (version) without the EKS managed node group security group
    • Replace instances based on the update_config configuration settings
    • New instances will launch without the EKS managed node group security group, and prior instances will be terminated
    • Once the EKS managed node group has cycled, the security group will be deleted

    • Once the node group security group(s) have been removed, you can update your module definition to specify the v19.x version of the module

    • Run terraform init -upgrade=true to update your configuration and pull in the v19 changes
    • Using the documentation provided above, update your module definition to reflect the changes in the module from v18.x to v19.x. You can utilize terraform plan as you go to help highlight any changes that you wish to make. See below for terraform state mv ... commands related to the use of iam_role_additional_policies. If you are not providing any values to these variables, you can skip this section.
    • Once you are satisfied with the changes and the terraform plan output, you can apply the changes to sync your infrastructure with the updated module definition (or vice versa).
    "},{"location":"UPGRADE-19.0/#diff-of-before-v18x-vs-after-v19x","title":"Diff of Before (v18.x) vs After (v19.x)","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 18.0\"\n+  version = \"~> 19.0\"\n\n  cluster_name                    = local.name\n+ cluster_endpoint_public_access  = true\n- cluster_endpoint_private_access = true # now the default\n\n  cluster_addons = {\n-   resolve_conflicts = \"OVERWRITE\" # now the default\n+   preserve          = true\n+   most_recent       = true\n\n+   timeouts = {\n+     create = \"25m\"\n+     delete = \"10m\"\n    }\n    kube-proxy = {}\n    vpc-cni = {\n-     resolve_conflicts = \"OVERWRITE\" # now the default\n    }\n  }\n\n  # Encryption key\n  create_kms_key = true\n- cluster_encryption_config = [{\n-   resources = [\"secrets\"]\n- }]\n+ cluster_encryption_config = {\n+   resources = [\"secrets\"]\n+ }\n  kms_key_deletion_window_in_days = 7\n  enable_kms_key_rotation         = true\n\n- iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+ iam_role_additional_policies = {\n+   additional = aws_iam_policy.additional.arn\n+ }\n\n  vpc_id                   = module.vpc.vpc_id\n  subnet_ids               = module.vpc.private_subnets\n  control_plane_subnet_ids = module.vpc.intra_subnets\n\n  # Extend node-to-node security group rules\n- node_security_group_ntp_ipv4_cidr_block = [\"169.254.169.123/32\"] # now the default\n  node_security_group_additional_rules = {\n-    ingress_self_ephemeral = {\n-      description = \"Node to node ephemeral ports\"\n-      protocol    = \"tcp\"\n-      from_port   = 0\n-      to_port     = 0\n-      type        = \"ingress\"\n-      self        = true\n-    }\n-    egress_all = {\n-      description      = \"Node all egress\"\n-      protocol         = \"-1\"\n-      from_port        = 0\n-      to_port          = 0\n-      type             = \"egress\"\n-      cidr_blocks      = [\"0.0.0.0/0\"]\n-      ipv6_cidr_blocks = [\"::/0\"]\n-    }\n  }\n\n  # Self-Managed Node Group(s)\n  self_managed_node_group_defaults = {\n    vpc_security_group_ids = [aws_security_group.additional.id]\n-   iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  self_managed_node_groups = {\n    spot = {\n      instance_type = \"m5.large\"\n      instance_market_options = {\n        market_type = \"spot\"\n      }\n\n      pre_bootstrap_user_data = <<-EOT\n        echo \"foo\"\n        export FOO=bar\n      EOT\n\n      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n\n      post_bootstrap_user_data = <<-EOT\n        cd /tmp\n        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm\n        sudo systemctl enable amazon-ssm-agent\n        sudo systemctl start amazon-ssm-agent\n      EOT\n\n-     create_security_group          = true\n-     security_group_name            = \"eks-managed-node-group-complete-example\"\n-     security_group_use_name_prefix = false\n-     security_group_description     = \"EKS managed node group complete example security group\"\n-     security_group_rules = {}\n-     security_group_tags = {}\n    }\n  }\n\n  # EKS Managed Node Group(s)\n  eks_managed_node_group_defaults = {\n    ami_type       = \"AL2_x86_64\"\n    instance_types = [\"m6i.large\", \"m5.large\", \"m5n.large\", \"m5zn.large\"]\n\n    attach_cluster_primary_security_group = true\n    vpc_security_group_ids                = [aws_security_group.additional.id]\n-   
iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  eks_managed_node_groups = {\n    blue = {}\n    green = {\n      min_size     = 1\n      max_size     = 10\n      desired_size = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n      labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = {\n        dedicated = {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      }\n\n      update_config = {\n        max_unavailable_percentage = 33 # or set `max_unavailable`\n      }\n\n-     create_security_group          = true\n-     security_group_name            = \"eks-managed-node-group-complete-example\"\n-     security_group_use_name_prefix = false\n-     security_group_description     = \"EKS managed node group complete example security group\"\n-     security_group_rules = {}\n-     security_group_tags = {}\n\n      tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  # Fargate Profile(s)\n  fargate_profile_defaults = {\n-   iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  # OIDC Identity provider\n  cluster_identity_providers = {\n    cognito = {\n      client_id      = \"702vqsrjicklgb7c5b7b50i1gc\"\n      issuer_url     = \"https://cognito-idp.us-west-2.amazonaws.com/us-west-2_re1u6bpRA\"\n      username_claim = \"email\"\n      groups_claim   = \"cognito:groups\"\n      groups_prefix  = \"gid:\"\n    }\n  }\n\n  # aws-auth configmap\n  manage_aws_auth_configmap = true\n\n  aws_auth_node_iam_role_arns_non_windows = [\n    module.eks_managed_node_group.iam_role_arn,\n    module.self_managed_node_group.iam_role_arn,\n  ]\n  aws_auth_fargate_profile_pod_execution_role_arns = [\n    module.fargate_profile.fargate_profile_pod_execution_role_arn\n  ]\n\n  aws_auth_roles = [\n    {\n      rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n      username = \"role1\"\n      groups   = [\"system:masters\"]\n    },\n  ]\n\n  aws_auth_users = [\n    {\n      userarn  = \"arn:aws:iam::66666666666:user/user1\"\n      username = \"user1\"\n      groups   = [\"system:masters\"]\n    },\n    {\n      userarn  = \"arn:aws:iam::66666666666:user/user2\"\n      username = \"user2\"\n      groups   = [\"system:masters\"]\n    },\n  ]\n\n  aws_auth_accounts = [\n    \"777777777777\",\n    \"888888888888\",\n  ]\n\n  tags = local.tags\n}\n
    "},{"location":"UPGRADE-19.0/#terraform-state-moves","title":"Terraform State Moves","text":"

    The following Terraform state move commands are optional but recommended if you are providing additional IAM policies that are to be attached to IAM roles created by this module (cluster IAM role, node group IAM role, Fargate profile IAM role). Because the resources affected are aws_iam_role_policy_attachment, in theory, you could get away with simply applying the configuration and letting Terraform detach and re-attach the policies. However, during this brief period of update, you could experience permission failures as the policy is detached and re-attached, and therefore the state move route is recommended.

    Where \"<POLICY_ARN>\" is specified, this should be replaced with the full ARN of the policy, and \"<POLICY_MAP_KEY>\" should be replaced with the key used in the iam_role_additional_policies map for the associated policy. For example, if you have the followingv19.x configuration:

      ...\n  # This is demonstrating the cluster IAM role additional policies\n  iam_role_additional_policies = {\n    additional = aws_iam_policy.additional.arn\n  }\n  ...\n

    The associated state move command would look similar to (albeit with your correct policy ARN):

    terraform state mv 'module.eks.aws_iam_role_policy_attachment.this[\"arn:aws:iam::111111111111:policy/ex-complete-additional\"]' 'module.eks.aws_iam_role_policy_attachment.additional[\"additional\"]'\n

    If you are not providing any additional IAM policies, no actions are required.

    "},{"location":"UPGRADE-19.0/#cluster-iam-role","title":"Cluster IAM Role","text":"

    Repeat for each policy provided in iam_role_additional_policies:

    terraform state mv 'module.eks.aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#eks-managed-node-group-iam-role","title":"EKS Managed Node Group IAM Role","text":"

    Where \"<NODE_GROUP_KEY>\" is the key used in the eks_managed_node_groups map for the associated node group. Repeat for each policy provided in iam_role_additional_policies in either/or eks_managed_node_group_defaults or the individual node group definitions:

    terraform state mv 'module.eks.module.eks_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.eks_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#self-managed-node-group-iam-role","title":"Self-Managed Node Group IAM Role","text":"

    Where \"<NODE_GROUP_KEY>\" is the key used in the self_managed_node_groups map for the associated node group. Repeat for each policy provided in iam_role_additional_policies in either/or self_managed_node_group_defaults or the individual node group definitions:

    terraform state mv 'module.eks.module.self_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.self_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#fargate-profile-iam-role","title":"Fargate Profile IAM Role","text":"

    Where \"<FARGATE_PROFILE_KEY>\" is the key used in the fargate_profiles map for the associated profile. Repeat for each policy provided in iam_role_additional_policies in either/or fargate_profile_defaults or the individual profile definitions:

    terraform state mv 'module.eks.module.fargate_profile[\"<FARGATE_PROFILE_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.fargate_profile[\"<FARGATE_PROFILE_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-20.0/","title":"Upgrade from v19.x to v20.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    "},{"location":"UPGRADE-20.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
• Minimum supported AWS provider version increased to v5.34
    • Minimum supported Terraform version increased to v1.3 to support Terraform state moved blocks as well as other advanced features
• The resolve_conflicts argument within the cluster_addons configuration has been replaced with resolve_conflicts_on_create and resolve_conflicts_on_update now that resolve_conflicts is deprecated (see the example following this list)
• The default/fallback value for the preserve argument of cluster_addons is now set to true. This has proven useful for users deprovisioning clusters, avoiding the situation where the CNI is deleted too early and leaves orphaned resources that result in conflicts.
• The Karpenter sub-module's use of the irsa naming convention has been removed, along with an update to the Karpenter controller IAM policy to align with Karpenter's v1beta1/v0.32 changes. Instead of referring to the role as irsa or pod_identity, it is simply an IAM role used by the Karpenter controller, with support for either IRSA and/or Pod Identity (default) at this time
    • The aws-auth ConfigMap resources have been moved to a standalone sub-module. This removes the Kubernetes provider requirement from the main module and allows for the aws-auth ConfigMap to be managed independently of the main module. This sub-module will be removed entirely in the next major release.
• Support for cluster access management has been added with the default authentication mode set as API_AND_CONFIG_MAP. This is a one-way change if applied; if you wish to use CONFIG_MAP, you will need to set authentication_mode = \"CONFIG_MAP\" explicitly when upgrading.
• Karpenter EventBridge rule key spot_interrupt updated to correct misspelling (was spot_interupt). This will cause the rule to be replaced
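
For example, a minimal sketch of the new add-on arguments (the add-on and values shown are illustrative):

  cluster_addons = {\n    coredns = {\n      most_recent                 = true\n      resolve_conflicts_on_create = \"OVERWRITE\"\n      resolve_conflicts_on_update = \"PRESERVE\"\n    }\n  }\n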
    "},{"location":"UPGRADE-20.0/#upcoming-changes-planned-in-v210","title":"\u26a0\ufe0f Upcoming Changes Planned in v21.0 \u26a0\ufe0f","text":"

    To give users advanced notice and provide some future direction for this module, these are the following changes we will be looking to make in the next major release of this module:

    1. The aws-auth sub-module will be removed entirely from the project. Since this sub-module is captured in the v20.x releases, users can continue using it even after the module moves forward with the next major version. The long term strategy and direction is cluster access entry and to rely only on the AWS Terraform provider.
    2. The default value for authentication_mode will change to API. Aligning with point 1 above, this is a one way change, but users are free to specify the value of their choosing in place of this default (when the change is made). This module will proceed with an EKS API first strategy.
3. The launch template and autoscaling group usage contained within the EKS managed node group and self-managed node group sub-modules might be replaced with the terraform-aws-autoscaling module. At a minimum, it makes sense to replace most of the functionality in the self-managed node group module with this external module, but it is not yet clear whether there is any benefit to using it in the EKS managed node group sub-module. The interface that users interact with will stay the same; the changes will be internal to the implementation, and we will do everything we can to keep disruption to a minimum.
    4. The platform variable will be replaced and instead ami_type will become the standard across both self-managed node group(s) and EKS managed node group(s). As EKS expands its portfolio of supported operating systems, the ami_type is better suited to associate the correct user data format to the respective OS. The platform variable is a legacy artifact of self-managed node groups but not as descriptive as the ami_type, and therefore it will be removed in favor of ami_type.
    "},{"location":"UPGRADE-20.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-20.0/#added","title":"Added","text":"
    • A module tag has been added to the cluster control plane
• Support for cluster access entries. The bootstrap_cluster_creator_admin_permissions setting on the control plane has been hardcoded to false since this is a one-time operation at cluster creation per the EKS API. Instead, users can enable/disable enable_cluster_creator_admin_permissions at any time to achieve the same functionality. This takes the identity that Terraform is using to make API calls and maps it into a cluster admin via an access entry. For users on existing clusters, you will need to remove the default cluster administrator that was created by EKS prior to the cluster access entry APIs - see the section Removing the default cluster administrator for more details. A minimal example is shown after this list.
    • Support for specifying the CloudWatch log group class (standard or infrequent access)
    • Native support for Windows based managed node groups similar to AL2 and Bottlerocket
    • Self-managed node groups now support instance_maintenance_policy and have added max_healthy_percentage, scale_in_protected_instances, and standby_instances arguments to the instance_refresh.preferences block
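
For reference, a minimal sketch of enabling the Terraform caller as cluster admin and mapping one additional IAM principal via access entries (the account ID and role name below are hypothetical):

  enable_cluster_creator_admin_permissions = true\n\n  access_entries = {\n    platform_admin = {\n      principal_arn = \"arn:aws:iam::111111111111:role/platform-admin\"\n\n      policy_associations = {\n        cluster_admin = {\n          policy_arn = \"arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy\"\n          access_scope = {\n            type = \"cluster\"\n          }\n        }\n      }\n    }\n  }\n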
    "},{"location":"UPGRADE-20.0/#modified","title":"Modified","text":"
    • For sts:AssumeRole permissions by services, the use of dynamically looking up the DNS suffix has been replaced with the static value of amazonaws.com. This does not appear to change by partition and instead requires users to set this manually for non-commercial regions.
    • The default value for kms_key_enable_default_policy has changed from false to true to align with the default behavior of the aws_kms_key resource
• The Karpenter default value for create_instance_profile has changed from true to false to align with the changes in Karpenter v0.32; starting with v0.32.0, Karpenter accepts an IAM role and creates the EC2 instance profile used by the nodes itself
    "},{"location":"UPGRADE-20.0/#removed","title":"Removed","text":"
    • The complete example has been removed due to its redundancy with the other examples
    • References to the IRSA sub-module in the IAM repository have been removed. Once https://github.com/clowdhaus/terraform-aws-eks-pod-identity has been updated and moved into the organization, the documentation here will be updated to mention the new module.
    "},{"location":"UPGRADE-20.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

• cluster_iam_role_dns_suffix - replaced with a static string of amazonaws.com
• manage_aws_auth_configmap
• create_aws_auth_configmap
• aws_auth_node_iam_role_arns_non_windows
• aws_auth_node_iam_role_arns_windows
• aws_auth_fargate_profile_pod_execution_role_arn
• aws_auth_roles
• aws_auth_users
• aws_auth_accounts
• Karpenter
  • irsa_tag_key
  • irsa_tag_values
  • irsa_subnet_account_id
  • enable_karpenter_instance_profile_creation
2. Renamed variables:

• Karpenter

      • create_irsa -> create_iam_role
      • irsa_name -> iam_role_name
      • irsa_use_name_prefix -> iam_role_name_prefix
      • irsa_path -> iam_role_path
      • irsa_description -> iam_role_description
      • irsa_max_session_duration -> iam_role_max_session_duration
      • irsa_permissions_boundary_arn -> iam_role_permissions_boundary_arn
      • irsa_tags -> iam_role_tags
      • policies -> iam_role_policies
      • irsa_policy_name -> iam_policy_name
      • irsa_ssm_parameter_arns -> ami_id_ssm_parameter_arns
      • create_iam_role -> create_node_iam_role
      • iam_role_additional_policies -> node_iam_role_additional_policies
      • policies -> iam_role_policies
      • iam_role_arn -> node_iam_role_arn
      • iam_role_name -> node_iam_role_name
      • iam_role_name_prefix -> node_iam_role_name_prefix
      • iam_role_path -> node_iam_role_path
      • iam_role_description -> node_iam_role_description
      • iam_role_max_session_duration -> node_iam_role_max_session_duration
      • iam_role_permissions_boundary_arn -> node_iam_role_permissions_boundary_arn
      • iam_role_attach_cni_policy -> node_iam_role_attach_cni_policy
      • iam_role_additional_policies -> node_iam_role_additional_policies
      • iam_role_tags -> node_iam_role_tags
3. Added variables:

• create_access_entry
• enable_cluster_creator_admin_permissions
• authentication_mode
• access_entries
• cloudwatch_log_group_class
• Karpenter
  • iam_policy_name
  • iam_policy_use_name_prefix
  • iam_policy_description
  • iam_policy_path
  • enable_irsa
  • create_access_entry
  • access_entry_type
• Self-managed node group
  • instance_maintenance_policy
  • create_access_entry
  • iam_role_arn
4. Removed outputs:

• aws_auth_configmap_yaml

5. Renamed outputs:

• Karpenter
  • irsa_name -> iam_role_name
  • irsa_arn -> iam_role_arn
  • irsa_unique_id -> iam_role_unique_id
  • role_name -> node_iam_role_name
  • role_arn -> node_iam_role_arn
  • role_unique_id -> node_iam_role_unique_id

6. Added outputs:

• access_entries
• Karpenter
  • node_access_entry_arn
• Self-managed node group
  • access_entry_arn
    "},{"location":"UPGRADE-20.0/#upgrade-migrations","title":"Upgrade Migrations","text":""},{"location":"UPGRADE-20.0/#diff-of-before-v1921-vs-after-v200","title":"Diff of Before (v19.21) vs After (v20.0)","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 19.21\"\n+  version = \"~> 20.0\"\n\n# If you want to maintain the current default behavior of v19.x\n+  kms_key_enable_default_policy = false\n\n-   manage_aws_auth_configmap = true\n\n-   aws_auth_roles = [\n-     {\n-       rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n-       username = \"role1\"\n-       groups   = [\"custom-role-group\"]\n-     },\n-   ]\n\n-   aws_auth_users = [\n-     {\n-       userarn  = \"arn:aws:iam::66666666666:user/user1\"\n-       username = \"user1\"\n-       groups   = [\"custom-users-group\"]\n-     },\n-   ]\n}\n\n+ module \"eks_aws_auth\" {\n+   source  = \"terraform-aws-modules/eks/aws//modules/aws-auth\"\n+   version = \"~> 20.0\"\n\n+   manage_aws_auth_configmap = true\n\n+   aws_auth_roles = [\n+     {\n+       rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n+       username = \"role1\"\n+       groups   = [\"custom-role-group\"]\n+     },\n+   ]\n\n+   aws_auth_users = [\n+     {\n+       userarn  = \"arn:aws:iam::66666666666:user/user1\"\n+       username = \"user1\"\n+       groups   = [\"custom-users-group\"]\n+     },\n+   ]\n+ }\n
    "},{"location":"UPGRADE-20.0/#karpenter-diff-of-before-v1921-vs-after-v200","title":"Karpenter Diff of Before (v19.21) vs After (v20.0)","text":"
     module \"eks_karpenter\" {\n   source  = \"terraform-aws-modules/eks/aws//modules/karpenter\"\n-  version = \"~> 19.21\"\n+  version = \"~> 20.0\"\n\n# If you wish to maintain the current default behavior of v19.x\n+  enable_irsa             = true\n+  create_instance_profile = true\n\n# To avoid any resource re-creation\n+  iam_role_name          = \"KarpenterIRSA-${module.eks.cluster_name}\"\n+  iam_role_description   = \"Karpenter IAM role for service account\"\n+  iam_policy_name        = \"KarpenterIRSA-${module.eks.cluster_name}\"\n+  iam_policy_description = \"Karpenter IAM role for service account\"\n}\n
    "},{"location":"UPGRADE-20.0/#terraform-state-moves","title":"Terraform State Moves","text":""},{"location":"UPGRADE-20.0/#authentication-mode-changes","title":"\u26a0\ufe0f Authentication Mode Changes \u26a0\ufe0f","text":"

    Changing the authentication_mode is a one-way decision. See announcement blog for further details:

    Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP. You can then switch from API_AND_CONFIG_MAP to API. You cannot revert these operations in the opposite direction. Meaning you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API. And you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.

    [!IMPORTANT] If migrating to cluster access entries and you will NOT have any entries that remain in the aws-auth ConfigMap, you do not need to remove the configmap from the statefile. You can simply follow the migration guide and once access entries have been created, you can let Terraform remove/delete the aws-auth ConfigMap.

    If you WILL have entries that remain in the aws-auth ConfigMap, then you will need to remove the ConfigMap resources from the statefile to avoid any disruptions. When you add the new aws-auth sub-module and apply the changes, the sub-module will upsert the ConfigMap on the cluster. Provided the necessary entries are defined in that sub-module's definition, it will \"re-adopt\" the ConfigMap under Terraform's control.

    "},{"location":"UPGRADE-20.0/#authentication_mode-config_map","title":"authentication_mode = \"CONFIG_MAP\"","text":"

    If using authentication_mode = \"CONFIG_MAP\", before making any changes, you will first need to remove the configmap from the statefile to avoid any disruptions:

    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'\nterraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap\n

Once the configmap has been removed from the statefile, you can add the new aws-auth sub-module and copy the relevant definitions from the EKS module over to the new aws-auth sub-module definition (see the before/after diff above).

    [!CAUTION] You will need to add entries to the aws-auth sub-module for any IAM roles used by node groups and/or Fargate profiles - the module no longer handles this in the background on behalf of users.

    When you apply the changes with the new sub-module, the configmap in the cluster will get updated with the contents provided in the sub-module definition, so please be sure all of the necessary entries are added before applying the changes.
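
As a minimal sketch - assuming a self-managed node group keyed default; adjust the entries to match your own node groups and roles - the node role entry added to the aws-auth sub-module could look like:

module \"eks_aws_auth\" {\n  source  = \"terraform-aws-modules/eks/aws//modules/aws-auth\"\n  version = \"~> 20.0\"\n\n  manage_aws_auth_configmap = true\n\n  aws_auth_roles = [\n    # Node role entries must now be provided explicitly by users\n    {\n      rolearn  = module.eks.self_managed_node_groups[\"default\"].iam_role_arn\n      username = \"system:node:{{EC2PrivateDNSName}}\"\n      groups = [\n        \"system:bootstrappers\",\n        \"system:nodes\",\n      ]\n    },\n  ]\n}\n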

    "},{"location":"UPGRADE-20.0/#authentication_mode-api_and_config_map","title":"authentication_mode = \"API_AND_CONFIG_MAP\"","text":"

When using authentication_mode = \"API_AND_CONFIG_MAP\" and there are entries that will remain in the configmap (entries that cannot be replaced by cluster access entry), you will first need to update the authentication_mode on the cluster to \"API_AND_CONFIG_MAP\". To help make this upgrade process easier, a copy of the changes defined in the v20.0.0 PR has been captured here but with the aws-auth components still provided in the module. This means you get the equivalent of the v20.0.0 module, but it still includes support for the aws-auth configmap. You can follow the provided README on that interim migration module for the order of execution and return here once the authentication_mode has been updated to \"API_AND_CONFIG_MAP\". Note - EKS automatically adds access entries for the roles used by EKS managed node groups and Fargate profiles; users do not need to do anything additional for these roles.
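
For example, the interim setting on the cluster module definition would be:

  authentication_mode = \"API_AND_CONFIG_MAP\"\n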

    Once the authentication_mode has been updated, next you will need to remove the configmap from the statefile to avoid any disruptions:

    [!NOTE] This is only required if there are entries that will remain in the aws-auth ConfigMap after migrating. Otherwise, you can skip this step and let Terraform destroy the ConfigMap.

    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'\nterraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap\n
    "},{"location":"UPGRADE-20.0/#i-terraform-17-users","title":"\u2139\ufe0f Terraform 1.7+ users","text":"

If you are using Terraform v1.7+, you can utilize removed blocks to facilitate the removal of the configmap through code. You can create a fork/clone of the provided migration module, add the removed blocks, and apply those changes before proceeding. We do not want to force users onto the bleeding edge with this module, so we have not included removed block support at this time.
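
As a rough sketch only - assuming the aws-auth resource blocks have already been deleted from your fork of the migration module, and adjusting the from addresses (and where the blocks are declared) to match your own configuration and Terraform version:

removed {\n  # Illustrative address - point this at the aws-auth ConfigMap data resource in your configuration\n  from = kubernetes_config_map_v1_data.aws_auth\n\n  lifecycle {\n    destroy = false # forget from state without deleting the ConfigMap from the cluster\n  }\n}\n\nremoved {\n  from = kubernetes_config_map.aws_auth\n\n  lifecycle {\n    destroy = false\n  }\n}\n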

Once the configmap has been removed from the statefile, you can add the new aws-auth sub-module and copy the relevant definitions from the EKS module over to the new aws-auth sub-module definition (see the before/after diff above). When you apply the changes with the new sub-module, the configmap in the cluster will get updated with the contents provided in the sub-module definition, so please be sure all of the necessary entries are added before applying the changes. In the before/after example above, the configmap would remove any entries for roles used by node groups and/or Fargate Profiles, but maintain the custom entries for users and roles passed into the module definition.

    "},{"location":"UPGRADE-20.0/#authentication_mode-api","title":"authentication_mode = \"API\"","text":"

In order to switch to API only using cluster access entry, you first need to update the authentication_mode on the cluster to API_AND_CONFIG_MAP without modifying the aws-auth configmap. To help make this upgrade process easier, a copy of the changes defined in the v20.0.0 PR has been captured here but with the aws-auth components still provided in the module. This means you get the equivalent of the v20.0.0 module, but it still includes support for the aws-auth configmap. You can follow the provided README on that interim migration module for the order of execution and return here once the authentication_mode has been updated to \"API_AND_CONFIG_MAP\". Note - EKS automatically adds access entries for the roles used by EKS managed node groups and Fargate profiles; users do not need to do anything additional for these roles.

    Once the authentication_mode has been updated, you can update the authentication_mode on the cluster to API and remove the aws-auth configmap components.
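
For example, the final cluster module definition would be left with:

  # Final state once all aws-auth entries have been migrated to access entries\n  authentication_mode = \"API\"\n

At this point, any aws-auth sub-module definition added during the migration can also be removed from your configuration.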

    "},{"location":"compute_resources/","title":"Compute Resources","text":""},{"location":"compute_resources/#table-of-contents","title":"Table of Contents","text":"
    • EKS Managed Node Groups
    • Self Managed Node Groups
    • Fargate Profiles
    • Default Configurations

    \u2139\ufe0f Only the pertinent attributes are shown below for brevity

    "},{"location":"compute_resources/#eks-managed-node-groups","title":"EKS Managed Node Groups","text":"

Refer to the EKS Managed Node Group documentation for service-related details.

    1. The module creates a custom launch template by default to ensure settings such as tags are propagated to instances. Please note that many of the customization options listed here are only available when a custom launch template is created. To use the default template provided by the AWS EKS managed node group service, disable the launch template creation by setting use_custom_launch_template to false:
      eks_managed_node_groups = {\n    default = {\n      use_custom_launch_template = false\n    }\n  }\n
1. Native support for Bottlerocket OS is provided by specifying the respective AMI type:
      eks_managed_node_groups = {\n    bottlerocket_default = {\n      use_custom_launch_template = false\n\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n
    1. Bottlerocket OS is supported in a similar manner. However, note that the user data for Bottlerocket OS uses the TOML format:
      eks_managed_node_groups = {\n    bottlerocket_prepend_userdata = {\n      ami_type = \"BOTTLEROCKET_x86_64\"\n\n      bootstrap_extra_args = <<-EOT\n        # extra args added\n        [settings.kernel]\n        lockdown = \"integrity\"\n      EOT\n    }\n  }\n
    1. When using a custom AMI, the AWS EKS Managed Node Group service will NOT inject the necessary bootstrap script into the supplied user data. Users can elect to provide their own user data to bootstrap and connect or opt in to use the module provided user data:
      eks_managed_node_groups = {\n    custom_ami = {\n      ami_id = \"ami-0caf35bc73450c396\"\n\n      # By default, EKS managed node groups will not append bootstrap script;\n      # this adds it back in using the default template provided by the module\n      # Note: this assumes the AMI provided is an EKS optimized AMI derivative\n      enable_bootstrap_user_data = true\n\n      pre_bootstrap_user_data = <<-EOT\n        export FOO=bar\n      EOT\n\n      # Because we have full control over the user data supplied, we can also run additional\n      # scripts/configuration changes after the bootstrap script has been run\n      post_bootstrap_user_data = <<-EOT\n        echo \"you are free little kubelet!\"\n      EOT\n    }\n  }\n
    1. There is similar support for Bottlerocket OS:
      eks_managed_node_groups = {\n    bottlerocket_custom_ami = {\n      ami_id   = \"ami-0ff61e0bcfc81dc94\"\n      ami_type = \"BOTTLEROCKET_x86_64\"\n\n      # use module user data template to bootstrap\n      enable_bootstrap_user_data = true\n      # this will get added to the template\n      bootstrap_extra_args = <<-EOT\n        # extra args added\n        [settings.kernel]\n        lockdown = \"integrity\"\n\n        [settings.kubernetes.node-labels]\n        \"label1\" = \"foo\"\n        \"label2\" = \"bar\"\n\n        [settings.kubernetes.node-taints]\n        \"dedicated\" = \"experimental:PreferNoSchedule\"\n        \"special\" = \"true:NoSchedule\"\n      EOT\n    }\n  }\n

    See the examples/eks-managed-node-group/ example for a working example of various configurations.

    "},{"location":"compute_resources/#self-managed-node-groups","title":"Self Managed Node Groups","text":"

Refer to the Self Managed Node Group documentation for service-related details.

1. The self managed node group uses the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version by default:
  cluster_version = \"1.31\"\n\n  # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.31\n  self_managed_node_groups = {\n    default = {}\n  }\n
    1. To use Bottlerocket, specify the ami_type as one of the respective \"BOTTLEROCKET_*\" types and supply a Bottlerocket OS AMI:
      cluster_version = \"1.31\"\n\n  self_managed_node_groups = {\n    bottlerocket = {\n      ami_id   = data.aws_ami.bottlerocket_ami.id\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n

    See the examples/self-managed-node-group/ example for a working example of various configurations.

    "},{"location":"compute_resources/#fargate-profiles","title":"Fargate Profiles","text":"

    Fargate profiles are straightforward to use and therefore no further details are provided here. See the tests/fargate-profile/ tests for a working example of various configurations.

    "},{"location":"compute_resources/#default-configurations","title":"Default Configurations","text":"

Each type of compute resource (EKS managed node group, self managed node group, or Fargate profile) provides the option for users to specify a default configuration. These default configurations can be overridden from within the compute resource's individual definition. The order of precedence for configurations (from highest to lowest precedence) is:

    • Compute resource individual configuration
    • Compute resource family default configuration (eks_managed_node_group_defaults, self_managed_node_group_defaults, fargate_profile_defaults)
      • Module default configuration (see variables.tf and node_groups.tf)

    For example, the following creates 4 AWS EKS Managed Node Groups:

      eks_managed_node_group_defaults = {\n    ami_type               = \"AL2_x86_64\"\n    disk_size              = 50\n    instance_types         = [\"m6i.large\", \"m5.large\", \"m5n.large\", \"m5zn.large\"]\n  }\n\n  eks_managed_node_groups = {\n    # Uses module default configurations overridden by configuration above\n    default = {}\n\n    # This further overrides the instance types used\n    compute = {\n      instance_types = [\"c5.large\", \"c6i.large\", \"c6d.large\"]\n    }\n\n    # This further overrides the instance types and disk size used\n    persistent = {\n      disk_size = 1024\n      instance_types = [\"r5.xlarge\", \"r6i.xlarge\", \"r5b.xlarge\"]\n    }\n\n    # This overrides the OS used\n    bottlerocket = {\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n
    "},{"location":"faq/","title":"Frequently Asked Questions","text":"
    • Setting disk_size or remote_access does not make any changes
    • I received an error: expect exactly one securityGroup tagged with kubernetes.io/cluster/<NAME> ...
    • Why are nodes not being registered?
    • Why are there no changes when a node group's desired_size is modified?
    • How do I access compute resource attributes?
    • What add-ons are available?
    • What configuration values are available for an add-on?
    "},{"location":"faq/#setting-disk_size-or-remote_access-does-not-make-any-changes","title":"Setting disk_size or remote_access does not make any changes","text":"

disk_size and remote_access can only be set when using the EKS managed node group default launch template. This module defaults to providing a custom launch template to allow for custom security groups, tag propagation, etc. If you wish to forgo the custom launch template route, you can set use_custom_launch_template = false and then you can set disk_size and remote_access.
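
For example, a minimal sketch (the disk size, key pair, and security group values are hypothetical):

  eks_managed_node_groups = {\n    default = {\n      # Use the EKS service-provided launch template so these arguments take effect\n      use_custom_launch_template = false\n\n      disk_size = 100\n\n      remote_access = {\n        ec2_ssh_key               = \"my-key-pair\"\n        source_security_group_ids = [\"sg-0123456789abcdef0\"]\n      }\n    }\n  }\n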

    "},{"location":"faq/#i-received-an-error-expect-exactly-one-securitygroup-tagged-with-kubernetesioclustername","title":"I received an error: expect exactly one securityGroup tagged with kubernetes.io/cluster/<NAME> ...","text":"

    By default, EKS creates a cluster primary security group that is created outside of the module and the EKS service adds the tag { \"kubernetes.io/cluster/<CLUSTER_NAME>\" = \"owned\" }. This on its own does not cause any conflicts for addons such as the AWS Load Balancer Controller until users decide to attach both the cluster primary security group and the shared node security group created by the module (by setting attach_cluster_primary_security_group = true). The issue is not with having multiple security groups in your account with this tag key:value combination, but having multiple security groups with this tag key:value combination attached to nodes in the same cluster. There are a few ways to resolve this depending on your use case/intentions:

    \u26a0\ufe0f <CLUSTER_NAME> below needs to be replaced with the name of your cluster

    1. If you want to use the cluster primary security group, you can disable the creation of the shared node security group with:
      create_node_security_group            = false # default is true\n  attach_cluster_primary_security_group = true # default is false\n
1. By not attaching the cluster primary security group. The cluster primary security group has quite broad access; the module instead provides a security group with the minimum amount of access required to launch an empty EKS cluster successfully, and users are encouraged to open up access as necessary to support their workloads.
      attach_cluster_primary_security_group = false # this is the default for the module\n

    In theory, if you are attaching the cluster primary security group, you shouldn't need to use the shared node security group created by the module. However, this is left up to users to decide for their requirements and use case.

    If you choose to use Custom Networking, make sure to only attach the security groups matching your choice above in your ENIConfig resources. This will ensure you avoid redundant tags.

    "},{"location":"faq/#why-are-nodes-not-being-registered","title":"Why are nodes not being registered?","text":"

Nodes not being able to register with the EKS control plane is generally due to networking misconfigurations.

    1. At least one of the cluster endpoints (public or private) must be enabled.

    If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via setting cluster_endpoint_public_access_cidrs is recommended. More info regarding communication with an endpoint is available here.

    1. Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access:

    2. Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules

    3. Nodes in public subnets: ensure that nodes are launched with public IPs (enable through either the module here or your subnet setting defaults)

    Important: If you apply only the public endpoint and configure the cluster_endpoint_public_access_cidrs to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, then your nodes will fail to work correctly.

    1. The private endpoint can also be enabled by setting cluster_endpoint_private_access = true. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled.

    2. Nodes need to be able to connect to other AWS services to function (download container images, make API calls to assume roles, etc.). If for some reason you cannot enable public internet access for nodes you can add VPC endpoints to the relevant services: EC2 API, ECR API, ECR DKR and S3.

    "},{"location":"faq/#why-are-there-no-changes-when-a-node-groups-desired_size-is-modified","title":"Why are there no changes when a node group's desired_size is modified?","text":"

    The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the lifecycle block. The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly and without interference by Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.

    "},{"location":"faq/#how-do-i-access-compute-resource-attributes","title":"How do I access compute resource attributes?","text":"

    Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named eks as in module \"eks\" { ... }:

    • EKS Managed Node Group attributes
    eks_managed_role_arns = [for group in module.eks_managed_node_group : group.iam_role_arn]\n
    • Self Managed Node Group attributes
    self_managed_role_arns = [for group in module.self_managed_node_group : group.iam_role_arn]\n
    • Fargate Profile attributes
    fargate_profile_pod_execution_role_arns = [for group in module.fargate_profile : group.fargate_profile_pod_execution_role_arn]\n
    "},{"location":"faq/#what-add-ons-are-available","title":"What add-ons are available?","text":"

    The available EKS add-ons can be found here. You can also retrieve the available addons from the API using:

    aws eks describe-addon-versions --query 'addons[*].addonName'\n
    "},{"location":"faq/#what-configuration-values-are-available-for-an-add-on","title":"What configuration values are available for an add-on?","text":"

    You can retrieve the configuration value schema for a given addon using the following command:

    aws eks describe-addon-configuration --addon-name <value> --addon-version <value> --query 'configurationSchema' --output text | jq\n

    For example:

    aws eks describe-addon-configuration --addon-name coredns --addon-version v1.11.1-eksbuild.8 --query 'configurationSchema' --output text | jq\n

    Returns (at the time of writing):

    {\n  \"$ref\": \"#/definitions/Coredns\",\n  \"$schema\": \"http://json-schema.org/draft-06/schema#\",\n  \"definitions\": {\n    \"Coredns\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"affinity\": {\n          \"default\": {\n            \"affinity\": {\n              \"nodeAffinity\": {\n                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                  \"nodeSelectorTerms\": [\n                    {\n                      \"matchExpressions\": [\n                        {\n                          \"key\": \"kubernetes.io/os\",\n                          \"operator\": \"In\",\n                          \"values\": [\n                            \"linux\"\n                          ]\n                        },\n                        {\n                          \"key\": \"kubernetes.io/arch\",\n                          \"operator\": \"In\",\n                          \"values\": [\n                            \"amd64\",\n                            \"arm64\"\n                          ]\n                        }\n                      ]\n                    }\n                  ]\n                }\n              },\n              \"podAntiAffinity\": {\n                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                  {\n                    \"podAffinityTerm\": {\n                      \"labelSelector\": {\n                        \"matchExpressions\": [\n                          {\n                            \"key\": \"k8s-app\",\n                            \"operator\": \"In\",\n                            \"values\": [\n                              \"kube-dns\"\n                            ]\n                          }\n                        ]\n                      },\n                      \"topologyKey\": \"kubernetes.io/hostname\"\n                    },\n                    \"weight\": 100\n                  }\n                ]\n              }\n            }\n          },\n          \"description\": \"Affinity of the coredns pods\",\n          \"type\": [\n            \"object\",\n            \"null\"\n          ]\n        },\n        \"computeType\": {\n          \"type\": \"string\"\n        },\n        \"corefile\": {\n          \"description\": \"Entire corefile contents to use with installation\",\n          \"type\": \"string\"\n        },\n        \"nodeSelector\": {\n          \"additionalProperties\": {\n            \"type\": \"string\"\n          },\n          \"type\": \"object\"\n        },\n        \"podAnnotations\": {\n          \"properties\": {},\n          \"title\": \"The podAnnotations Schema\",\n          \"type\": \"object\"\n        },\n        \"podDisruptionBudget\": {\n          \"description\": \"podDisruptionBudget configurations\",\n          \"enabled\": {\n            \"default\": true,\n            \"description\": \"the option to enable managed PDB\",\n            \"type\": \"boolean\"\n          },\n          \"maxUnavailable\": {\n            \"anyOf\": [\n              {\n                \"pattern\": \".*%$\",\n                \"type\": \"string\"\n              },\n              {\n                \"type\": \"integer\"\n              }\n            ],\n            \"default\": 1,\n            \"description\": \"minAvailable value for managed PDB, can be either string or integer; if it's string, should end with %\"\n          },\n          \"minAvailable\": {\n            \"anyOf\": [\n              {\n                \"pattern\": \".*%$\",\n    
            \"type\": \"string\"\n              },\n              {\n                \"type\": \"integer\"\n              }\n            ],\n            \"description\": \"maxUnavailable value for managed PDB, can be either string or integer; if it's string, should end with %\"\n          },\n          \"type\": \"object\"\n        },\n        \"podLabels\": {\n          \"properties\": {},\n          \"title\": \"The podLabels Schema\",\n          \"type\": \"object\"\n        },\n        \"replicaCount\": {\n          \"type\": \"integer\"\n        },\n        \"resources\": {\n          \"$ref\": \"#/definitions/Resources\"\n        },\n        \"tolerations\": {\n          \"default\": [\n            {\n              \"key\": \"CriticalAddonsOnly\",\n              \"operator\": \"Exists\"\n            },\n            {\n              \"effect\": \"NoSchedule\",\n              \"key\": \"node-role.kubernetes.io/control-plane\"\n            }\n          ],\n          \"description\": \"Tolerations of the coredns pod\",\n          \"items\": {\n            \"type\": \"object\"\n          },\n          \"type\": \"array\"\n        },\n        \"topologySpreadConstraints\": {\n          \"description\": \"The coredns pod topology spread constraints\",\n          \"type\": \"array\"\n        }\n      },\n      \"title\": \"Coredns\",\n      \"type\": \"object\"\n    },\n    \"Limits\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"cpu\": {\n          \"type\": \"string\"\n        },\n        \"memory\": {\n          \"type\": \"string\"\n        }\n      },\n      \"title\": \"Limits\",\n      \"type\": \"object\"\n    },\n    \"Resources\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"limits\": {\n          \"$ref\": \"#/definitions/Limits\"\n        },\n        \"requests\": {\n          \"$ref\": \"#/definitions/Limits\"\n        }\n      },\n      \"title\": \"Resources\",\n      \"type\": \"object\"\n    }\n  }\n}\n

    [!NOTE] The available configuration values will vary between add-on versions, typically more configuration values will be added in later versions as functionality is enabled by EKS.

    "},{"location":"local/","title":"Local Development","text":""},{"location":"local/#documentation-site","title":"Documentation Site","text":"

To run the documentation site locally, you will need the following installed:

    • Python 3.x
    • mkdocs
    • The following pip packages for mkdocs (i.e. - pip install ...)
      • mkdocs-material
      • mkdocs-include-markdown-plugin
      • mkdocs-awesome-pages-plugin

    To run the documentation site locally, run the following command from the root of the repository:

    mkdocs serve\n

Open the documentation at the link posted in the terminal output (e.g. http://127.0.0.1:8000/terraform-aws-eks/)

    "},{"location":"network_connectivity/","title":"Network Connectivity","text":""},{"location":"network_connectivity/#cluster-endpoint","title":"Cluster Endpoint","text":""},{"location":"network_connectivity/#public-endpoint-w-restricted-cidrs","title":"Public Endpoint w/ Restricted CIDRs","text":"

When restricting the cluster's public endpoint to only the CIDRs specified by users, it is recommended that you also enable the private endpoint, or ensure that the CIDR blocks that you specify include the addresses that nodes and Fargate pods (if you use them) access the public endpoint from.

    Please refer to the AWS documentation for further information
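
For example, a minimal sketch (the CIDR shown is a hypothetical placeholder for your corporate/VPN range):

  cluster_endpoint_public_access       = true\n  cluster_endpoint_public_access_cidrs = [\"203.0.113.0/24\"]\n\n  # Recommended so nodes and Fargate pods can reach the control plane privately\n  cluster_endpoint_private_access      = true\n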

    "},{"location":"network_connectivity/#security-groups","title":"Security Groups","text":"
    • Cluster Security Group
    • This module by default creates a cluster security group (\"additional\" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This \"additional\" security group allows users to customize inbound and outbound rules via the module as they see fit
  • The default inbound/outbound rules provided by the module are derived from the AWS minimum recommendations in addition to NTP and HTTPS public internet egress rules (without these rules, the traffic shows up in VPC flow logs as rejects; the rules are used for clock sync and downloading necessary packages/updates)
      • The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster)
      • Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired
      • The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error
    • Users also have the option to supply additional, externally created security groups to the cluster as well via the cluster_additional_security_group_ids variable
    • Lastly, users are able to opt in to attaching the primary security group automatically created by the EKS service by setting attach_cluster_primary_security_group = true from the root module for the respective node group (or set it within the node group defaults). This security group is not managed by the module; it is created by the EKS service. It permits all traffic within the domain of the security group as well as all egress traffic to the internet.

    • Node Group Security Group(s)

    • Users have the option to assign their own externally created security group(s) to the node group via the vpc_security_group_ids variable

    See the example snippet below which adds additional security group rules to the cluster security group as well as the shared node security group (for node-to-node access). Users can use this extensibility to open up network access as they see fit using the security groups provided by the module:

      ...\n  # Extend cluster security group rules\n  cluster_security_group_additional_rules = {\n    egress_nodes_ephemeral_ports_tcp = {\n      description                = \"To node 1025-65535\"\n      protocol                   = \"tcp\"\n      from_port                  = 1025\n      to_port                    = 65535\n      type                       = \"egress\"\n      source_node_security_group = true\n    }\n  }\n\n  # Extend node-to-node security group rules\n  node_security_group_additional_rules = {\n    ingress_self_all = {\n      description = \"Node to node all ports/protocols\"\n      protocol    = \"-1\"\n      from_port   = 0\n      to_port     = 0\n      type        = \"ingress\"\n      self        = true\n    }\n    egress_all = {\n      description      = \"Node all egress\"\n      protocol         = \"-1\"\n      from_port        = 0\n      to_port          = 0\n      type             = \"egress\"\n      cidr_blocks      = [\"0.0.0.0/0\"]\n      ipv6_cidr_blocks = [\"::/0\"]\n    }\n  }\n  ...\n
    The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules:

    "},{"location":"user_data/","title":"User Data & Bootstrapping","text":"

Users can see the various methods of using and providing user data through the user data tests, as well as more detailed information on the design and possible configurations via the user data module itself

    "},{"location":"user_data/#summary","title":"Summary","text":"
    • AWS EKS Managed Node Groups
    • By default, any supplied user data is pre-pended to the user data supplied by the EKS Managed Node Group service
• If users supply an ami_id, the service no longer supplies user data to bootstrap nodes; users can enable enable_bootstrap_user_data and use the module provided user data template, or provide their own user data template
• For AMI types of BOTTLEROCKET_*, user data must be in TOML format
• For AMI types of WINDOWS_*, user data must be in PowerShell (PS1) script format
    • Self Managed Node Groups
    • AL2_x86_64 AMI type (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
    • BOTTLEROCKET_* AMI types -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
    • WINDOWS_* AMI types -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template

    The templates provided by the module can be found under the templates directory

    "},{"location":"user_data/#eks-managed-node-group","title":"EKS Managed Node Group","text":"

    When using an EKS managed node group, users have 2 primary routes for interacting with the bootstrap user data:

    1. If a value for ami_id is not provided, users can supply additional user data that is pre-pended before the EKS Managed Node Group bootstrap user data. You can read more about this process from the AWS supplied documentation

    2. Users can use the following variables to facilitate this process:

      pre_bootstrap_user_data = \"...\"\n
    3. If a custom AMI is used, then per the AWS documentation, users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:

1. If the AMI used is a derivative of the AWS EKS Optimized AMI, users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
      • Users can use the following variables to facilitate this process:
        enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template\npre_bootstrap_user_data    = \"...\"\nbootstrap_extra_args       = \"...\"\npost_bootstrap_user_data   = \"...\"\n
    5. If the AMI is NOT an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the templatefile() for the respective AMI type are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.
      • Users can use the following variables to facilitate this process:
        user_data_template_path  = \"./your/user_data.sh\" # user supplied bootstrap user data template\npre_bootstrap_user_data  = \"...\"\nbootstrap_extra_args     = \"...\"\npost_bootstrap_user_data = \"...\"\n
    \u2139\ufe0f When using bottlerocket, the supplied user data (TOML format) is merged in with the values supplied by EKS. Therefore, pre_bootstrap_user_data and post_bootstrap_user_data are not valid since the bottlerocket OS handles when various settings are applied. If you wish to supply additional configuration settings when using bottlerocket, supply them via the bootstrap_extra_args variable. For the AL2_* AMI types, bootstrap_extra_args are settings that will be supplied to the AWS EKS Optimized AMI bootstrap script such as kubelet extra args, etc. See the bottlerocket GitHub repository documentation for more details on what settings can be supplied via the bootstrap_extra_args variable."},{"location":"user_data/#self-managed-node-group","title":"Self Managed Node Group","text":"

    Self managed node groups require users to provide the necessary bootstrap user data. Users can elect to use the user data template provided by the module for their respective AMI type or provide their own user data template for rendering by the module.

• If the AMI used is a derivative of the AWS EKS Optimized AMI, users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
    • Users can use the following variables to facilitate this process:
      enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template\npre_bootstrap_user_data    = \"...\"\nbootstrap_extra_args       = \"...\"\npost_bootstrap_user_data   = \"...\"\n
    • If the AMI is NOT an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the templatefile() for the respective AMI type are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.
      • Users can use the following variables to facilitate this process:
        user_data_template_path  = \"./your/user_data.sh\" # user supplied bootstrap user data template\npre_bootstrap_user_data  = \"...\"\nbootstrap_extra_args     = \"...\"\npost_bootstrap_user_data = \"...\"\n
    "},{"location":"user_data/#logic-diagram","title":"Logic Diagram","text":"

    The rough flow of logic that is encapsulated within the _user_data module can be represented by the following diagram to better highlight the various manners in which user data can be populated.

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Terraform AWS EKS module","text":"

    Moar content coming soon!

    "},{"location":"UPGRADE-17.0/","title":"How to handle the terraform-aws-eks module upgrade","text":""},{"location":"UPGRADE-17.0/#upgrade-module-to-v1700-for-managed-node-groups","title":"Upgrade module to v17.0.0 for Managed Node Groups","text":"

    In this release, we decided to remove the random_pet resources in Managed Node Groups (MNG). They were used to recreate MNGs when something changed, but they were causing a lot of issues. To upgrade the module without recreating your MNGs, you will need to explicitly reuse their previous names and set them in your MNG name argument.

    1. Run terraform apply with the module version v16.2.0
    2. Get your worker group names
    ~ terraform state show 'module.eks.module.node_groups.aws_eks_node_group.workers[\"example\"]' | grep node_group_name\nnode_group_name = \"test-eks-mwIwsvui-example-sincere-squid\"\n
    3. Upgrade your module and configure your node groups to use existing names
    module \"eks\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"17.0.0\"\n\n  cluster_name    = \"test-eks-mwIwsvui\"\n  cluster_version = \"1.20\"\n  # ...\n\n  node_groups = {\n    example = {\n      name = \"test-eks-mwIwsvui-example-sincere-squid\"\n\n      # ...\n    }\n  }\n  # ...\n}\n
    4. Run terraform plan; you should see that only the random_pet resources will be destroyed
    Terraform will perform the following actions:\n\n  # module.eks.module.node_groups.random_pet.node_groups[\"example\"] will be destroyed\n  - resource \"random_pet\" \"node_groups\" {\n      - id        = \"sincere-squid\" -> null\n      - keepers   = {\n          - \"ami_type\"                  = \"AL2_x86_64\"\n          - \"capacity_type\"             = \"SPOT\"\n          - \"disk_size\"                 = \"50\"\n          - \"iam_role_arn\"              = \"arn:aws:iam::123456789123:role/test-eks-mwIwsvui20210527220853611600000009\"\n          - \"instance_types\"            = \"t3.large\"\n          - \"key_name\"                  = \"\"\n          - \"node_group_name\"           = \"test-eks-mwIwsvui-example\"\n          - \"source_security_group_ids\" = \"\"\n          - \"subnet_ids\"                = \"subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx\"\n        } -> null\n      - length    = 2 -> null\n      - separator = \"-\" -> null\n    }\n\nPlan: 0 to add, 0 to change, 1 to destroy.\n
    5. If everything looks good to you, run terraform apply

    After the first apply, we recommend creating a new node group and letting the module use node_group_name_prefix (by removing the name argument) to generate names. This avoids name collisions during node group re-creation, since the lifecycle is create_before_destroy = true.
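    For example (a hedged sketch of a v17.x configuration), a new node group added without the name argument lets the module generate one:
      node_groups = {\n  example_v2 = {\n    # no name supplied - the module generates one via node_group_name_prefix\n    instance_types = [\"t3.large\"]\n    # ...\n  }\n}\n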

    "},{"location":"UPGRADE-18.0/","title":"Upgrade from v17.x to v18.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744 where users have shared the steps/changes that have worked for their configurations to upgrade. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for all; this has proven to be a useful thread to share collective information from the broader community regarding v18.x upgrades.

    For most users, adding the following to your v17.x configuration will preserve the state of your cluster control plane when upgrading to v18.x:

    prefix_separator                   = \"\"\niam_role_name                      = $CLUSTER_NAME\ncluster_security_group_name        = $CLUSTER_NAME\ncluster_security_group_description = \"EKS cluster security group.\"\n

    This configuration assumes that create_iam_role is set to true, which is the default value.

    As the location of the Terraform state of the IAM role has been changed from 17.x to 18.x, you'll also have to move the state before running terraform apply by calling:

    terraform state mv 'module.eks.aws_iam_role.cluster[0]' 'module.eks.aws_iam_role.this[0]'\n

    See more information here

    "},{"location":"UPGRADE-18.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
    • Launch configuration support has been removed and only launch template is supported going forward. AWS is no longer adding new features back into launch configuration and their docs state: \"We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.\"
    • Support for managing the aws-auth configmap has been removed. This change also removes the dependency on the Kubernetes Terraform provider, the local dependency on aws-iam-authenticator for users, as well as the reliance on the forked http provider to wait and poll on cluster creation. To aid users in this change, an output variable aws_auth_configmap_yaml has been provided which renders the aws-auth configmap necessary to support at least the IAM roles used by the module (additional mapRoles/mapUsers definitions are to be provided by users); one way to consume this output is sketched after this list
    • Support for managing kubeconfig and its associated local_file resources have been removed; users are able to use the awscli provided aws eks update-kubeconfig --name <cluster_name> to update their local kubeconfig as necessary
    • The terminology used in the module has been modified to reflect that used by the AWS documentation.
    • AWS EKS Managed Node Group, eks_managed_node_groups, was previously referred to as simply node group, node_groups
    • Self Managed Node Group, self_managed_node_groups, was previously referred to as worker group, worker_groups
    • AWS Fargate Profile, fargate_profiles, remains unchanged in terms of naming and terminology
    • The three different node group types supported by AWS and the module have been refactored into standalone sub-modules that are both used by the root eks module as well as available for individual, standalone consumption if desired.
    • The previous node_groups sub-module is now named eks-managed-node-group and provisions a single AWS EKS Managed Node Group per sub-module definition (previous version utilized for_each to create 0 or more node groups)
      • Additional changes for the eks-managed-node-group sub-module over the previous node_groups module include:
      • Variable name changes defined in section Variable and output changes below
      • Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      • Support for nearly full control of the security group created, or provide the ID of an existing security group, has been added
      • User data has been revamped and all user data logic moved to the _user_data internal sub-module; the local userdata.sh.tpl has been removed entirely
    • The previous fargate sub-module is now named fargate-profile and provisions a single AWS EKS Fargate Profile per sub-module definition (previous version utilized for_each to create 0 or more profiles)
      • Additional changes for the fargate-profile sub-module over the previous fargate module include:
      • Variable name changes defined in section Variable and output changes below
      • Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      • Similar to the eks_managed_node_group_defaults and self_managed_node_group_defaults, a fargate_profile_defaults has been provided to allow users to control the default configurations for the Fargate profiles created
    • A sub-module for self-managed-node-group has been created and provisions a single self managed node group (autoscaling group) per sub-module definition
      • Additional changes for the self-managed-node-group sub-module over the previous node_groups variable include:
      • The underlying autoscaling group and launch template have been updated to more closely match that of the terraform-aws-autoscaling module and the features it offers
      • The previous iteration used a count over a list of node group definitions which was prone to disruptive updates; this is now replaced with a map/for_each to align with that of the EKS managed node group and Fargate profile behaviors/style
    • The user data configuration supported across the module has been completely revamped. A new _user_data internal sub-module has been created to consolidate all user data configuration in one location which provides better support for testability (via the tests/user-data example). The new sub-module supports nearly all possible combinations including the ability to allow users to provide their own user data template which will be rendered by the module. See the tests/user-data example project for the full plethora of example configuration possibilities and more details on the logic of the design can be found in the modules/_user_data directory.
    • Resource name changes may cause issues with existing resources. For example, security groups and IAM roles cannot be renamed; they must be recreated. Recreation of these resources may also trigger a recreation of the cluster. To use the legacy (< 18.x) resource naming convention, set prefix_separator to \"\".
    • Security group usage has been overhauled to provide only the bare minimum network connectivity required to launch a bare bones cluster. See the security group documentation section for more details. Users upgrading to v18.x will want to review the rules they have in place today versus the rules provisioned by the v18.x module and ensure to make any necessary adjustments for their specific workload.
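    One hedged way to consume the aws_auth_configmap_yaml output mentioned above is to render it to a file for out-of-band application; this sketch assumes the local_file resource from the hashicorp/local provider:
      resource \"local_file\" \"aws_auth\" {\n  content         = module.eks.aws_auth_configmap_yaml\n  filename        = \"${path.module}/aws-auth.yaml\"\n  file_permission = \"0600\"\n}\n
    The rendered file can then be applied manually, e.g. kubectl apply -f aws-auth.yaml.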
    "},{"location":"UPGRADE-18.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-18.0/#added","title":"Added","text":"
    • Support for AWS EKS Addons has been added
    • Support for AWS EKS Cluster Identity Provider Configuration has been added
    • AWS Terraform provider minimum required version has been updated to 3.64 to support the changes made and additional resources supported
    • An example user_data project has been added to aid in demonstrating, testing, and validating the various methods of configuring user data with the _user_data sub-module as well as the root eks module
    • Template for rendering the aws-auth configmap output - aws_auth_cm.tpl
    • Template for Bottlerocket OS user data bootstrapping - bottlerocket_user_data.tpl
    "},{"location":"UPGRADE-18.0/#modified","title":"Modified","text":"
    • The previous fargate example has been renamed to fargate_profile
    • The previous irsa and instance_refresh examples have been merged into one example irsa_autoscale_refresh
    • The previous managed_node_groups example has been renamed to self_managed_node_group
    • The previously hardcoded EKS OIDC root CA thumbprint value and variable have been replaced with a tls_certificate data source that refers to the cluster OIDC issuer URL. Thumbprint values should remain unchanged, however
    • Individual cluster security group resources have been replaced with a single security group resource that takes a map of rules as input. The default ingress/egress rules have had their scope reduced in order to provide the bare minimum of access to permit successful cluster creation and allow users to opt in to any additional network access as needed for a better security posture. This means the 0.0.0.0/0 egress rule has been removed; instead, TCP/443 and TCP/10250 egress rules to the node group security group are used
    • The Linux/bash user data template has been updated to include the bare minimum necessary for bootstrapping AWS EKS Optimized AMI derivative nodes with provisions for providing additional user data and configurations; was named userdata.sh.tpl and is now named linux_user_data.tpl
    • The Windows user data template has been renamed from userdata_windows.tpl to windows_user_data.tpl
    "},{"location":"UPGRADE-18.0/#removed","title":"Removed","text":"
    • Miscellaneous documents on how to configure Kubernetes cluster internals have been removed. Documentation related to how to configure the AWS EKS Cluster and its supported infrastructure resources provided by the module are supported, while cluster internal configuration is out of scope for this project
    • The previous bottlerocket example has been removed in favor of demonstrating the use and configuration of Bottlerocket nodes via the respective eks_managed_node_group and self_managed_node_group examples
    • The previous launch_template and launch_templates_with_managed_node_groups examples have been removed; only launch templates are now supported (default) and launch configuration support has been removed
    • The previous secrets_encryption example has been removed; the functionality has been demonstrated in several of the new examples rendering this standalone example redundant
    • The additional, custom IAM role policy for the cluster role has been removed. The permissions are either now provided in the attached managed AWS permission policies used or are no longer required
    • The kubeconfig.tpl template; kubeconfig management is no longer supported under this module
    • The HTTP Terraform provider (forked copy) dependency has been removed
    "},{"location":"UPGRADE-18.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

      • cluster_create_timeout, cluster_update_timeout, and cluster_delete_timeout have been replaced with cluster_timeouts (see the sketch after this list)
      • kubeconfig_name
      • kubeconfig_output_path
      • kubeconfig_file_permission
      • kubeconfig_api_version
      • kubeconfig_aws_authenticator_command
      • kubeconfig_aws_authenticator_command_args
      • kubeconfig_aws_authenticator_additional_args
      • kubeconfig_aws_authenticator_env_variables
      • write_kubeconfig
      • default_platform
      • manage_aws_auth
      • aws_auth_additional_labels
      • map_accounts
      • map_roles
      • map_users
      • fargate_subnets
      • worker_groups_launch_template
      • worker_security_group_id
      • worker_ami_name_filter
      • worker_ami_name_filter_windows
      • worker_ami_owner_id
      • worker_ami_owner_id_windows
      • worker_additional_security_group_ids
      • worker_sg_ingress_from_port
      • workers_additional_policies
      • worker_create_security_group
      • worker_create_initial_lifecycle_hooks
      • worker_create_cluster_primary_security_group_rules
      • cluster_create_endpoint_private_access_sg_rule
      • cluster_endpoint_private_access_cidrs
      • cluster_endpoint_private_access_sg
      • manage_worker_iam_resources
      • workers_role_name
      • attach_worker_cni_policy
      • eks_oidc_root_ca_thumbprint
      • create_fargate_pod_execution_role
      • fargate_pod_execution_role_name
      • cluster_egress_cidrs
      • workers_egress_cidrs
      • wait_for_cluster_timeout
      • EKS Managed Node Group sub-module (was node_groups)
      • default_iam_role_arn
      • workers_group_defaults
      • worker_security_group_id
      • node_groups_defaults
      • node_groups
      • ebs_optimized_not_supported
      • Fargate profile sub-module (was fargate)
      • create_eks and create_fargate_pod_execution_role have been replaced with simply create
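    For reference, a hedged sketch of the replacement cluster_timeouts variable (key names assumed to mirror the create/update/delete timeouts it replaces):
      cluster_timeouts = {\n  create = \"30m\"\n  update = \"60m\"\n  delete = \"15m\"\n}\n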
    2. Renamed variables:

      • create_eks -> create
      • subnets -> subnet_ids
      • cluster_create_security_group -> create_cluster_security_group
      • cluster_log_retention_in_days -> cloudwatch_log_group_retention_in_days
      • cluster_log_kms_key_id -> cloudwatch_log_group_kms_key_id
      • manage_cluster_iam_resources -> create_iam_role
      • cluster_iam_role_name -> iam_role_name
      • permissions_boundary -> iam_role_permissions_boundary
      • iam_path -> iam_role_path
      • pre_userdata -> pre_bootstrap_user_data
      • additional_userdata -> post_bootstrap_user_data
      • worker_groups -> self_managed_node_groups
      • workers_group_defaults -> self_managed_node_group_defaults
      • node_groups -> eks_managed_node_groups
      • node_groups_defaults -> eks_managed_node_group_defaults
      • EKS Managed Node Group sub-module (was node_groups)
      • create_eks -> create
      • worker_additional_security_group_ids -> vpc_security_group_ids
      • Fargate profile sub-module
      • fargate_pod_execution_role_name -> name
      • create_fargate_pod_execution_role -> create_iam_role
      • subnets -> subnet_ids
      • iam_path -> iam_role_path
      • permissions_boundary -> iam_role_permissions_boundary
    3. Added variables:

      • cluster_additional_security_group_ids added to allow users to add additional security groups to the cluster as needed
      • cluster_security_group_name
      • cluster_security_group_use_name_prefix added to allow users to use either the name as specified or default to using the name specified as a prefix
      • cluster_security_group_description
      • cluster_security_group_additional_rules
      • cluster_security_group_tags
      • create_cloudwatch_log_group added in place of the logic that checked if any cluster log types were enabled to allow users to opt in as they see fit
      • create_node_security_group added to create single security group that connects node groups and cluster in central location
      • node_security_group_id
      • node_security_group_name
      • node_security_group_use_name_prefix
      • node_security_group_description
      • node_security_group_additional_rules
      • node_security_group_tags
      • iam_role_arn
      • iam_role_use_name_prefix
      • iam_role_description
      • iam_role_additional_policies
      • iam_role_tags
      • cluster_addons
      • cluster_identity_providers
      • fargate_profile_defaults
      • prefix_separator added to support legacy behavior of not having a prefix separator
      • EKS Managed Node Group sub-module (was node_groups)
      • platform
      • enable_bootstrap_user_data
      • pre_bootstrap_user_data
      • post_bootstrap_user_data
      • bootstrap_extra_args
      • user_data_template_path
      • create_launch_template
      • launch_template_name
      • launch_template_use_name_prefix
      • description
      • ebs_optimized
      • ami_id
      • key_name
      • launch_template_default_version
      • update_launch_template_default_version
      • disable_api_termination
      • kernel_id
      • ram_disk_id
      • block_device_mappings
      • capacity_reservation_specification
      • cpu_options
      • credit_specification
      • elastic_gpu_specifications
      • elastic_inference_accelerator
      • enclave_options
      • instance_market_options
      • license_specifications
      • metadata_options
      • enable_monitoring
      • network_interfaces
      • placement
      • min_size
      • max_size
      • desired_size
      • use_name_prefix
      • ami_type
      • ami_release_version
      • capacity_type
      • disk_size
      • force_update_version
      • instance_types
      • labels
      • cluster_version
      • launch_template_version
      • remote_access
      • taints
      • update_config
      • timeouts
      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • vpc_id
      • security_group_rules
      • cluster_security_group_id
      • security_group_tags
      • create_iam_role
      • iam_role_arn
      • iam_role_name
      • iam_role_use_name_prefix
      • iam_role_path
      • iam_role_description
      • iam_role_permissions_boundary
      • iam_role_additional_policies
      • iam_role_tags
      • Fargate profile sub-module (was fargate)
      • iam_role_arn (for if create_iam_role is false to bring your own externally created role)
      • iam_role_name
      • iam_role_use_name_prefix
      • iam_role_description
      • iam_role_additional_policies
      • iam_role_tags
      • selectors
      • timeouts
    4. Removed outputs:

      • cluster_version
      • kubeconfig
      • kubeconfig_filename
      • workers_asg_arns
      • workers_asg_names
      • workers_user_data
      • workers_default_ami_id
      • workers_default_ami_id_windows
      • workers_launch_template_ids
      • workers_launch_template_arns
      • workers_launch_template_latest_versions
      • worker_security_group_id
      • worker_iam_instance_profile_arns
      • worker_iam_instance_profile_names
      • worker_iam_role_name
      • worker_iam_role_arn
      • fargate_profile_ids
      • fargate_profile_arns
      • fargate_iam_role_name
      • fargate_iam_role_arn
      • node_groups
      • security_group_rule_cluster_https_worker_ingress
      • EKS Managed Node Group sub-module (was node_groups)
      • node_groups
      • aws_auth_roles
      • Fargate profile sub-module (was fargate)
      • aws_auth_roles
    5. Renamed outputs:

      • config_map_aws_auth -> aws_auth_configmap_yaml
      • Fargate profile sub-module (was fargate)
      • fargate_profile_ids -> fargate_profile_id
      • fargate_profile_arns -> fargate_profile_arn
    6. Added outputs:

      • cluster_platform_version
      • cluster_status
      • cluster_security_group_arn
      • cluster_security_group_id
      • node_security_group_arn
      • node_security_group_id
      • cluster_iam_role_unique_id
      • cluster_addons
      • cluster_identity_providers
      • fargate_profiles
      • eks_managed_node_groups
      • self_managed_node_groups
      • EKS Managed Node Group sub-module (was node_groups)
      • launch_template_id
      • launch_template_arn
      • launch_template_latest_version
      • node_group_arn
      • node_group_id
      • node_group_resources
      • node_group_status
      • security_group_arn
      • security_group_id
      • iam_role_name
      • iam_role_arn
      • iam_role_unique_id
      • Fargate profile sub-module (was fargate)
      • iam_role_unique_id
      • fargate_profile_status
    "},{"location":"UPGRADE-18.0/#upgrade-migrations","title":"Upgrade Migrations","text":""},{"location":"UPGRADE-18.0/#before-17x-example","title":"Before 17.x Example","text":"
    module \"eks\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"~> 17.0\"\n\n  cluster_name                    = local.name\n  cluster_version                 = local.cluster_version\n  cluster_endpoint_private_access = true\n  cluster_endpoint_public_access  = true\n\n  vpc_id  = module.vpc.vpc_id\n  subnets = module.vpc.private_subnets\n\n  # Managed Node Groups\n  node_groups_defaults = {\n    ami_type  = \"AL2_x86_64\"\n    disk_size = 50\n  }\n\n  node_groups = {\n    node_group = {\n      min_capacity     = 1\n      max_capacity     = 10\n      desired_capacity = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n\n      update_config = {\n        max_unavailable_percentage = 50\n      }\n\n      k8s_labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = [\n        {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      ]\n\n      additional_tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  # Worker groups\n  worker_additional_security_group_ids = [aws_security_group.additional.id]\n\n  worker_groups_launch_template = [\n    {\n      name                    = \"worker-group\"\n      override_instance_types = [\"m5.large\", \"m5a.large\", \"m5d.large\", \"m5ad.large\"]\n      spot_instance_pools     = 4\n      asg_max_size            = 5\n      asg_desired_capacity    = 2\n      kubelet_extra_args      = \"--node-labels=node.kubernetes.io/lifecycle=spot\"\n      public_ip               = true\n    },\n  ]\n\n  # Fargate\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  tags = {\n    Environment = \"test\"\n    GithubRepo  = \"terraform-aws-eks\"\n    GithubOrg   = \"terraform-aws-modules\"\n  }\n}\n
    "},{"location":"UPGRADE-18.0/#after-18x-example","title":"After 18.x Example","text":"
    module \"cluster_after\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"~> 18.0\"\n\n  cluster_name                    = local.name\n  cluster_version                 = local.cluster_version\n  cluster_endpoint_private_access = true\n  cluster_endpoint_public_access  = true\n\n  vpc_id     = module.vpc.vpc_id\n  subnet_ids = module.vpc.private_subnets\n\n  eks_managed_node_group_defaults = {\n    ami_type  = \"AL2_x86_64\"\n    disk_size = 50\n  }\n\n  eks_managed_node_groups = {\n    node_group = {\n      min_size     = 1\n      max_size     = 10\n      desired_size = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n\n      update_config = {\n        max_unavailable_percentage = 50\n      }\n\n      labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = [\n        {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      ]\n\n      tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  self_managed_node_group_defaults = {\n    vpc_security_group_ids = [aws_security_group.additional.id]\n  }\n\n  self_managed_node_groups = {\n    worker_group = {\n      name = \"worker-group\"\n\n      min_size      = 1\n      max_size      = 5\n      desired_size  = 2\n      instance_type = \"m4.large\"\n\n      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n\n      block_device_mappings = {\n        xvda = {\n          device_name = \"/dev/xvda\"\n          ebs = {\n            delete_on_termination = true\n            encrypted             = false\n            volume_size           = 100\n            volume_type           = \"gp2\"\n          }\n\n        }\n      }\n\n      use_mixed_instances_policy = true\n      mixed_instances_policy = {\n        instances_distribution = {\n          spot_instance_pools = 4\n        }\n\n        override = [\n          { instance_type = \"m5.large\" },\n          { instance_type = \"m5a.large\" },\n          { instance_type = \"m5d.large\" },\n          { instance_type = \"m5ad.large\" },\n        ]\n      }\n    }\n  }\n\n  # Fargate\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  tags = {\n    Environment = \"test\"\n    GithubRepo  = \"terraform-aws-eks\"\n    GithubOrg   = \"terraform-aws-modules\"\n  }\n}\n
    "},{"location":"UPGRADE-18.0/#diff-of-before-after","title":"Diff of before <> after","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 17.0\"\n+  version = \"~> 18.0\"\n\n   cluster_name                    = local.name\n   cluster_version                 = local.cluster_version\n   cluster_endpoint_private_access = true\n   cluster_endpoint_public_access  = true\n\n   vpc_id  = module.vpc.vpc_id\n-  subnets = module.vpc.private_subnets\n+  subnet_ids = module.vpc.private_subnets\n\n-  # Managed Node Groups\n-  node_groups_defaults = {\n+  eks_managed_node_group_defaults = {\n     ami_type  = \"AL2_x86_64\"\n     disk_size = 50\n   }\n\n-  node_groups = {\n+  eks_managed_node_groups = {\n     node_group = {\n-      min_capacity     = 1\n-      max_capacity     = 10\n-      desired_capacity = 1\n+      min_size     = 1\n+      max_size     = 10\n+      desired_size = 1\n\n       instance_types = [\"t3.large\"]\n       capacity_type  = \"SPOT\"\n\n       update_config = {\n         max_unavailable_percentage = 50\n       }\n\n-      k8s_labels = {\n+      labels = {\n         Environment = \"test\"\n         GithubRepo  = \"terraform-aws-eks\"\n         GithubOrg   = \"terraform-aws-modules\"\n       }\n\n       taints = [\n         {\n           key    = \"dedicated\"\n           value  = \"gpuGroup\"\n           effect = \"NO_SCHEDULE\"\n         }\n       ]\n\n-      additional_tags = {\n+      tags = {\n         ExtraTag = \"example\"\n       }\n     }\n   }\n\n-  # Worker groups\n-  worker_additional_security_group_ids = [aws_security_group.additional.id]\n-\n-  worker_groups_launch_template = [\n-    {\n-      name                    = \"worker-group\"\n-      override_instance_types = [\"m5.large\", \"m5a.large\", \"m5d.large\", \"m5ad.large\"]\n-      spot_instance_pools     = 4\n-      asg_max_size            = 5\n-      asg_desired_capacity    = 2\n-      kubelet_extra_args      = \"--node-labels=node.kubernetes.io/lifecycle=spot\"\n-      public_ip               = true\n-    },\n-  ]\n+  self_managed_node_group_defaults = {\n+    vpc_security_group_ids = [aws_security_group.additional.id]\n+  }\n+\n+  self_managed_node_groups = {\n+    worker_group = {\n+      name = \"worker-group\"\n+\n+      min_size      = 1\n+      max_size      = 5\n+      desired_size  = 2\n+      instance_type = \"m4.large\"\n+\n+      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n+\n+      block_device_mappings = {\n+        xvda = {\n+          device_name = \"/dev/xvda\"\n+          ebs = {\n+            delete_on_termination = true\n+            encrypted             = false\n+            volume_size           = 100\n+            volume_type           = \"gp2\"\n+          }\n+\n+        }\n+      }\n+\n+      use_mixed_instances_policy = true\n+      mixed_instances_policy = {\n+        instances_distribution = {\n+          spot_instance_pools = 4\n+        }\n+\n+        override = [\n+          { instance_type = \"m5.large\" },\n+          { instance_type = \"m5a.large\" },\n+          { instance_type = \"m5d.large\" },\n+          { instance_type = \"m5ad.large\" },\n+        ]\n+      }\n+    }\n+  }\n\n   # Fargate\n   fargate_profiles = {\n     default = {\n       name = \"default\"\n       selectors = [\n         {\n           namespace = \"kube-system\"\n           labels = {\n             k8s-app = \"kube-dns\"\n           }\n         },\n         {\n           namespace = \"default\"\n         }\n       ]\n\n       tags = {\n         Owner = \"test\"\n       }\n\n       timeouts = {\n 
        create = \"20m\"\n         delete = \"20m\"\n       }\n     }\n   }\n\n   tags = {\n     Environment = \"test\"\n     GithubRepo  = \"terraform-aws-eks\"\n     GithubOrg   = \"terraform-aws-modules\"\n   }\n }\n
    "},{"location":"UPGRADE-18.0/#attaching-an-iam-role-policy-to-a-fargate-profile","title":"Attaching an IAM role policy to a Fargate profile","text":""},{"location":"UPGRADE-18.0/#before-17x","title":"Before 17.x","text":"
    resource \"aws_iam_role_policy_attachment\" \"default\" {\n  role       = module.eks.fargate_iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n
    "},{"location":"UPGRADE-18.0/#after-18x","title":"After 18.x","text":"
    # Attach the policy to an \"example\" Fargate profile\nresource \"aws_iam_role_policy_attachment\" \"default\" {\n  role       = module.eks.fargate_profiles[\"example\"].iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n

    Or:

    # Attach the policy to all Fargate profiles\nresource \"aws_iam_role_policy_attachment\" \"default\" {\n  for_each = module.eks.fargate_profiles\n\n  role       = each.value.iam_role_name\n  policy_arn = aws_iam_policy.default.arn\n}\n
    "},{"location":"UPGRADE-19.0/","title":"Upgrade from v18.x to v19.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    "},{"location":"UPGRADE-19.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
    • The cluster_id output used to output the name of the cluster. This is because the cluster name is a unique constraint and therefore it is set as the unique identifier within Terraform's state map. However, starting with local EKS clusters created on Outposts, there is now an attribute returned from the aws eks create-cluster API named id. The cluster_id output has been updated to return this value, which means that for current, standard EKS clusters created in the AWS cloud, no value will be returned (at the time of this writing) for cluster_id, and only local EKS clusters on Outposts will return a value that looks like a UUID/GUID. Users should switch all instances of cluster_id to use cluster_name before upgrading to v19 (see the sketch after this list). Reference
    • Minimum supported version of Terraform AWS provider updated to v4.45 to support the latest features provided via the resources utilized.
    • Minimum supported version of Terraform updated to v1.0
    • The individual security group created per EKS managed node group or self-managed node group has been removed. This configuration went mostly unused and would often cause confusion (\"Why is there an empty security group attached to my nodes?\"). This functionality can easily be replicated by users providing one or more externally created security groups to attach to nodes launched from the node group.
    • Previously, var.iam_role_additional_policies (one for each of the following: cluster IAM role, EKS managed node group IAM role, self-managed node group IAM role, and Fargate Profile IAM role) accepted a list of strings. This worked well for policies that already existed but failed for policies being created at the same time as the cluster due to the well-known issue of unknown values used in a for_each loop. To rectify this issue in v19.x, two changes were made:
    • var.iam_role_additional_policies was changed from type list(string) to type map(string) -> this is a breaking change. More information on managing this change can be found below, under Terraform State Moves
    • The logic used in the root module for this variable was changed to replace the use of try() with lookup(). More details on why can be found here
    • The cluster name has been removed from the Karpenter module event rule names. Due to the use of long cluster names appending to the provided naming scheme, the cluster name has moved to a ClusterName tag and the event rule name is now a prefix. This guarantees that users can have multiple instances of Karpenter with their respective event rules/SQS queue without name collisions, while also still being able to identify which queues and event rules belong to which cluster.
    • The new variable node_security_group_enable_recommended_rules is set to true by default and may conflict with any custom ingress/egress rules. Please ensure that any duplicates from the node_security_group_additional_rules are removed before upgrading, or set node_security_group_enable_recommended_rules to false. Reference
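    A sketch of the cluster_id to cluster_name reference change described above (the output name is illustrative):
      # Before (v18.x)\noutput \"eks_cluster_name\" {\n  value = module.eks.cluster_id\n}\n\n# After (v19.x)\noutput \"eks_cluster_name\" {\n  value = module.eks.cluster_name\n}\n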
    "},{"location":"UPGRADE-19.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-19.0/#added","title":"Added","text":"
    • Support for setting preserve as well as most_recent on addons (a minimal sketch follows this list).
    • preserve indicates if you want to preserve the created resources when deleting the EKS add-on
    • most_recent indicates if you want to use the most recent revision of the add-on or the default version (default)
    • Support for setting default node security group rules for common access patterns required:
    • Egress all for 0.0.0.0/0/::/0
    • Ingress from cluster security group for 8443/TCP and 9443/TCP for common applications such as ALB Ingress Controller, Karpenter, OPA Gatekeeper, etc. These are commonly used as webhook ports for validating and mutating webhooks
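    A minimal sketch of these add-on settings (coredns is used purely as an example add-on):
      cluster_addons = {\n  coredns = {\n    preserve    = true # preserve the created resources when deleting the EKS add-on\n    most_recent = true # use the most recent revision rather than the default version\n  }\n}\n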
    "},{"location":"UPGRADE-19.0/#modified","title":"Modified","text":"
    • cluster_security_group_additional_rules and node_security_group_additional_rules have been modified to use lookup() instead of try() to avoid the well-known issue of unknown values within a for_each loop
    • Default cluster security group rules have removed egress rules for TCP/443 and TCP/10250 to node groups since the cluster primary security group includes a default rule for ALL to 0.0.0.0/0/::/0
    • Default node security group egress rules have been removed since the default security group settings include an egress rule for ALL to 0.0.0.0/0/::/0
    • block_device_mappings previously required a map of maps but has since changed to an array of maps. Users can remove the outer key for each block device mapping and replace the outermost map {} with an array [] (a before/after sketch is shown after this list). There are no state changes required for this change.
    • create_kms_key previously defaulted to false and now defaults to true. Clusters created with this module now enable secret encryption by default with a customer-managed KMS key created by this module
    • cluster_encryption_config previously used a type of list(any) and now uses a type of any -> users can simply remove the outer [...] brackets on v19.x
    • cluster_encryption_config previously defaulted to [] and now defaults to {resources = [\"secrets\"]} to encrypt secrets by default
    • cluster_endpoint_public_access previously defaulted to true and now defaults to false. Clusters created with this module now default to private-only access to the cluster endpoint
    • cluster_endpoint_private_access previously defaulted to false and now defaults to true
    • The addon configuration now sets \"OVERWRITE\" as the default value for resolve_conflicts to ease add-on upgrade management. Users can opt out of this by instead setting \"NONE\" as the value for resolve_conflicts
    • The kms module used has been updated from v1.0.2 to v1.1.0 - no material changes other than updated to latest
    • The default value for EKS managed node group update_config has been updated to the recommended { max_unavailable_percentage = 33 }
    • The default value for the self-managed node group instance_refresh has been updated to the recommended:
      {\n  strategy = \"Rolling\"\n  preferences = {\n    min_healthy_percentage = 66\n  }\n}\n
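    To illustrate the block_device_mappings change described earlier in this list (values carried over from the v18.x example above; a sketch, not a definitive migration):
      # Before (v18.x) - map of maps\nblock_device_mappings = {\n  xvda = {\n    device_name = \"/dev/xvda\"\n    ebs = {\n      volume_size = 100\n      volume_type = \"gp2\"\n    }\n  }\n}\n\n# After (v19.x) - array of maps, outer key removed\nblock_device_mappings = [\n  {\n    device_name = \"/dev/xvda\"\n    ebs = {\n      volume_size = 100\n      volume_type = \"gp2\"\n    }\n  }\n]\n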
    "},{"location":"UPGRADE-19.0/#removed","title":"Removed","text":"
    • Remove all references of aws_default_tags to avoid update conflicts; this is the responsibility of the provider and should be handled at the provider level
    • https://github.com/terraform-aws-modules/terraform-aws-eks/issues?q=is%3Aissue+default_tags+is%3Aclosed
    • https://github.com/terraform-aws-modules/terraform-aws-eks/pulls?q=is%3Apr+default_tags+is%3Aclosed
    "},{"location":"UPGRADE-19.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

      • node_security_group_ntp_ipv4_cidr_block - default security group settings have an egress rule for ALL to 0.0.0.0/0/::/0
      • node_security_group_ntp_ipv6_cidr_block - default security group settings have an egress rule for ALL to 0.0.0.0/0/::/0
      • Self-managed node groups:
      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • security_group_rules
      • security_group_tags
      • cluster_security_group_id
      • vpc_id
      • EKS managed node groups:

      • create_security_group
      • security_group_name
      • security_group_use_name_prefix
      • security_group_description
      • security_group_rules
      • security_group_tags
      • cluster_security_group_id
      • vpc_id
    2. Renamed variables:

      • N/A

    3. Added variables:

      • provision_on_outpost for Outposts support
      • outpost_config for Outposts support
      • cluster_addons_timeouts for setting a common set of timeouts for all addons (unless a specific value is provided within the addon configuration)
      • service_ipv6_cidr for setting the IPv6 CIDR block for the Kubernetes service addresses
      • node_security_group_enable_recommended_rules for enabling recommended node security group rules for common access patterns
      • Self-managed node groups:

      • launch_template_id for use when using an existing/externally created launch template (Ref: https://github.com/terraform-aws-modules/terraform-aws-autoscaling/pull/204)
      • maintenance_options
      • private_dns_name_options
      • instance_requirements
      • context
      • default_instance_warmup
      • force_delete_warm_pool
      • EKS managed node groups:
      • use_custom_launch_template was added to better clarify how users can switch between a custom launch template or the default launch template provided by the EKS managed node group. Previously, to achieve the same functionality of using the default launch template, users needed to set create_launch_template = false and launch_template_name = \"\", which was not very intuitive (a before/after sketch follows this list).
      • launch_template_id for use when using an existing/externally created launch template (Ref: https://github.com/terraform-aws-modules/terraform-aws-autoscaling/pull/204)
      • maintenance_options
      • private_dns_name_options
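    A before/after sketch of the use_custom_launch_template change described above, within an EKS managed node group definition (values assumed):
      # Before (v18.x) - using the default launch template provided by the EKS managed node group\ncreate_launch_template = false\nlaunch_template_name   = \"\"\n\n# After (v19.x)\nuse_custom_launch_template = false\n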
    4. Removed outputs:

      • Self-managed node groups:
      • security_group_arn
      • security_group_id
      • EKS managed node groups:
      • security_group_arn
      • security_group_id
    5. Renamed outputs:

      • cluster_id is not renamed but the value it returns is now different. For standard EKS clusters created in the AWS cloud, the value returned at the time of this writing is null/empty. For local EKS clusters created on Outposts, the value returned will look like a UUID/GUID. Users should switch all instances of cluster_id to use cluster_name before upgrading to v19. Reference
    6. Added outputs:

      • cluster_name - The cluster_id currently set by the AWS provider is actually the cluster name, but in the future, this will change and there will be a distinction between the cluster_name and cluster_id. Reference

    "},{"location":"UPGRADE-19.0/#upgrade-migrations","title":"Upgrade Migrations","text":"
    1. Before upgrading your module definition to v19.x, please review the sections below for both EKS managed node groups and self-managed node groups and remove the node group security group(s) prior to upgrading.
    "},{"location":"UPGRADE-19.0/#self-managed-node-groups","title":"Self-Managed Node Groups","text":"

    Self-managed node groups on v18.x by default create a security group that does not specify any rules. In v19.x, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version v18.x of your module definition, remove this security group from your node groups by setting create_security_group = false.

    • If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in vpc_security_group_ids. Once this is in place, you can proceed with the original security group removal.
    • For most users, the security group is not used and can be safely removed. However, deployed instances will have the security group attached to nodes and require the security group to be disassociated before the security group can be deleted. Because instances are deployed via autoscaling groups, we cannot simply remove the security group from the code and have those changes reflected on the instances. Instead, we have to update the code and then trigger the autoscaling groups to cycle the instances deployed so that new instances are provisioned without the security group attached. You can utilize the instance_refresh parameter of Autoscaling groups to force nodes to re-deploy when removing the security group since changes to launch templates automatically trigger an instance refresh. An example configuration is provided below.
    • Add the following to either/or self_managed_node_group_defaults or the individual self-managed node group definitions:
      create_security_group = false\ninstance_refresh = {\n  strategy = \"Rolling\"\n  preferences = {\n    min_healthy_percentage = 66\n  }\n}\n
    • It is recommended to use the aws-node-termination-handler while performing this update. Please refer to the irsa-autoscale-refresh example for usage. This will ensure that pods are safely evicted in a controlled manner to avoid service disruptions.
    • Once the necessary configurations are in place, you can apply the changes which will:
    • Create a new launch template (version) without the self-managed node group security group
    • Replace instances based on the instance_refresh configuration settings
    • New instances will launch without the self-managed node group security group, and prior instances will be terminated
    • Once the self-managed node group has cycled, the security group will be deleted
    "},{"location":"UPGRADE-19.0/#eks-managed-node-groups","title":"EKS Managed Node Groups","text":"

    EKS managed node groups on v18.x by default create a security group that does not specify any rules. In v19.x, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version v18.x of your module definition, remove this security group from your node groups by setting create_security_group = false.
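    A minimal sketch while still on v18.x, assuming the setting is applied via eks_managed_node_group_defaults:
      eks_managed_node_group_defaults = {\n  create_security_group = false\n}\n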

    • If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in vpc_security_group_ids. Once this is in place, you can proceed with the original security group removal.
    • EKS managed node groups rollout changes using a rolling update strategy that can be influenced through update_config. No additional changes are required for removing the security group created by node groups (unlike self-managed node groups which should utilize the instance_refresh setting of Autoscaling groups).
    • Once create_security_group = false has been set, you can apply the changes which will:
    • Create a new launch template (version) without the EKS managed node group security group
    • Replace instances based on the update_config configuration settings
    • New instances will launch without the EKS managed node group security group, and prior instances will be terminated
    • Once the EKS managed node group has cycled, the security group will be deleted

    • Once the node group security group(s) have been removed, you can update your module definition to specify the v19.x version of the module

    • Run terraform init -upgrade=true to update your configuration and pull in the v19 changes
    • Using the documentation provided above, update your module definition to reflect the changes in the module from v18.x to v19.x. You can utilize terraform plan as you go to help highlight any changes that you wish to make. See below for terraform state mv ... commands related to the use of iam_role_additional_policies. If you are not providing any values to these variables, you can skip this section.
    • Once you are satisfied with the changes and the terraform plan output, you can apply the changes to sync your infrastructure with the updated module definition (or vice versa).
    "},{"location":"UPGRADE-19.0/#diff-of-before-v18x-vs-after-v19x","title":"Diff of Before (v18.x) vs After (v19.x)","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 18.0\"\n+  version = \"~> 19.0\"\n\n  cluster_name                    = local.name\n+ cluster_endpoint_public_access  = true\n- cluster_endpoint_private_access = true # now the default\n\n  cluster_addons = {\n-   resolve_conflicts = \"OVERWRITE\" # now the default\n+   preserve          = true\n+   most_recent       = true\n\n+   timeouts = {\n+     create = \"25m\"\n+     delete = \"10m\"\n    }\n    kube-proxy = {}\n    vpc-cni = {\n-     resolve_conflicts = \"OVERWRITE\" # now the default\n    }\n  }\n\n  # Encryption key\n  create_kms_key = true\n- cluster_encryption_config = [{\n-   resources = [\"secrets\"]\n- }]\n+ cluster_encryption_config = {\n+   resources = [\"secrets\"]\n+ }\n  kms_key_deletion_window_in_days = 7\n  enable_kms_key_rotation         = true\n\n- iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+ iam_role_additional_policies = {\n+   additional = aws_iam_policy.additional.arn\n+ }\n\n  vpc_id                   = module.vpc.vpc_id\n  subnet_ids               = module.vpc.private_subnets\n  control_plane_subnet_ids = module.vpc.intra_subnets\n\n  # Extend node-to-node security group rules\n- node_security_group_ntp_ipv4_cidr_block = [\"169.254.169.123/32\"] # now the default\n  node_security_group_additional_rules = {\n-    ingress_self_ephemeral = {\n-      description = \"Node to node ephemeral ports\"\n-      protocol    = \"tcp\"\n-      from_port   = 0\n-      to_port     = 0\n-      type        = \"ingress\"\n-      self        = true\n-    }\n-    egress_all = {\n-      description      = \"Node all egress\"\n-      protocol         = \"-1\"\n-      from_port        = 0\n-      to_port          = 0\n-      type             = \"egress\"\n-      cidr_blocks      = [\"0.0.0.0/0\"]\n-      ipv6_cidr_blocks = [\"::/0\"]\n-    }\n  }\n\n  # Self-Managed Node Group(s)\n  self_managed_node_group_defaults = {\n    vpc_security_group_ids = [aws_security_group.additional.id]\n-   iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  self_managed_node_groups = {\n    spot = {\n      instance_type = \"m5.large\"\n      instance_market_options = {\n        market_type = \"spot\"\n      }\n\n      pre_bootstrap_user_data = <<-EOT\n        echo \"foo\"\n        export FOO=bar\n      EOT\n\n      bootstrap_extra_args = \"--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'\"\n\n      post_bootstrap_user_data = <<-EOT\n        cd /tmp\n        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm\n        sudo systemctl enable amazon-ssm-agent\n        sudo systemctl start amazon-ssm-agent\n      EOT\n\n-     create_security_group          = true\n-     security_group_name            = \"eks-managed-node-group-complete-example\"\n-     security_group_use_name_prefix = false\n-     security_group_description     = \"EKS managed node group complete example security group\"\n-     security_group_rules = {}\n-     security_group_tags = {}\n    }\n  }\n\n  # EKS Managed Node Group(s)\n  eks_managed_node_group_defaults = {\n    ami_type       = \"AL2_x86_64\"\n    instance_types = [\"m6i.large\", \"m5.large\", \"m5n.large\", \"m5zn.large\"]\n\n    attach_cluster_primary_security_group = true\n    vpc_security_group_ids                = [aws_security_group.additional.id]\n-   
iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  eks_managed_node_groups = {\n    blue = {}\n    green = {\n      min_size     = 1\n      max_size     = 10\n      desired_size = 1\n\n      instance_types = [\"t3.large\"]\n      capacity_type  = \"SPOT\"\n      labels = {\n        Environment = \"test\"\n        GithubRepo  = \"terraform-aws-eks\"\n        GithubOrg   = \"terraform-aws-modules\"\n      }\n\n      taints = {\n        dedicated = {\n          key    = \"dedicated\"\n          value  = \"gpuGroup\"\n          effect = \"NO_SCHEDULE\"\n        }\n      }\n\n      update_config = {\n        max_unavailable_percentage = 33 # or set `max_unavailable`\n      }\n\n-     create_security_group          = true\n-     security_group_name            = \"eks-managed-node-group-complete-example\"\n-     security_group_use_name_prefix = false\n-     security_group_description     = \"EKS managed node group complete example security group\"\n-     security_group_rules = {}\n-     security_group_tags = {}\n\n      tags = {\n        ExtraTag = \"example\"\n      }\n    }\n  }\n\n  # Fargate Profile(s)\n  fargate_profile_defaults = {\n-   iam_role_additional_policies = [aws_iam_policy.additional.arn]\n+   iam_role_additional_policies = {\n+     additional = aws_iam_policy.additional.arn\n+   }\n  }\n\n  fargate_profiles = {\n    default = {\n      name = \"default\"\n      selectors = [\n        {\n          namespace = \"kube-system\"\n          labels = {\n            k8s-app = \"kube-dns\"\n          }\n        },\n        {\n          namespace = \"default\"\n        }\n      ]\n\n      tags = {\n        Owner = \"test\"\n      }\n\n      timeouts = {\n        create = \"20m\"\n        delete = \"20m\"\n      }\n    }\n  }\n\n  # OIDC Identity provider\n  cluster_identity_providers = {\n    cognito = {\n      client_id      = \"702vqsrjicklgb7c5b7b50i1gc\"\n      issuer_url     = \"https://cognito-idp.us-west-2.amazonaws.com/us-west-2_re1u6bpRA\"\n      username_claim = \"email\"\n      groups_claim   = \"cognito:groups\"\n      groups_prefix  = \"gid:\"\n    }\n  }\n\n  # aws-auth configmap\n  manage_aws_auth_configmap = true\n\n  aws_auth_node_iam_role_arns_non_windows = [\n    module.eks_managed_node_group.iam_role_arn,\n    module.self_managed_node_group.iam_role_arn,\n  ]\n  aws_auth_fargate_profile_pod_execution_role_arns = [\n    module.fargate_profile.fargate_profile_pod_execution_role_arn\n  ]\n\n  aws_auth_roles = [\n    {\n      rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n      username = \"role1\"\n      groups   = [\"system:masters\"]\n    },\n  ]\n\n  aws_auth_users = [\n    {\n      userarn  = \"arn:aws:iam::66666666666:user/user1\"\n      username = \"user1\"\n      groups   = [\"system:masters\"]\n    },\n    {\n      userarn  = \"arn:aws:iam::66666666666:user/user2\"\n      username = \"user2\"\n      groups   = [\"system:masters\"]\n    },\n  ]\n\n  aws_auth_accounts = [\n    \"777777777777\",\n    \"888888888888\",\n  ]\n\n  tags = local.tags\n}\n
    "},{"location":"UPGRADE-19.0/#terraform-state-moves","title":"Terraform State Moves","text":"

    The following Terraform state move commands are optional but recommended if you are providing additional IAM policies that are to be attached to IAM roles created by this module (cluster IAM role, node group IAM role, Fargate profile IAM role). Because the resources affected are aws_iam_role_policy_attachment, in theory, you could get away with simply applying the configuration and letting Terraform detach and re-attach the policies. However, during this brief period of update, you could experience permission failures as the policy is detached and re-attached, and therefore the state move route is recommended.

    Where \"<POLICY_ARN>\" is specified, this should be replaced with the full ARN of the policy, and \"<POLICY_MAP_KEY>\" should be replaced with the key used in the iam_role_additional_policies map for the associated policy. For example, if you have the following v19.x configuration:

      ...\n  # This is demonstrating the cluster IAM role additional policies\n  iam_role_additional_policies = {\n    additional = aws_iam_policy.additional.arn\n  }\n  ...\n

    The associated state move command would look similar to (albeit with your correct policy ARN):

    terraform state mv 'module.eks.aws_iam_role_policy_attachment.this[\"arn:aws:iam::111111111111:policy/ex-complete-additional\"]' 'module.eks.aws_iam_role_policy_attachment.additional[\"additional\"]'\n

    If you are not providing any additional IAM policies, no actions are required.

    "},{"location":"UPGRADE-19.0/#cluster-iam-role","title":"Cluster IAM Role","text":"

    Repeat for each policy provided in iam_role_additional_policies:

    terraform state mv 'module.eks.aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#eks-managed-node-group-iam-role","title":"EKS Managed Node Group IAM Role","text":"

    Where \"<NODE_GROUP_KEY>\" is the key used in the eks_managed_node_groups map for the associated node group. Repeat for each policy provided in iam_role_additional_policies in either/or eks_managed_node_group_defaults or the individual node group definitions:

    terraform state mv 'module.eks.module.eks_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.eks_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#self-managed-node-group-iam-role","title":"Self-Managed Node Group IAM Role","text":"

    Where \"<NODE_GROUP_KEY>\" is the key used in the self_managed_node_groups map for the associated node group. Repeat for each policy provided in iam_role_additional_policies in either/or self_managed_node_group_defaults or the individual node group definitions:

    terraform state mv 'module.eks.module.self_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.self_managed_node_group[\"<NODE_GROUP_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-19.0/#fargate-profile-iam-role","title":"Fargate Profile IAM Role","text":"

    Where \"<FARGATE_PROFILE_KEY>\" is the key used in the fargate_profiles map for the associated profile. Repeat for each policy provided in iam_role_additional_policies in either/or fargate_profile_defaults or the individual profile definitions:

    terraform state mv 'module.eks.module.fargate_profile[\"<FARGATE_PROFILE_KEY>\"].aws_iam_role_policy_attachment.this[\"<POLICY_ARN>\"]' 'module.eks.module.fargate_profile[\"<FARGATE_PROFILE_KEY>\"].aws_iam_role_policy_attachment.additional[\"<POLICY_MAP_KEY>\"]'\n
    "},{"location":"UPGRADE-20.0/","title":"Upgrade from v19.x to v20.x","text":"

    Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

    "},{"location":"UPGRADE-20.0/#list-of-backwards-incompatible-changes","title":"List of backwards incompatible changes","text":"
    • Minimum supported AWS provider version increased to v5.34
    • Minimum supported Terraform version increased to v1.3 to support Terraform state moved blocks as well as other advanced features
    • The resolve_conflicts argument within the cluster_addons configuration has been replaced with resolve_conflicts_on_create and resolve_conflicts_on_update now that resolve_conflicts is deprecated (see the example following this list)
    • The default/fallback value for the preserve argument of cluster_addons is now set to true. This has proven useful when deprovisioning clusters, avoiding the situation where the CNI is deleted too early and leaves orphaned resources that result in conflicts.
    • The Karpenter sub-module's use of the irsa naming convention has been removed, along with an update to the Karpenter controller IAM policy to align with Karpenter's v1beta1/v0.32 changes. Instead of referring to the role as irsa or pod_identity, it is simply an IAM role used by the Karpenter controller, with support for either IRSA and/or Pod Identity (default) at this time
    • The aws-auth ConfigMap resources have been moved to a standalone sub-module. This removes the Kubernetes provider requirement from the main module and allows for the aws-auth ConfigMap to be managed independently of the main module. This sub-module will be removed entirely in the next major release.
    • Support for cluster access management has been added with the default authentication mode set as API_AND_CONFIG_MAP. CONFIG_MAP alone is no longer supported; you will need to use at least API_AND_CONFIG_MAP
    • Karpenter EventBridge rule key spot_interrupt updated to correct a misspelling (was spot_interupt). This will cause the rule to be replaced
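
    As a minimal sketch of the resolve_conflicts replacement mentioned above (the add-on name and values shown are illustrative, not prescriptive):

      cluster_addons = {\n    coredns = {\n      # Replaces the deprecated `resolve_conflicts` argument\n      resolve_conflicts_on_create = \"OVERWRITE\"\n      resolve_conflicts_on_update = \"PRESERVE\"\n    }\n  }\n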
    "},{"location":"UPGRADE-20.0/#upcoming-changes-planned-in-v210","title":"\u26a0\ufe0f Upcoming Changes Planned in v21.0 \u26a0\ufe0f","text":"

    To give users advance notice and provide some future direction for this module, the following are the changes we will be looking to make in the next major release:

    1. The aws-auth sub-module will be removed entirely from the project. Since this sub-module is captured in the v20.x releases, users can continue using it even after the module moves forward with the next major version. The long-term strategy and direction is cluster access entries, relying only on the AWS Terraform provider.
    2. The default value for authentication_mode will change to API. Aligning with point 1 above, this is a one-way change, but users are free to specify the value of their choosing in place of this default (when the change is made). This module will proceed with an EKS API first strategy.
    3. The launch template and autoscaling group usage contained within the EKS managed node group and self-managed node group sub-modules might be replaced with the terraform-aws-autoscaling module. At a minimum, it makes sense to replace most of the functionality in the self-managed node group module with this external module, but it is not yet clear whether there is any benefit to using it in the EKS managed node group sub-module. The interface that users interact with will stay the same; the changes will be internal to the implementation, and we will do everything we can to keep disruption to a minimum.
    4. The platform variable will be replaced and instead ami_type will become the standard across both self-managed node group(s) and EKS managed node group(s). As EKS expands its portfolio of supported operating systems, the ami_type is better suited to associate the correct user data format to the respective OS. The platform variable is a legacy artifact of self-managed node groups but not as descriptive as the ami_type, and therefore it will be removed in favor of ami_type.
    "},{"location":"UPGRADE-20.0/#additional-changes","title":"Additional changes","text":""},{"location":"UPGRADE-20.0/#added","title":"Added","text":"
    • A module tag has been added to the cluster control plane
    • Support for cluster access entries. The bootstrap_cluster_creator_admin_permissions setting on the control plane has been hardcoded to false since, per the EKS API, this is a one-time operation that can only happen at cluster creation. Instead, users can enable/disable enable_cluster_creator_admin_permissions at any time to achieve the same functionality. This takes the identity that Terraform is using to make API calls and maps it into a cluster admin via an access entry (see the example following this list). For users on existing clusters, you will need to remove the default cluster administrator that was created by EKS prior to the cluster access entry APIs - see the section Removing the default cluster administrator for more details.
    • Support for specifying the CloudWatch log group class (standard or infrequent access)
    • Native support for Windows based managed node groups similar to AL2 and Bottlerocket
    • Self-managed node groups now support instance_maintenance_policy and have added max_healthy_percentage, scale_in_protected_instances, and standby_instances arguments to the instance_refresh.preferences block
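
    For reference, a minimal sketch of the cluster access entry support described above; the role ARN, map key, and chosen access policy are illustrative only:

      # Maps the identity Terraform is using into a cluster admin access entry\n  enable_cluster_creator_admin_permissions = true\n\n  access_entries = {\n    # Hypothetical entry granting cluster admin to an existing IAM role\n    example = {\n      principal_arn = \"arn:aws:iam::111111111111:role/example-admin\"\n\n      policy_associations = {\n        admin = {\n          policy_arn = \"arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy\"\n          access_scope = {\n            type = \"cluster\"\n          }\n        }\n      }\n    }\n  }\n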
    "},{"location":"UPGRADE-20.0/#modified","title":"Modified","text":"
    • For sts:AssumeRole permissions by services, the use of dynamically looking up the DNS suffix has been replaced with the static value of amazonaws.com. This does not appear to change by partition and instead requires users to set this manually for non-commercial regions.
    • The default value for kms_key_enable_default_policy has changed from false to true to align with the default behavior of the aws_kms_key resource
    • The Karpenter create_instance_profile default value has changed from true to false to align with the changes in Karpenter v0.32; starting with v0.32.0, Karpenter accepts an IAM role and creates the EC2 instance profile used by the nodes itself
    "},{"location":"UPGRADE-20.0/#removed","title":"Removed","text":"
    • The complete example has been removed due to its redundancy with the other examples
    • References to the IRSA sub-module in the IAM repository have been removed. Once https://github.com/clowdhaus/terraform-aws-eks-pod-identity has been updated and moved into the organization, the documentation here will be updated to mention the new module.
    "},{"location":"UPGRADE-20.0/#variable-and-output-changes","title":"Variable and output changes","text":"
    1. Removed variables:

      • cluster_iam_role_dns_suffix - replaced with a static string of amazonaws.com
      • manage_aws_auth_configmap
      • create_aws_auth_configmap
      • aws_auth_node_iam_role_arns_non_windows
      • aws_auth_node_iam_role_arns_windows
      • aws_auth_fargate_profile_pod_execution_role_arn
      • aws_auth_roles
      • aws_auth_users
      • aws_auth_accounts
      • Karpenter
        • irsa_tag_key
        • irsa_tag_values
        • irsa_subnet_account_id
        • enable_karpenter_instance_profile_creation

    2. Renamed variables:

      • Karpenter
        • create_irsa -> create_iam_role
        • irsa_name -> iam_role_name
        • irsa_use_name_prefix -> iam_role_name_prefix
        • irsa_path -> iam_role_path
        • irsa_description -> iam_role_description
        • irsa_max_session_duration -> iam_role_max_session_duration
        • irsa_permissions_boundary_arn -> iam_role_permissions_boundary_arn
        • irsa_tags -> iam_role_tags
        • policies -> iam_role_policies
        • irsa_policy_name -> iam_policy_name
        • irsa_ssm_parameter_arns -> ami_id_ssm_parameter_arns
        • create_iam_role -> create_node_iam_role
        • iam_role_additional_policies -> node_iam_role_additional_policies
        • policies -> iam_role_policies
        • iam_role_arn -> node_iam_role_arn
        • iam_role_name -> node_iam_role_name
        • iam_role_name_prefix -> node_iam_role_name_prefix
        • iam_role_path -> node_iam_role_path
        • iam_role_description -> node_iam_role_description
        • iam_role_max_session_duration -> node_iam_role_max_session_duration
        • iam_role_permissions_boundary_arn -> node_iam_role_permissions_boundary_arn
        • iam_role_attach_cni_policy -> node_iam_role_attach_cni_policy
        • iam_role_additional_policies -> node_iam_role_additional_policies
        • iam_role_tags -> node_iam_role_tags

    3. Added variables:

      • create_access_entry
      • enable_cluster_creator_admin_permissions
      • authentication_mode
      • access_entries
      • cloudwatch_log_group_class
      • Karpenter
        • iam_policy_name
        • iam_policy_use_name_prefix
        • iam_policy_description
        • iam_policy_path
        • enable_irsa
        • create_access_entry
        • access_entry_type
      • Self-managed node group
        • instance_maintenance_policy
        • create_access_entry
        • iam_role_arn

    4. Removed outputs:

      • aws_auth_configmap_yaml

    5. Renamed outputs:

      • Karpenter
        • irsa_name -> iam_role_name
        • irsa_arn -> iam_role_arn
        • irsa_unique_id -> iam_role_unique_id
        • role_name -> node_iam_role_name
        • role_arn -> node_iam_role_arn
        • role_unique_id -> node_iam_role_unique_id

    6. Added outputs:

      • access_entries
      • Karpenter
        • node_access_entry_arn
      • Self-managed node group
        • access_entry_arn
    "},{"location":"UPGRADE-20.0/#upgrade-migrations","title":"Upgrade Migrations","text":""},{"location":"UPGRADE-20.0/#diff-of-before-v1921-vs-after-v200","title":"Diff of Before (v19.21) vs After (v20.0)","text":"
     module \"eks\" {\n   source  = \"terraform-aws-modules/eks/aws\"\n-  version = \"~> 19.21\"\n+  version = \"~> 20.0\"\n\n# If you want to maintain the current default behavior of v19.x\n+  kms_key_enable_default_policy = false\n\n-   manage_aws_auth_configmap = true\n\n-   aws_auth_roles = [\n-     {\n-       rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n-       username = \"role1\"\n-       groups   = [\"custom-role-group\"]\n-     },\n-   ]\n\n-   aws_auth_users = [\n-     {\n-       userarn  = \"arn:aws:iam::66666666666:user/user1\"\n-       username = \"user1\"\n-       groups   = [\"custom-users-group\"]\n-     },\n-   ]\n}\n\n+ module \"eks_aws_auth\" {\n+   source  = \"terraform-aws-modules/eks/aws//modules/aws-auth\"\n+   version = \"~> 20.0\"\n\n+   manage_aws_auth_configmap = true\n\n+   aws_auth_roles = [\n+     {\n+       rolearn  = \"arn:aws:iam::66666666666:role/role1\"\n+       username = \"role1\"\n+       groups   = [\"custom-role-group\"]\n+     },\n+   ]\n\n+   aws_auth_users = [\n+     {\n+       userarn  = \"arn:aws:iam::66666666666:user/user1\"\n+       username = \"user1\"\n+       groups   = [\"custom-users-group\"]\n+     },\n+   ]\n+ }\n
    "},{"location":"UPGRADE-20.0/#karpenter-diff-of-before-v1921-vs-after-v200","title":"Karpenter Diff of Before (v19.21) vs After (v20.0)","text":"
     module \"eks_karpenter\" {\n   source  = \"terraform-aws-modules/eks/aws//modules/karpenter\"\n-  version = \"~> 19.21\"\n+  version = \"~> 20.0\"\n\n# If you wish to maintain the current default behavior of v19.x\n+  enable_irsa             = true\n+  create_instance_profile = true\n\n# To avoid any resource re-creation\n+  iam_role_name          = \"KarpenterIRSA-${module.eks.cluster_name}\"\n+  iam_role_description   = \"Karpenter IAM role for service account\"\n+  iam_policy_name        = \"KarpenterIRSA-${module.eks.cluster_name}\"\n+  iam_policy_description = \"Karpenter IAM role for service account\"\n}\n
    "},{"location":"UPGRADE-20.0/#terraform-state-moves","title":"Terraform State Moves","text":""},{"location":"UPGRADE-20.0/#authentication-mode-changes","title":"\u26a0\ufe0f Authentication Mode Changes \u26a0\ufe0f","text":"

    Changing the authentication_mode is a one-way decision. See announcement blog for further details:

    Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP. You can then switch from API_AND_CONFIG_MAP to API. You cannot revert these operations in the opposite direction. Meaning you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API.

    [!IMPORTANT] If migrating to cluster access entries and you will NOT have any entries that remain in the aws-auth ConfigMap, you do not need to remove the configmap from the statefile. You can simply follow the migration guide and once access entries have been created, you can let Terraform remove/delete the aws-auth ConfigMap.

    If you WILL have entries that remain in the aws-auth ConfigMap, then you will need to remove the ConfigMap resources from the statefile to avoid any disruptions. When you add the new aws-auth sub-module and apply the changes, the sub-module will upsert the ConfigMap on the cluster. Provided the necessary entries are defined in that sub-module's definition, it will \"re-adopt\" the ConfigMap under Terraform's control.

    "},{"location":"UPGRADE-20.0/#authentication_mode-api_and_config_map","title":"authentication_mode = \"API_AND_CONFIG_MAP\"","text":"

    When using authentication_mode = \"API_AND_CONFIG_MAP\" and there are entries that will remain in the configmap (entries that cannot be replaced by cluster access entry), you will first need to update the authentication_mode on the cluster to \"API_AND_CONFIG_MAP\". To help make this upgrade process easier, a copy of the changes defined in the v20.0.0 PR have been captured here but with the aws-auth components still provided in the module. This means you get the equivalent of the v20.0.0 module, but it still includes support for the aws-auth configmap. You can follow the provided README on that interim migration module for the order of execution and return here once the authentication_mode has been updated to \"API_AND_CONFIG_MAP\". Note - EKS automatically adds access entries for the roles used by EKS managed node groups and Fargate profiles; users do not need to do anything additional for these roles.

    Once the authentication_mode has been updated, next you will need to remove the configmap from the statefile to avoid any disruptions:

    [!NOTE] This is only required if there are entries that will remain in the aws-auth ConfigMap after migrating. Otherwise, you can skip this step and let Terraform destroy the ConfigMap.

    terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'\nterraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' # include if Terraform created the original configmap\n
    "},{"location":"UPGRADE-20.0/#i-terraform-17-users","title":"\u2139\ufe0f Terraform 1.7+ users","text":"

    If you are using Terraform v1.7+, you can utilize the removed block to facilitate removal of the configmap through code. You can create a fork/clone of the provided migration module, add the removed blocks, and apply those changes before proceeding. We do not want to force users onto the bleeding edge with this module, so we have not included removed block support at this time.
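
    For illustration, a minimal sketch of such removed blocks, assuming Terraform created the original configmap and that you want it forgotten from state rather than destroyed:

      # Terraform v1.7+ only - forgets the resources instead of destroying them\nremoved {\n  from = module.eks.kubernetes_config_map_v1_data.aws_auth\n\n  lifecycle {\n    destroy = false\n  }\n}\n\nremoved {\n  from = module.eks.kubernetes_config_map.aws_auth\n\n  lifecycle {\n    destroy = false\n  }\n}\n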

    Once the configmap has been removed from the statefile, you can add the new aws-auth sub-module and copy the relevant definitions from the EKS module over to the new aws-auth sub-module definition (see the before/after diff above). When you apply the changes with the new sub-module, the configmap in the cluster will get updated with the contents provided in the sub-module definition, so please be sure all of the necessary entries are added before applying the changes. In the example above, the configmap would drop any entries for roles used by node groups and/or Fargate Profiles, but maintain the custom entries for users and roles passed into the module definition.

    "},{"location":"UPGRADE-20.0/#authentication_mode-api","title":"authentication_mode = \"API\"","text":"

    In order to switch to API only using cluster access entries, you first need to update the authentication_mode on the cluster to API_AND_CONFIG_MAP without modifying the aws-auth configmap. To help make this upgrade process easier, a copy of the changes defined in the v20.0.0 PR has been captured here but with the aws-auth components still provided in the module. This means you get the equivalent of the v20.0.0 module, but it still includes support for the aws-auth configmap. You can follow the provided README on that interim migration module for the order of execution and return here once the authentication_mode has been updated to \"API_AND_CONFIG_MAP\". Note - EKS automatically adds access entries for the roles used by EKS managed node groups and Fargate profiles; users do not need to do anything additional for these roles.

    Once the authentication_mode has been updated, you can update the authentication_mode on the cluster to API and remove the aws-auth configmap components.
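
    As a rough sketch (not a complete definition), the module could end up looking like the following once all aws-auth entries have been migrated to access entries:

      module \"eks\" {\n  source  = \"terraform-aws-modules/eks/aws\"\n  version = \"~> 20.0\"\n\n  authentication_mode = \"API\"\n\n  # No aws-auth configmap arguments remain; access is managed entirely\n  # through access entries, e.g. `access_entries` and/or\n  # `enable_cluster_creator_admin_permissions`\n  # ...\n}\n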

    "},{"location":"compute_resources/","title":"Compute Resources","text":""},{"location":"compute_resources/#table-of-contents","title":"Table of Contents","text":"
    • EKS Managed Node Groups
    • Self Managed Node Groups
    • Fargate Profiles
    • Default Configurations

    \u2139\ufe0f Only the pertinent attributes are shown below for brevity

    "},{"location":"compute_resources/#eks-managed-node-groups","title":"EKS Managed Node Groups","text":"

    Refer to the EKS Managed Node Group documentation for service related details.

    1. The module creates a custom launch template by default to ensure settings such as tags are propagated to instances. Please note that many of the customization options listed here are only available when a custom launch template is created. To use the default template provided by the AWS EKS managed node group service, disable the launch template creation by setting use_custom_launch_template to false:
      eks_managed_node_groups = {\n    default = {\n      use_custom_launch_template = false\n    }\n  }\n
    2. Native support for Bottlerocket OS is provided by specifying the respective AMI type:
      eks_managed_node_groups = {\n    bottlerocket_default = {\n      use_custom_launch_template = false\n\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n
    3. Bottlerocket OS is supported in a similar manner. However, note that the user data for Bottlerocket OS uses the TOML format:
      eks_managed_node_groups = {\n    bottlerocket_prepend_userdata = {\n      ami_type = \"BOTTLEROCKET_x86_64\"\n\n      bootstrap_extra_args = <<-EOT\n        # extra args added\n        [settings.kernel]\n        lockdown = \"integrity\"\n      EOT\n    }\n  }\n
    4. When using a custom AMI, the AWS EKS Managed Node Group service will NOT inject the necessary bootstrap script into the supplied user data. Users can elect to provide their own user data to bootstrap and connect or opt in to use the module provided user data:
      eks_managed_node_groups = {\n    custom_ami = {\n      ami_id = \"ami-0caf35bc73450c396\"\n\n      # By default, EKS managed node groups will not append bootstrap script;\n      # this adds it back in using the default template provided by the module\n      # Note: this assumes the AMI provided is an EKS optimized AMI derivative\n      enable_bootstrap_user_data = true\n\n      pre_bootstrap_user_data = <<-EOT\n        export FOO=bar\n      EOT\n\n      # Because we have full control over the user data supplied, we can also run additional\n      # scripts/configuration changes after the bootstrap script has been run\n      post_bootstrap_user_data = <<-EOT\n        echo \"you are free little kubelet!\"\n      EOT\n    }\n  }\n
    5. There is similar support for Bottlerocket OS:
      eks_managed_node_groups = {\n    bottlerocket_custom_ami = {\n      ami_id   = \"ami-0ff61e0bcfc81dc94\"\n      ami_type = \"BOTTLEROCKET_x86_64\"\n\n      # use module user data template to bootstrap\n      enable_bootstrap_user_data = true\n      # this will get added to the template\n      bootstrap_extra_args = <<-EOT\n        # extra args added\n        [settings.kernel]\n        lockdown = \"integrity\"\n\n        [settings.kubernetes.node-labels]\n        \"label1\" = \"foo\"\n        \"label2\" = \"bar\"\n\n        [settings.kubernetes.node-taints]\n        \"dedicated\" = \"experimental:PreferNoSchedule\"\n        \"special\" = \"true:NoSchedule\"\n      EOT\n    }\n  }\n

    See the examples/eks-managed-node-group/ example for a working example of various configurations.

    "},{"location":"compute_resources/#self-managed-node-groups","title":"Self Managed Node Groups","text":"

    Refer to the Self Managed Node Group documentation for service related details.

    1. The self-managed node group uses the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version by default:
      cluster_version = \"1.31\"\n\n  # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.27\n  self_managed_node_groups = {\n    default = {}\n  }\n
    2. To use Bottlerocket, specify the ami_type as one of the respective \"BOTTLEROCKET_*\" types and supply a Bottlerocket OS AMI:
      cluster_version = \"1.31\"\n\n  self_managed_node_groups = {\n    bottlerocket = {\n      ami_id   = data.aws_ami.bottlerocket_ami.id\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n

    See the examples/self-managed-node-group/ example for a working example of various configurations.

    "},{"location":"compute_resources/#fargate-profiles","title":"Fargate Profiles","text":"

    Fargate profiles are straightforward to use and therefore no further details are provided here. See the tests/fargate-profile/ tests for a working example of various configurations.

    "},{"location":"compute_resources/#default-configurations","title":"Default Configurations","text":"

    Each type of compute resource (EKS managed node group, self managed node group, or Fargate profile) provides the option for users to specify a default configuration. These default configurations can be overridden from within the compute resource's individual definition. The order of precedence for configurations (from highest to lowest precedence):

    • Compute resource individual configuration
    • Compute resource family default configuration (eks_managed_node_group_defaults, self_managed_node_group_defaults, fargate_profile_defaults)
    • Module default configuration (see variables.tf and node_groups.tf)

    For example, the following creates 4 AWS EKS Managed Node Groups:

      eks_managed_node_group_defaults = {\n    ami_type               = \"AL2_x86_64\"\n    disk_size              = 50\n    instance_types         = [\"m6i.large\", \"m5.large\", \"m5n.large\", \"m5zn.large\"]\n  }\n\n  eks_managed_node_groups = {\n    # Uses module default configurations overridden by configuration above\n    default = {}\n\n    # This further overrides the instance types used\n    compute = {\n      instance_types = [\"c5.large\", \"c6i.large\", \"c6d.large\"]\n    }\n\n    # This further overrides the instance types and disk size used\n    persistent = {\n      disk_size = 1024\n      instance_types = [\"r5.xlarge\", \"r6i.xlarge\", \"r5b.xlarge\"]\n    }\n\n    # This overrides the OS used\n    bottlerocket = {\n      ami_type = \"BOTTLEROCKET_x86_64\"\n    }\n  }\n
    "},{"location":"faq/","title":"Frequently Asked Questions","text":"
    • Setting disk_size or remote_access does not make any changes
    • I received an error: expect exactly one securityGroup tagged with kubernetes.io/cluster/<NAME> ...
    • Why are nodes not being registered?
    • Why are there no changes when a node group's desired_size is modified?
    • How do I access compute resource attributes?
    • What add-ons are available?
    • What configuration values are available for an add-on?
    "},{"location":"faq/#setting-disk_size-or-remote_access-does-not-make-any-changes","title":"Setting disk_size or remote_access does not make any changes","text":"

    disk_size and remote_access can only be set when using the EKS managed node group default launch template. This module defaults to providing a custom launch template to allow for custom security groups, tag propagation, etc. If you wish to forgo the custom launch template route, you can set use_custom_launch_template = false and then set disk_size and remote_access, as shown in the example below.
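
    A minimal sketch (the key pair name and security group reference are hypothetical placeholders):

      eks_managed_node_groups = {\n    default = {\n      # Use the EKS-provided default launch template so `disk_size`\n      # and `remote_access` take effect\n      use_custom_launch_template = false\n\n      disk_size = 50\n\n      remote_access = {\n        ec2_ssh_key               = \"my-key-pair\"\n        source_security_group_ids = [aws_security_group.remote_access.id]\n      }\n    }\n  }\n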

    "},{"location":"faq/#i-received-an-error-expect-exactly-one-securitygroup-tagged-with-kubernetesioclustername","title":"I received an error: expect exactly one securityGroup tagged with kubernetes.io/cluster/<NAME> ...","text":"

    By default, EKS creates a cluster primary security group that is created outside of the module and the EKS service adds the tag { \"kubernetes.io/cluster/<CLUSTER_NAME>\" = \"owned\" }. This on its own does not cause any conflicts for addons such as the AWS Load Balancer Controller until users decide to attach both the cluster primary security group and the shared node security group created by the module (by setting attach_cluster_primary_security_group = true). The issue is not with having multiple security groups in your account with this tag key:value combination, but having multiple security groups with this tag key:value combination attached to nodes in the same cluster. There are a few ways to resolve this depending on your use case/intentions:

    \u26a0\ufe0f <CLUSTER_NAME> below needs to be replaced with the name of your cluster

    1. If you want to use the cluster primary security group, you can disable the creation of the shared node security group with:
      create_node_security_group            = false # default is true\n  attach_cluster_primary_security_group = true # default is false\n
    2. By not attaching the cluster primary security group. The cluster primary security group has quite broad access; the module instead provides a security group with the minimum amount of access needed to launch an empty EKS cluster successfully, and users are encouraged to open up access when necessary to support their workload.
      attach_cluster_primary_security_group = false # this is the default for the module\n

    In theory, if you are attaching the cluster primary security group, you shouldn't need to use the shared node security group created by the module. However, this is left up to users to decide for their requirements and use case.

    If you choose to use Custom Networking, make sure to only attach the security groups matching your choice above in your ENIConfig resources. This will ensure you avoid redundant tags.

    "},{"location":"faq/#why-are-nodes-not-being-registered","title":"Why are nodes not being registered?","text":"

    Nodes not being able to register with the EKS control plane is generally due to networking misconfigurations.

    1. At least one of the cluster endpoints (public or private) must be enabled.

      If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via setting cluster_endpoint_public_access_cidrs is recommended. More info regarding communication with an endpoint is available here.

    2. Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access:

      • Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules
      • Nodes in public subnets: ensure that nodes are launched with public IPs (enable through either the module here or your subnet setting defaults)

      Important: If you apply only the public endpoint and configure the cluster_endpoint_public_access_cidrs to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, then your nodes will fail to work correctly.

    3. The private endpoint can also be enabled by setting cluster_endpoint_private_access = true. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled.

    4. Nodes need to be able to connect to other AWS services to function (download container images, make API calls to assume roles, etc.). If for some reason you cannot enable public internet access for nodes, you can add VPC endpoints to the relevant services: EC2 API, ECR API, ECR DKR, and S3.

    "},{"location":"faq/#why-are-there-no-changes-when-a-node-groups-desired_size-is-modified","title":"Why are there no changes when a node group's desired_size is modified?","text":"

    The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the lifecycle block. The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly and without interference by Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.

    "},{"location":"faq/#how-do-i-access-compute-resource-attributes","title":"How do I access compute resource attributes?","text":"

    Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named eks as in module \"eks\" { ... }:

    • EKS Managed Node Group attributes
    eks_managed_role_arns = [for group in module.eks_managed_node_group : group.iam_role_arn]\n
    • Self Managed Node Group attributes
    self_managed_role_arns = [for group in module.self_managed_node_group : group.iam_role_arn]\n
    • Fargate Profile attributes
    fargate_profile_pod_execution_role_arns = [for group in module.fargate_profile : group.fargate_profile_pod_execution_role_arn]\n
    "},{"location":"faq/#what-add-ons-are-available","title":"What add-ons are available?","text":"

    The available EKS add-ons can be found here. You can also retrieve the available addons from the API using:

    aws eks describe-addon-versions --query 'addons[*].addonName'\n
    "},{"location":"faq/#what-configuration-values-are-available-for-an-add-on","title":"What configuration values are available for an add-on?","text":"

    You can retrieve the configuration value schema for a given addon using the following command:

    aws eks describe-addon-configuration --addon-name <value> --addon-version <value> --query 'configurationSchema' --output text | jq\n

    For example:

    aws eks describe-addon-configuration --addon-name coredns --addon-version v1.11.1-eksbuild.8 --query 'configurationSchema' --output text | jq\n

    Returns (at the time of writing):

    {\n  \"$ref\": \"#/definitions/Coredns\",\n  \"$schema\": \"http://json-schema.org/draft-06/schema#\",\n  \"definitions\": {\n    \"Coredns\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"affinity\": {\n          \"default\": {\n            \"affinity\": {\n              \"nodeAffinity\": {\n                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                  \"nodeSelectorTerms\": [\n                    {\n                      \"matchExpressions\": [\n                        {\n                          \"key\": \"kubernetes.io/os\",\n                          \"operator\": \"In\",\n                          \"values\": [\n                            \"linux\"\n                          ]\n                        },\n                        {\n                          \"key\": \"kubernetes.io/arch\",\n                          \"operator\": \"In\",\n                          \"values\": [\n                            \"amd64\",\n                            \"arm64\"\n                          ]\n                        }\n                      ]\n                    }\n                  ]\n                }\n              },\n              \"podAntiAffinity\": {\n                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                  {\n                    \"podAffinityTerm\": {\n                      \"labelSelector\": {\n                        \"matchExpressions\": [\n                          {\n                            \"key\": \"k8s-app\",\n                            \"operator\": \"In\",\n                            \"values\": [\n                              \"kube-dns\"\n                            ]\n                          }\n                        ]\n                      },\n                      \"topologyKey\": \"kubernetes.io/hostname\"\n                    },\n                    \"weight\": 100\n                  }\n                ]\n              }\n            }\n          },\n          \"description\": \"Affinity of the coredns pods\",\n          \"type\": [\n            \"object\",\n            \"null\"\n          ]\n        },\n        \"computeType\": {\n          \"type\": \"string\"\n        },\n        \"corefile\": {\n          \"description\": \"Entire corefile contents to use with installation\",\n          \"type\": \"string\"\n        },\n        \"nodeSelector\": {\n          \"additionalProperties\": {\n            \"type\": \"string\"\n          },\n          \"type\": \"object\"\n        },\n        \"podAnnotations\": {\n          \"properties\": {},\n          \"title\": \"The podAnnotations Schema\",\n          \"type\": \"object\"\n        },\n        \"podDisruptionBudget\": {\n          \"description\": \"podDisruptionBudget configurations\",\n          \"enabled\": {\n            \"default\": true,\n            \"description\": \"the option to enable managed PDB\",\n            \"type\": \"boolean\"\n          },\n          \"maxUnavailable\": {\n            \"anyOf\": [\n              {\n                \"pattern\": \".*%$\",\n                \"type\": \"string\"\n              },\n              {\n                \"type\": \"integer\"\n              }\n            ],\n            \"default\": 1,\n            \"description\": \"minAvailable value for managed PDB, can be either string or integer; if it's string, should end with %\"\n          },\n          \"minAvailable\": {\n            \"anyOf\": [\n              {\n                \"pattern\": \".*%$\",\n    
            \"type\": \"string\"\n              },\n              {\n                \"type\": \"integer\"\n              }\n            ],\n            \"description\": \"maxUnavailable value for managed PDB, can be either string or integer; if it's string, should end with %\"\n          },\n          \"type\": \"object\"\n        },\n        \"podLabels\": {\n          \"properties\": {},\n          \"title\": \"The podLabels Schema\",\n          \"type\": \"object\"\n        },\n        \"replicaCount\": {\n          \"type\": \"integer\"\n        },\n        \"resources\": {\n          \"$ref\": \"#/definitions/Resources\"\n        },\n        \"tolerations\": {\n          \"default\": [\n            {\n              \"key\": \"CriticalAddonsOnly\",\n              \"operator\": \"Exists\"\n            },\n            {\n              \"effect\": \"NoSchedule\",\n              \"key\": \"node-role.kubernetes.io/control-plane\"\n            }\n          ],\n          \"description\": \"Tolerations of the coredns pod\",\n          \"items\": {\n            \"type\": \"object\"\n          },\n          \"type\": \"array\"\n        },\n        \"topologySpreadConstraints\": {\n          \"description\": \"The coredns pod topology spread constraints\",\n          \"type\": \"array\"\n        }\n      },\n      \"title\": \"Coredns\",\n      \"type\": \"object\"\n    },\n    \"Limits\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"cpu\": {\n          \"type\": \"string\"\n        },\n        \"memory\": {\n          \"type\": \"string\"\n        }\n      },\n      \"title\": \"Limits\",\n      \"type\": \"object\"\n    },\n    \"Resources\": {\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"limits\": {\n          \"$ref\": \"#/definitions/Limits\"\n        },\n        \"requests\": {\n          \"$ref\": \"#/definitions/Limits\"\n        }\n      },\n      \"title\": \"Resources\",\n      \"type\": \"object\"\n    }\n  }\n}\n

    [!NOTE] The available configuration values will vary between add-on versions, typically more configuration values will be added in later versions as functionality is enabled by EKS.
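
    Those values can then be supplied to the module through the add-on's configuration_values argument. A minimal sketch, assuming the coredns schema shown above; the specific values are illustrative and should be validated against the schema for your add-on version:

      cluster_addons = {\n    coredns = {\n      configuration_values = jsonencode({\n        replicaCount = 4\n        resources = {\n          limits = {\n            cpu    = \"100m\"\n            memory = \"150Mi\"\n          }\n          requests = {\n            cpu    = \"100m\"\n            memory = \"150Mi\"\n          }\n        }\n      })\n    }\n  }\n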

    "},{"location":"local/","title":"Local Development","text":""},{"location":"local/#documentation-site","title":"Documentation Site","text":"

    To run the documentation site locally, you will need to have the following installed:

    • Python 3.x
    • mkdocs
    • The following pip packages for mkdocs (i.e. - pip install ...)
      • mkdocs-material
      • mkdocs-include-markdown-plugin
      • mkdocs-awesome-pages-plugin

    To run the documentation site locally, run the following command from the root of the repository:

    mkdocs serve\n

    Open the documentation at the link posted in the terminal output (i.e. http://127.0.0.1:8000/terraform-aws-eks/)

    "},{"location":"network_connectivity/","title":"Network Connectivity","text":""},{"location":"network_connectivity/#cluster-endpoint","title":"Cluster Endpoint","text":""},{"location":"network_connectivity/#public-endpoint-w-restricted-cidrs","title":"Public Endpoint w/ Restricted CIDRs","text":"

    When restricting the cluster's public endpoint to only the CIDRs specified by users, it is recommended that you also enable the private endpoint, or ensure that the CIDR blocks that you specify include the addresses that nodes and Fargate pods (if you use them) use to access the public endpoint.

    Please refer to the AWS documentation for further information
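
    A minimal sketch of this configuration (the CIDR shown is a placeholder):

      cluster_endpoint_public_access       = true\n  cluster_endpoint_public_access_cidrs = [\"203.0.113.0/24\"]\n\n  # Recommended so nodes and Fargate pods can reach the control plane privately\n  cluster_endpoint_private_access = true\n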

    "},{"location":"network_connectivity/#security-groups","title":"Security Groups","text":"
    • Cluster Security Group
    • This module by default creates a cluster security group (\"additional\" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This \"additional\" security group allows users to customize inbound and outbound rules via the module as they see fit
      • The default inbound/outbound rules provided by the module are derived from the AWS minimum recommendations in addition to NTP and HTTPS public internet egress rules (without, these show up in VPC flow logs as rejects - they are used for clock sync and downloading necessary packages/updates)
      • The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster)
      • Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired
      • The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error
    • Users also have the option to supply additional, externally created security groups to the cluster as well via the cluster_additional_security_group_ids variable
    • Lastly, users are able to opt in to attaching the primary security group automatically created by the EKS service by setting attach_cluster_primary_security_group = true from the root module for the respective node group (or set it within the node group defaults). This security group is not managed by the module; it is created by the EKS service. It permits all traffic within the domain of the security group as well as all egress traffic to the internet.

    • Node Group Security Group(s)

    • Users have the option to assign their own externally created security group(s) to the node group via the vpc_security_group_ids variable
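
    A minimal sketch of supplying externally created security groups (the security group references are hypothetical):

      # Additional, externally created security groups attached to the cluster\n  cluster_additional_security_group_ids = [aws_security_group.additional_cluster.id]\n\n  eks_managed_node_groups = {\n    default = {\n      # Additional, externally created security groups attached to the nodes\n      vpc_security_group_ids = [aws_security_group.additional_node.id]\n    }\n  }\n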

    See the example snippet below which adds additional security group rules to the cluster security group as well as the shared node security group (for node-to-node access). Users can use this extensibility to open up network access as they see fit using the security groups provided by the module:

      ...\n  # Extend cluster security group rules\n  cluster_security_group_additional_rules = {\n    egress_nodes_ephemeral_ports_tcp = {\n      description                = \"To node 1025-65535\"\n      protocol                   = \"tcp\"\n      from_port                  = 1025\n      to_port                    = 65535\n      type                       = \"egress\"\n      source_node_security_group = true\n    }\n  }\n\n  # Extend node-to-node security group rules\n  node_security_group_additional_rules = {\n    ingress_self_all = {\n      description = \"Node to node all ports/protocols\"\n      protocol    = \"-1\"\n      from_port   = 0\n      to_port     = 0\n      type        = \"ingress\"\n      self        = true\n    }\n    egress_all = {\n      description      = \"Node all egress\"\n      protocol         = \"-1\"\n      from_port        = 0\n      to_port          = 0\n      type             = \"egress\"\n      cidr_blocks      = [\"0.0.0.0/0\"]\n      ipv6_cidr_blocks = [\"::/0\"]\n    }\n  }\n  ...\n
    The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules:

    "},{"location":"user_data/","title":"User Data & Bootstrapping","text":"

    Users can see the various methods of using and providing user data through the user data tests, as well as more detailed information on the design and possible configurations via the user data module itself.

    "},{"location":"user_data/#summary","title":"Summary","text":"
    • AWS EKS Managed Node Groups
    • By default, any supplied user data is pre-pended to the user data supplied by the EKS Managed Node Group service
    • If users supply an ami_id, the service no longer supplies user data to bootstrap nodes; users can enable enable_bootstrap_user_data and use the module provided user data template, or provide their own user data template
    • For AMI types of BOTTLEROCKET_*, user data must be in TOML format
    • For AMI types of WINDOWS_*, user data must be in PowerShell/PS1 script format
    • Self Managed Node Groups
    • AL2_x86_64 AMI type (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
    • BOTTLEROCKET_* AMI types -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
    • WINDOWS_* AMI types -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template

    The templates provided by the module can be found under the templates directory

    "},{"location":"user_data/#eks-managed-node-group","title":"EKS Managed Node Group","text":"

    When using an EKS managed node group, users have 2 primary routes for interacting with the bootstrap user data:

    1. If a value for ami_id is not provided, users can supply additional user data that is pre-pended before the EKS Managed Node Group bootstrap user data. You can read more about this process from the AWS supplied documentation.

      • Users can use the following variables to facilitate this process:
        pre_bootstrap_user_data = \"...\"\n
    2. If a custom AMI is used, then per the AWS documentation, users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:

      • If the AMI used is a derivative of the AWS EKS Optimized AMI, users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
        • Users can use the following variables to facilitate this process:
          enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template\npre_bootstrap_user_data    = \"...\"\nbootstrap_extra_args       = \"...\"\npost_bootstrap_user_data   = \"...\"\n
      • If the AMI is NOT an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the templatefile() for the respective AMI type are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.
        • Users can use the following variables to facilitate this process:
          user_data_template_path  = \"./your/user_data.sh\" # user supplied bootstrap user data template\npre_bootstrap_user_data  = \"...\"\nbootstrap_extra_args     = \"...\"\npost_bootstrap_user_data = \"...\"\n
    \u2139\ufe0f When using bottlerocket, the supplied user data (TOML format) is merged in with the values supplied by EKS. Therefore, pre_bootstrap_user_data and post_bootstrap_user_data are not valid since the bottlerocket OS handles when various settings are applied. If you wish to supply additional configuration settings when using bottlerocket, supply them via the bootstrap_extra_args variable. For the AL2_* AMI types, bootstrap_extra_args are settings that will be supplied to the AWS EKS Optimized AMI bootstrap script such as kubelet extra args, etc. See the bottlerocket GitHub repository documentation for more details on what settings can be supplied via the bootstrap_extra_args variable."},{"location":"user_data/#self-managed-node-group","title":"Self Managed Node Group","text":"

    Self managed node groups require users to provide the necessary bootstrap user data. Users can elect to use the user data template provided by the module for their respective AMI type or provide their own user data template for rendering by the module.

    • If the AMI used is a derivative of the AWS EKS Optimized AMI, users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
    • Users can use the following variables to facilitate this process:
      enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template\npre_bootstrap_user_data    = \"...\"\nbootstrap_extra_args       = \"...\"\npost_bootstrap_user_data   = \"...\"\n
    • If the AMI is NOT an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the templatefile() for the respective AMI type are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.
      • Users can use the following variables to facilitate this process:
        user_data_template_path  = \"./your/user_data.sh\" # user supplied bootstrap user data template\npre_bootstrap_user_data  = \"...\"\nbootstrap_extra_args     = \"...\"\npost_bootstrap_user_data = \"...\"\n
    "},{"location":"user_data/#logic-diagram","title":"Logic Diagram","text":"

    The rough flow of logic that is encapsulated within the _user_data module can be represented by the following diagram to better highlight the various manners in which user data can be populated.

    "}]} \ No newline at end of file