---
lastupdated: "2018-10-25"
---

{:java: #java .ph data-hd-programlang='java'}
{:swift: #swift .ph data-hd-programlang='swift'}
{:ios: #ios data-hd-operatingsystem="ios"}
{:android: #android data-hd-operatingsystem="android"}
{:shortdesc: .shortdesc}
{:new_window: target="_blank"}
{:codeblock: .codeblock}
{:screen: .screen}
{:tip: .tip}
{:pre: .pre}
Multiple deployment environments are common when building a solution. They reflect the lifecycle of a project from development to production. This tutorial introduces tools like the {{site.data.keyword.Bluemix_notm}} CLI and Terraform to automate the creation and maintenance of these deployment environments.
{: shortdesc}
Developers do not like to write the same thing twice. The DRY principle is one example of this. Similarly, they don't like having to go through tons of clicks in a user interface to set up an environment. Consequently, shell scripts have long been used by system administrators and developers to automate repetitive, error-prone and uninteresting tasks.
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Containers as a Service (CaaS) and Functions as a Service (FaaS) have given developers higher levels of abstraction, making it easier to acquire resources like bare metal servers, managed databases, virtual machines or Kubernetes clusters. But once you have provisioned these resources, you need to connect them together, configure user access, update the configuration over time, and so on. Being able to automate all these steps and to repeat the installation and configuration across different environments is a must-have these days.
Multiple environments are pretty common in a project to support the different phases of the development cycle, with slight differences between the environments such as capacity, networking, credentials or log verbosity. In another tutorial, we introduced best practices to organize users, teams and applications, together with a sample scenario. That scenario considers three environments: Development, Testing and Production. How can the creation of these environments be automated? Which tools can be used?
## Objectives
{: #objectives}
- Define a set of environments to deploy
- Write scripts using the {{site.data.keyword.Bluemix_notm}} CLI and Terraform to automate the deployment of these environments
- Deploy these environments in your account
## Services used
{: #services}
This tutorial uses the following products:
- {{site.data.keyword.Bluemix_notm}} provider for Terraform
- {{site.data.keyword.containershort_notm}}
- Identity and Access Management
- {{site.data.keyword.Bluemix_notm}} command line interface - the `ibmcloud` CLI
- HashiCorp Terraform
This tutorial may incur costs. Use the Pricing Calculator to generate a cost estimate based on your projected usage.
## Architecture
{: #architecture}
- A set of Terraform files are created to describe the target infrastructure as code.
- An operator uses `terraform apply` to provision the environments.
- Shell scripts are written to complete the configuration of the environments.
- The operator runs the scripts against the environments.
- The environments are fully configured, ready to be used.
## Tools
{: #tools}
The first tool to interact with {{site.data.keyword.Bluemix_notm}} and to create repeatable deployments is the {{site.data.keyword.Bluemix_notm}} command line interface - the `ibmcloud` CLI. With `ibmcloud` and its plugins, you can automate the creation and configuration of your cloud resources. {{site.data.keyword.virtualmachinesshort}}, Kubernetes clusters, {{site.data.keyword.openwhisk_short}}, Cloud Foundry apps and services: you can provision all of them from the command line.
Another tool introduced in this tutorial is Terraform by HashiCorp. Quoting HashiCorp, Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. It is infrastructure as code. You write down what your infrastructure should look like and Terraform creates, updates, and removes cloud resources as needed.
To support a multi-cloud approach, Terraform works with providers. A provider is responsible for understanding API interactions and exposing resources. {{site.data.keyword.Bluemix_notm}} has its own provider for Terraform, enabling users of {{site.data.keyword.Bluemix_notm}} to manage resources with Terraform. Although Terraform is categorized as infrastructure as code, it is not limited to Infrastructure as a Service resources. The {{site.data.keyword.Bluemix_notm}} provider for Terraform supports IaaS (bare metal, virtual machine, network services, etc.), CaaS ({{site.data.keyword.containershort_notm}} and Kubernetes clusters), PaaS (Cloud Foundry and services) and FaaS ({{site.data.keyword.openwhisk_short}}) resources.
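Configuring the provider boils down to a small block in your Terraform files. The sketch below assumes the API key is passed through a variable named `ibmcloud_api_key` (the name used later in this tutorial for the credentials file); refer to the repository for the exact provider configuration it uses.

```hcl
# minimal sketch of an IBM Cloud provider configuration;
# the API key is supplied through a variable (see credentials.tfvars later in this tutorial)
variable "ibmcloud_api_key" {}

provider "ibm" {
  ibmcloud_api_key = "${var.ibmcloud_api_key}"
}
```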
## Write scripts to create the environments
{: #scripts}
As you start describing your infrastructure as code, it is critical to treat the files you create as regular code, and therefore to store them in a source control management system. Over time, this brings good properties such as using the source control review workflow to validate changes before applying them, or adding a continuous integration pipeline to automatically deploy infrastructure changes.
This Git repository has all the configuration files needed to set up the environments defined earlier. You can clone the repository to follow the next sections detailing the content of the files.
```sh
git clone https://github.com/IBM-Cloud/multiple-environments-as-code
```
{: codeblock}
The repository is structured as follows:
Directory | Description
---|---
`terraform` | Home for the Terraform files
`terraform/global` | Terraform files to provision resources common to the three environments
`terraform/per-environment` | Terraform files specific to a given environment
`terraform/roles` | Terraform files to configure user policies
The Development, Testing and Production environments look pretty much the same.
They share a common Cloud Foundry organization and each has its own environment-specific resources. They differ by the allocated capacity and the access rights. The Terraform files reflect this with a global configuration to provision the Cloud Foundry organization and a per-environment configuration, using Terraform workspaces, to provision the environment-specific resources:
All environments share a common Cloud Foundry organization and each environment has its own space.
Under the `terraform/global` directory, you find the Terraform scripts to provision this organization. `main.tf` contains the definition for the organization:
```hcl
# create a new organization for the project
resource "ibm_org" "organization" {
  name             = "${var.org_name}"
  managers         = "${var.org_managers}"
  users            = "${var.org_users}"
  auditors         = "${var.org_auditors}"
  billing_managers = "${var.org_billing_managers}"
}
```
In this resource, all properties are configured through variables. In the next sections, you will learn how to set these variables.
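For reference, these variables are declared alongside `main.tf` in a `variables.tf` file. A minimal sketch could look like the following; the exact declarations, descriptions and defaults live in the repository.

```hcl
# sketch of the variable declarations backing the organization resource
variable "org_name" {
  description = "Name of the Cloud Foundry organization to create"
}

variable "org_managers" {
  description = "Users to be granted the Manager role in the organization"
  type        = "list"
  default     = []
}

variable "org_users" {
  description = "Users to invite into the organization"
  type        = "list"
  default     = []
}
```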
To fully deploy the environments, you will use a mix of Terraform and the {{site.data.keyword.Bluemix_notm}} CLI. Shell scripts written with the CLI may need to reference this organization or the account by name or ID. The global directory also includes `outputs.tf`, which produces a file containing this information as key/value pairs suitable for reuse in scripts:
```hcl
# generate a property file suitable for shell scripts with useful variables relating to the environment
resource "local_file" "output" {
  content = <<EOF
ACCOUNT_GUID=${data.ibm_account.account.id}
ORG_GUID=${ibm_org.organization.id}
ORG_NAME=${var.org_name}
EOF

  filename = "../outputs/global.env"
}
```
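For the per-environment configuration to read these values through Terraform remote state (as shown later with `data.terraform_remote_state.global`), the global configuration also exposes them as regular Terraform outputs, along these lines:

```hcl
# outputs consumed by other configurations through terraform_remote_state
output "org_name" {
  value = "${var.org_name}"
}

output "org_guid" {
  value = "${ibm_org.organization.id}"
}

output "account_guid" {
  value = "${data.ibm_account.account.id}"
}
```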
There are different approaches to manage multiple environments with Terraform. You could duplicate the Terraform files under separate directories, one directory per environment. With Terraform modules you could factor common configuration as a group and reuse modules across environments - reducing the code duplication. Separate directories mean you can evolve the development environment to test changes and then propagate the changes to other environments. It is common in this case to also have the Terraform modules in their own source code repository so that you can reference a specific version of a module in your environment files.
Given the environments are rather simple and similar, you are going to use another Terraform concept called workspaces. Workspaces allow you to use the same Terraform files (.tf) with different environments. In this example, development, testing and production are workspaces. They use the same Terraform definitions but with different configuration variables (different names, different capacities).
Each environment requires:
- a dedicated Cloud Foundry space
- a dedicated resource group
- a Kubernetes cluster
- a database
- an object storage instance
The Cloud Foundry space is linked to the organization created in the previous step, so the environment Terraform files need to reference this organization. This is where Terraform remote state helps: it allows you to reference an existing Terraform state in read-only mode. This is a very useful construct to split your Terraform configuration into smaller pieces, leaving the responsibility of individual pieces to different teams. `backend.tf` contains the definition of the global remote state used to find the organization created earlier:
data "terraform_remote_state" "global" {
backend = "local"
config {
path = "${path.module}/../global/terraform.tfstate"
}
}
Once you can reference the organization, it is straightforward to create a space within this organization. `main.tf` contains the definition of the resources for the environment.
```hcl
# a Cloud Foundry space per environment
resource "ibm_space" "space" {
  name       = "${var.environment_name}"
  org        = "${data.terraform_remote_state.global.org_name}"
  managers   = "${var.space_managers}"
  auditors   = "${var.space_auditors}"
  developers = "${var.space_developers}"
}
```
Notice how the organization name is referenced from the global remote state. The other properties are taken from configuration variables.
Next comes the resource group.
```hcl
# a resource group
resource "ibm_resource_group" "group" {
  name     = "${var.environment_name}"
  quota_id = "${data.ibm_resource_quota.quota.id}"
}

data "ibm_resource_quota" "quota" {
  name = "${var.resource_quota}"
}
```
The Kubernetes cluster is created in this resource group. The {{site.data.keyword.Bluemix_notm}} provider has a Terraform resource to represent a cluster:
```hcl
# a cluster
resource "ibm_container_cluster" "cluster" {
  name              = "${var.environment_name}-cluster"
  datacenter        = "${var.cluster_datacenter}"
  org_guid          = "${data.terraform_remote_state.global.org_guid}"
  space_guid        = "${ibm_space.space.id}"
  account_guid      = "${data.terraform_remote_state.global.account_guid}"
  hardware          = "${var.cluster_hardware}"
  machine_type      = "${var.cluster_machine_type}"
  public_vlan_id    = "${var.cluster_public_vlan_id}"
  private_vlan_id   = "${var.cluster_private_vlan_id}"
  resource_group_id = "${ibm_resource_group.group.id}"
}

resource "ibm_container_worker_pool" "cluster_workerpool" {
  worker_pool_name  = "${var.environment_name}-pool"
  machine_type      = "${var.cluster_machine_type}"
  cluster           = "${ibm_container_cluster.cluster.id}"
  size_per_zone     = "${var.worker_num}"
  hardware          = "${var.cluster_hardware}"
  resource_group_id = "${ibm_resource_group.group.id}"
}

resource "ibm_container_worker_pool_zone_attachment" "cluster_zone" {
  cluster           = "${ibm_container_cluster.cluster.id}"
  worker_pool       = "${element(split("/", ibm_container_worker_pool.cluster_workerpool.id), 1)}"
  zone              = "${var.cluster_datacenter}"
  public_vlan_id    = "${var.cluster_public_vlan_id}"
  private_vlan_id   = "${var.cluster_private_vlan_id}"
  resource_group_id = "${ibm_resource_group.group.id}"
}
```
Again, most of the properties are initialized from configuration variables. You can adjust the datacenter, the number of workers, and the type of workers.
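As an example, a `development.tfvars` file could set these cluster-related variables as follows. The values below are placeholders only; pick a datacenter, machine type and VLANs that exist in your account.

```hcl
# example values only - adjust to your account and location
environment_name        = "development"
cluster_datacenter      = "dal10"
cluster_machine_type    = "b2c.4x16"
cluster_hardware        = "shared"
worker_num              = 1
cluster_public_vlan_id  = "1234567"
cluster_private_vlan_id = "7654321"
```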
IAM-enabled services like {{site.data.keyword.cos_full_notm}} and {{site.data.keyword.cloudant_short_notm}} are created as resources within the group too:
```hcl
# a database
resource "ibm_resource_instance" "database" {
  name              = "database"
  service           = "cloudantnosqldb"
  plan              = "${var.cloudantnosqldb_plan}"
  location          = "${var.cloudantnosqldb_location}"
  resource_group_id = "${ibm_resource_group.group.id}"
}

# a cloud object storage
resource "ibm_resource_instance" "objectstorage" {
  name              = "objectstorage"
  service           = "cloud-object-storage"
  plan              = "${var.cloudobjectstorage_plan}"
  location          = "${var.cloudobjectstorage_location}"
  resource_group_id = "${ibm_resource_group.group.id}"
}
```
Kubernetes bindings (secrets) can be added to retrieve the service credentials from your applications:
```hcl
# bind the cloudant service to the cluster
resource "ibm_container_bind_service" "bind_database" {
  cluster_name_id       = "${ibm_container_cluster.cluster.id}"
  service_instance_name = "${ibm_resource_instance.database.name}"
  namespace_id          = "default"
  account_guid          = "${data.terraform_remote_state.global.account_guid}"
  org_guid              = "${data.terraform_remote_state.global.org_guid}"
  space_guid            = "${ibm_space.space.id}"
  resource_group_id     = "${ibm_resource_group.group.id}"
}

# bind the cloud object storage service to the cluster
resource "ibm_container_bind_service" "bind_objectstorage" {
  cluster_name_id       = "${ibm_container_cluster.cluster.id}"
  service_instance_name = "${ibm_resource_instance.objectstorage.name}"
  namespace_id          = "default"
  account_guid          = "${data.terraform_remote_state.global.account_guid}"
  org_guid              = "${data.terraform_remote_state.global.org_guid}"
  space_guid            = "${ibm_space.space.id}"
  resource_group_id     = "${ibm_resource_group.group.id}"
}
```
- Follow these instructions to install the CLI.
- Validate the installation by running:
    ```sh
    ibmcloud
    ```
    {: codeblock}
- Download and install Terraform for your system.
- Download the Terraform binary for the {{site.data.keyword.Bluemix_notm}} provider. To set up Terraform with the {{site.data.keyword.Bluemix_notm}} provider, refer to this link. {:tip}
- Create a `.terraformrc` file in your home directory that points to the provider binary. In the following example, `/opt/provider/terraform-provider-ibm` is the path to the directory containing the binary.
    ```
    # ~/.terraformrc
    providers {
      ibm = "/opt/provider/terraform-provider-ibm"
    }
    ```
    {: codeblock}
- If you have not done it yet, clone the tutorial repository:
    ```sh
    git clone https://github.com/IBM-Cloud/multiple-environments-as-code
    ```
    {: codeblock}
- If you don't already have one, obtain a Platform API key and save the API key for future reference.
    If in later steps you plan on creating a new Cloud Foundry organization to host the deployment environments, make sure you are the owner of the account.
- Copy `terraform/credentials.tfvars.tmpl` to `terraform/credentials.tfvars`:
    ```sh
    cp terraform/credentials.tfvars.tmpl terraform/credentials.tfvars
    ```
    {: codeblock}
- Edit `terraform/credentials.tfvars` and set the value of `ibmcloud_api_key` to the Platform API key you obtained.
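The resulting file is a one-line Terraform variables file, along these lines (the value shown is a placeholder):

```hcl
# terraform/credentials.tfvars - replace the placeholder with your own key
ibmcloud_api_key = "<your Platform API key>"
```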
You can choose either to create a new organization or to reuse (import) an existing one. To create the parent organization of the three deployment environments, you need to be the account owner.
- Change to the `terraform/global` directory.
- Copy `global.tfvars.tmpl` to `global.tfvars`:
    ```sh
    cp global.tfvars.tmpl global.tfvars
    ```
    {: codeblock}
- Edit `global.tfvars`:
    - Set `org_name` to the name of the organization to create
    - Set `org_managers` to a list of user IDs you want to grant the Manager role in the org - the user creating the org is automatically a manager and should not be added to the list
    - Set `org_users` to a list of all users you want to invite into the org - users need to be added there if you want to configure their access in further steps
    ```hcl
    org_name = "a-new-organization"
    org_managers = [ "user1@domain.com", "another-user@anotherdomain.com" ]
    org_users = [ "user1@domain.com", "another-user@anotherdomain.com", "more-user@domain.com" ]
    ```
    {: codeblock}
- Initialize Terraform from the `terraform/global` folder:
    ```sh
    terraform init
    ```
    {: codeblock}
- Look at the Terraform plan:
    ```sh
    terraform plan -var-file=../credentials.tfvars -var-file=global.tfvars
    ```
    {: codeblock}
- Apply the changes:
    ```sh
    terraform apply -var-file=../credentials.tfvars -var-file=global.tfvars
    ```
    {: codeblock}
Once Terraform completes, it will have created:
- a new Cloud Foundry organization
- a `global.env` file under the `outputs` directory in your checkout. This file has environment variables you could reference in other scripts
- the `terraform.tfstate` file
This tutorial uses the `local` backend provider for Terraform state. This is handy when discovering Terraform or working alone on a project. When working in a team, or on larger infrastructure, Terraform also supports saving the state to a remote location. Given that the Terraform state is critical to Terraform operations, it is recommended to use remote, highly available, resilient storage for the state. Refer to Terraform Backend Types for a list of available options. Some backends even support versioning and locking of Terraform states.
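As an illustration only (this is not part of the tutorial's repository), a remote state stored in an S3-compatible bucket, such as one provided by {{site.data.keyword.cos_full_notm}}, could be configured roughly as follows. The bucket, key and endpoint are placeholders, and the access credentials are expected in environment variables.

```hcl
# sketch of a remote backend using an S3-compatible bucket (placeholder values)
terraform {
  backend "s3" {
    bucket                      = "my-terraform-states"
    key                         = "global.tfstate"
    region                      = "us-east-1"
    endpoint                    = "s3.us.cloud-object-storage.appdomain.cloud"
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
```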
If you are not the account owner but you manage an organization in the account, you can also import an existing organization into Terraform.
- Retrieve the organization GUID:
    ```sh
    ibmcloud iam org <org_name> --guid
    ```
    {: codeblock}
- Change to the `terraform/global` directory.
- Copy `global.tfvars.tmpl` to `global.tfvars`:
    ```sh
    cp global.tfvars.tmpl global.tfvars
    ```
    {: codeblock}
- Initialize Terraform:
    ```sh
    terraform init
    ```
    {: codeblock}
- After initializing Terraform, import the organization into the Terraform state:
    ```sh
    terraform import -var-file=../credentials.tfvars -var-file=global.tfvars ibm_org.organization <guid>
    ```
    {: codeblock}
- Tune `global.tfvars` to match the existing organization name and structure.
- Apply the changes:
    ```sh
    terraform apply -var-file=../credentials.tfvars -var-file=global.tfvars
    ```
    {: codeblock}
This section will focus on the `development` environment. The steps will be the same for the other environments; only the values you pick for the variables will differ.
- Change to the `terraform/per-environment` folder of the checkout.
- Copy the template `tfvars` file. There is one per environment:
    ```sh
    cp development.tfvars.tmpl development.tfvars
    cp testing.tfvars.tmpl testing.tfvars
    cp production.tfvars.tmpl production.tfvars
    ```
    {: codeblock}
- Edit `development.tfvars`:
    - Set `environment_name` to the name of the Cloud Foundry space you want to create.
    - Set `space_developers` to the list of developers for this space. Make sure to add your name to the list so that Terraform can provision services on your behalf.
    - Set `cluster_datacenter` to the location where you want to create the cluster. Find the available locations with:
        ```sh
        ibmcloud cs locations
        ```
        {: codeblock}
    - Set the private (`cluster_private_vlan_id`) and public (`cluster_public_vlan_id`) VLANs for the cluster. Find the available VLANs for the location with:
        ```sh
        ibmcloud cs vlans <location>
        ```
        {: codeblock}
    - Set the `cluster_machine_type`. Find the available machine types and characteristics for the location with:
        ```sh
        ibmcloud cs machine-types <location>
        ```
        {: codeblock}
    - Set the `resource_quota`. Find the available resource quota definitions with:
        ```sh
        ibmcloud resource quotas
        ```
        {: codeblock}
- Initialize Terraform:
    ```sh
    terraform init
    ```
    {: codeblock}
- Create a new Terraform workspace for the development environment:
    ```sh
    terraform workspace new development
    ```
    {: codeblock}
    Later, to switch between environments, use:
    ```sh
    terraform workspace select development
    ```
    {: codeblock}
- Look at the Terraform plan:
    ```sh
    terraform plan -var-file=../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
    It should report:
    ```
    Plan: 7 to add, 0 to change, 0 to destroy.
    ```
    {: codeblock}
- Apply the changes:
    ```sh
    terraform apply -var-file=../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
Once Terraform completes, it will have created:
- a resource group
- a Cloud Foundry space
- a Kubernetes cluster with a worker pool and a zone attached to it
- a database
- a Kubernetes secret with the database credentials
- an object storage instance
- a Kubernetes secret with the storage credentials
- a `development.env` file under the `outputs` directory in your checkout. This file has environment variables you could reference in other scripts
- the environment-specific `terraform.tfstate` under `terraform.tfstate.d/development`
You can repeat the steps for the `testing` and `production` environments.
In the previous steps, roles in the Cloud Foundry organization and spaces could be configured with the Terraform provider. For user policies on other resources like the Kubernetes clusters, you will use the `roles` folder in the cloned repository.
For the Development environment as defined in this tutorial, the policies to define are:
Role | IAM Access policies
---|---
Developer | 
Tester | 
Operator | 
Pipeline Functional User | 
Given a team may be composed of several developers and testers, you can leverage the access group concept to simplify the configuration of user policies. Access groups can be created by the account owner so that the same access can be assigned to all entities within the group with a single policy.
For the Developer role in the Development environment, this translates to:
resource "ibm_iam_access_group" "developer_role" {
name = "${var.access_group_name_developer_role}"
description = "${var.access_group_description}"
}
resource "ibm_iam_access_group_policy" "resourcepolicy_developer" {
access_group_id = "${ibm_iam_access_group.developer_role.id}"
roles = ["Viewer"]
resources = [{
resource_type = "resource-group"
resource = "${data.terraform_remote_state.per_environment_dev.resource_group_id}"
}]
}
resource "ibm_iam_access_group_policy" "developer_monitoring_policy" {
access_group_id = "${ibm_iam_access_group.developer_role.id}"
roles = ["Administrator","Editor","Viewer"]
resources = [{
service = "monitoring"
resource_group_id = "${data.terraform_remote_state.per_environment_dev.resource_group_id}"
}]
}
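The access group itself only carries the role; users still need to be added as members. The repository populates the group from the `iam_access_members_developers` variable you will set below, roughly like the following sketch (see `roles/development/main.tf` for the exact resources):

```hcl
# add the users listed in the tfvars file to the access group
resource "ibm_iam_access_group_members" "developers" {
  access_group_id = "${ibm_iam_access_group.developer_role.id}"
  ibm_ids         = "${var.iam_access_members_developers}"
}
```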
The `roles/development/main.tf` file of the checkout has examples of these resources for the Developer, Operator, Tester, and Pipeline Functional User roles. To set the policies as defined above for users with these roles in the development environment:
- Change to the `terraform/roles/development` directory.
- Copy the template `tfvars` file. There is one per environment (you can find the `production` and `testing` templates under their respective folders in the `roles` directory):
    ```sh
    cp development.tfvars.tmpl development.tfvars
    ```
    {: codeblock}
- Edit `development.tfvars`:
    - Set `iam_access_members_developers` to the list of developers to whom you would like to grant access.
    - Set `iam_access_members_operators` to the list of operators, and so on.
- Initialize Terraform:
    ```sh
    terraform init
    ```
    {: codeblock}
- Look at the Terraform plan:
    ```sh
    terraform plan -var-file=../../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
    It should report:
    ```
    Plan: 14 to add, 0 to change, 0 to destroy.
    ```
    {: codeblock}
- Apply the changes:
    ```sh
    terraform apply -var-file=../../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
You can repeat the steps for the `testing` and `production` environments.
- Navigate to the `development` folder under `roles`:
    ```sh
    cd terraform/roles/development
    ```
    {: codeblock}
- Destroy the access groups and access policies:
    ```sh
    terraform destroy -var-file=../../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
- Activate the `development` workspace:
    ```sh
    cd terraform/per-environment
    terraform workspace select development
    ```
    {: codeblock}
- Destroy the resource group, spaces, services, and clusters:
    ```sh
    terraform destroy -var-file=../credentials.tfvars -var-file=development.tfvars
    ```
    {: codeblock}
- Repeat the steps for the `testing` and `production` workspaces.
- If you created it, destroy the organization:
    ```sh
    cd terraform/global
    terraform destroy -var-file=../credentials.tfvars -var-file=global.tfvars
    ```
    {: codeblock}