Review this curated collection of dispatch workflows.
Take this path when you want to get up and running as quickly as possible with the least amount of fuss.
| Action | Link |
|---|---|
| Create workflows | Choose create before clicking on the Run workflow button |
| Remote Backend Support | ✅ |
| Toolset image | ✅ |
| Record the resource group name you specify in this action; you will need it in later steps. | |
| DNS Zone for base domain | ✅ |
| DNS Zone for sub domain | ✅ |
| Create workshop environment | ✅ |
| Cleanup workflows | Choose destroy before clicking on the Run workflow button |
| Destroy workshop environment | ✅ |
| This action should be run with the same inputs used to create an environment. In a multi-tenant setup, run it once for each tenant. There is also an option to clean up core components; it defaults to no. Choose yes only if you are destroying all tenant environments, since this will destroy the main DNS resource group as well as the Shared Image Gallery. | |
| DNS Zone for sub domain | ✅ |
| DNS Zone for base domain | ✅ |
| Clean Workflow Logs | ✅ |
Administer resources one at a time.
There are two types of actions defined, those that can be manually triggered (i.e., dispatched), and those that can only be called by another action. All actions are located here and can be run by providing the required parameters. Go here to inspect the source for each action.
Note that for most dispatch actions, you have the option to either create or destroy the resources.
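Dispatchable workflows can also be triggered from the command line with the GitHub CLI instead of the web UI. A minimal sketch, assuming illustrative names: the workflow file name and the `action` input below are placeholders, so check each workflow's `workflow_dispatch` inputs for the real ones.

```shell
# Dispatch a workflow, passing its inputs with -f.
# "create-workshop-environment.yml" and the "action" input are
# illustrative names, not confirmed by this repository.
gh workflow run create-workshop-environment.yml -f action=create

# Follow the run that was just started.
gh run watch
```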
| Module | Github Action | Terraform |
|---|---|---|
| Resource group | ✅ | ✅ |
| If your environment will be multi-tenant and you want to maintain separation of control between participants and their associated child DNS domains, create a resource group for the parent DNS zone separately from the rest of the resources. | | |
| Key Vault | ✅ | ✅ |
| Key Vault Secrets | ✅ | ✅ |
| DNS Zone for main domain | ✅ | ✅ |
| DNS Zone for sub domain | ✅ | ✅ |
| Virtual Network | ✅ | ✅ |
| AKS Cluster | ✅ | ✅ |
| Container registry | ✅ | ✅ |
| Bastion | ✅ | ✅ |
All credentials are stored in Azure Key Vault. There is a Key Vault per resource group (participant), holding the credentials specific to the resources created for that participant (see architecture diagram).

Only one credential needs to be pulled down to get started: the private SSH key for the bastion host. All other credentials are accessible from the bastion host. If you are the workshop owner and working in a multi-tenant environment, you will need to hand this credential out to each participant. From there, each participant will be able to access everything they need from the bastion host.
First, log into Azure using the service principal you created earlier.
```shell
az login --service-principal -u <clientID> -p <clientSecret> --tenant <tenantId>
```
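Once logged in, individual credentials can also be read straight out of a participant's Key Vault with the Azure CLI. A sketch only: the vault and secret names below are placeholders, not the actual names used by these workflows, so substitute the ones from your environment.

```shell
# List the secrets held in a participant's Key Vault, then read one back.
# "participant-1-vault" and "bastion-ssh-private-key" are illustrative names.
az keyvault secret list --vault-name participant-1-vault --query "[].name" -o tsv
az keyvault secret show --vault-name participant-1-vault \
  --name bastion-ssh-private-key --query value -o tsv
```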
Then, run the script to pull down all private keys along with the IP address of the bastion each is associated with. This will create a folder `workshop-sshkeys`, loop over each resource group that matches `participant-x`, then fetch the bastion IP and the SSH key from vault and write a file into the directory with the SSH key as its content and the IP in its name (e.g., `participant-x-bastion.172.16.78.9.pem`).
```shell
cd /tmp
gh repo clone clicktruck/scripts
./scripts/azure/fetch-azure-ssh-key.sh
```
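Because each fetched key file embeds the bastion IP in its name, the SSH command can be derived directly from the filename. A minimal sketch, assuming the naming convention above; the `ubuntu` login name is an assumption, not confirmed by these scripts.

```shell
# Derive the bastion IP from a fetched key filename such as
# workshop-sshkeys/participant-x-bastion.172.16.78.9.pem,
# then print the ssh invocation you would run.
key="workshop-sshkeys/participant-x-bastion.172.16.78.9.pem"
name="${key##*/}"       # drop the directory prefix
ip="${name#*bastion.}"  # drop everything through "bastion."
ip="${ip%.pem}"         # drop the ".pem" suffix
echo "ssh -i ${key} ubuntu@${ip}"
```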
Once you SSH to the VM, you will find credentials for the ACR registry in the home directory in files called `acr-user` and `acr-password`. A kubeconfig has also been placed in the home directory and added under `~/.kube/config`.
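With those files in place, logging in to the registry from the bastion can look like the following sketch. The registry hostname is a placeholder; substitute the one created for your environment.

```shell
# Log in to ACR using the credential files dropped in the home directory.
# <registry> is a placeholder; --password-stdin keeps the password out of
# the command line and shell history.
docker login <registry>.azurecr.io \
  --username "$(cat "$HOME/acr-user")" \
  --password-stdin < "$HOME/acr-password"
```

Since the kubeconfig is already registered under `~/.kube/config`, `kubectl` should work from the bastion without further setup.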