All fixes to make hashistack up and running in 0.9.58 #165
Upgrade hashistack from version 0.9.33 to version 0.9.58.
We have tested the install in:
on different OSes:
Adding "common_vars" roles
We faced errors like `undefined variable` several times, so to counter this we have added the `common_vars` role in these files:
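As a minimal sketch (play layout and role order are assumptions), the role can be imported at the top of each affected play so its defaults are loaded first:

```yaml
# Hypothetical excerpt: load common_vars before the other roles so its
# defaults are available to every subsequent task.
- hosts: all
  roles:
    - role: common_vars
    - role: stage1_pip
```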
## Upgrade Scaleway provider version
We need to upgrade the Scaleway provider from version 2.37.0 to version 2.42.1 (latest) to be able to deploy infrastructure on Scaleway.
## Multi-node deployment by default
It seems the default deployment is multi-node, so we need to add the `hs_stage0_archi` variable in the `playbooks/init.yml` file.
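A minimal sketch of the addition; the `mono` value is an assumption about how a single-node architecture is selected:

```yaml
# Hypothetical excerpt from playbooks/init.yml; the value is an assumption.
- hosts: localhost
  vars:
    hs_stage0_archi: mono
```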
## Add duplicity python package
We got errors like `duplicity python library not found`. We need to add `duplicity` in the `roles/stage1_pip/defaults/main.yml` file.
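A sketch of the defaults entry; the surrounding list variable name is an assumption:

```yaml
# Hypothetical excerpt from roles/stage1_pip/defaults/main.yml; the
# stage1_pip_packages list name is an assumption.
stage1_pip_packages:
  - duplicity
```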
## Variables renaming
We ran into undefined variables several times due to renaming:
- `public_ipv4` is now `edge_public_ipv4`
- `hs_nsupdate_host` is now `acme_nsupdate_host`
## Add missing paths
We got errors like `directory does not exist` during the execution of the init playbook. To fix this, we have added these missing paths in `playbooks/init.yml` (see the sketch after the list):
- `{{ _output_dir }}/host_vars`
- `{{ _output_dir }}/host_vars/{{ hs_workspace }}-{{ hs_archi }}`
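A minimal sketch of such a task, using the paths above verbatim (task name and mode are illustrative):

```yaml
- name: Ensure host_vars output directories exist  # hypothetical task name
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - "{{ _output_dir }}/host_vars"
    - "{{ _output_dir }}/host_vars/{{ hs_workspace }}-{{ hs_archi }}"
```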
## Add creation of Nomad volume directories
We got errors like `directory does not exist` during the Nomad installation, so we added the missing task to create the required volume directories in `roles/nomad/tasks/common/_install.yml`.
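A sketch of what such a task could look like; the `nomad_host_volumes` variable and the ownership are assumptions:

```yaml
# Hypothetical sketch for roles/nomad/tasks/common/_install.yml.
- name: Create Nomad host volume directories
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: directory
    owner: nomad
    group: nomad
    mode: "0755"
  loop: "{{ nomad_host_volumes | default([]) }}"
```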
## Define missing variable
Define the missing variable `hs_infra_default_user` in `roles/common_vars/defaults/main.yml` to avoid an undefined `hs_archi` variable in `playbooks/init.yml`.
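A sketch of the defaults entry; the default value is an assumption:

```yaml
# Hypothetical excerpt from roles/common_vars/defaults/main.yml.
hs_infra_default_user: admin
```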
## Remove online tag on prepare step
We need to delete the `online` tag on a few `prepare` steps to be able to execute the prepare phase even in offline mode (see the sketch after the list), in:
- `roles/vault/tasks/debian/main.yml`
- `roles/consul/tasks/debian/main.yml`
- `roles/nomad/tasks/debian/main.yml`
- `roles/envoy/tasks/debian/main.yml`
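A hypothetical before/after for one of those steps; task and file names are illustrative:

```yaml
# Before: the step was skipped in offline mode because of the online tag.
- name: Prepare Vault
  ansible.builtin.import_tasks: _prepare.yml
  tags: [prepare, online]

# After: keeping only the prepare tag lets the step run offline as well.
- name: Prepare Vault
  ansible.builtin.import_tasks: _prepare.yml
  tags: [prepare]
```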
## Split Nomad installation
To be able to download Docker images during the online step and to execute Nomad group and user creation during the offline step, we need to split the installation into two different steps.
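A sketch of the split; the file names and the `offline` tag are assumptions:

```yaml
# Online step: pull Docker images while network access is available.
- name: Download Nomad job Docker images
  ansible.builtin.import_tasks: _download.yml
  tags: [online]

# Offline step: group and user creation needs no network access.
- name: Create nomad group and user
  ansible.builtin.import_tasks: _install.yml
  tags: [offline]
```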
## Add tf_action variable
In `roles/grafana/vars/main.yml`.
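A sketch of the entry; the value is an assumption (the Terraform action the role should run):

```yaml
# Hypothetical excerpt from roles/grafana/vars/main.yml.
tf_action: apply
```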
## Fix load dashboards
In `roles/grafana/files/tf_setup/files/dashboards/consul.json`.
## Change grafana labs roles to support RedHat family OS
Also change the installation method from package manager to binary; a hedged sketch follows.
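A sketch of a binary install; the URL pattern, version variable, and destination are assumptions:

```yaml
- name: Download Grafana binary archive
  ansible.builtin.get_url:
    url: "https://dl.grafana.com/oss/release/grafana-{{ grafana_version }}.linux-amd64.tar.gz"
    dest: /tmp/grafana.tar.gz
    mode: "0644"

- name: Unpack Grafana under /opt
  ansible.builtin.unarchive:
    src: /tmp/grafana.tar.gz
    dest: /opt
    remote_src: true
```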
## Add tasks to support RedHat family
- Install the `librsync-devel` package in: `roles/stage1_pip/tasks/redhat/_prepare.yml`
- Uninstall `chardet` if installed via RPM and install Python packages in: `roles/stage1_pip/tasks/redhat/_install.yml`
- Load the `br_netfilter` module for Nomad network configuration and ensure Docker is started and enabled (see the sketch after this list) in: `roles/nomad/tasks/redhat/main.yml`, `roles/nomad/tasks/common/_configure.yml`
- Install `skopeo` in: `roles/nomad/tasks/common/_prepare.yml`
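A sketch of the `br_netfilter` and Docker tasks described above; the exact task placement is an assumption:

```yaml
- name: Load br_netfilter for Nomad bridge networking
  community.general.modprobe:
    name: br_netfilter
    state: present

- name: Ensure Docker is started and enabled
  ansible.builtin.service:
    name: docker
    state: started
    enabled: true
```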
## Add local RPM repo
For the RHEL offline installation, we added Ansible tasks in `playbooks/00_offline_prepare.yml` to create a local RPM repository, so that all required packages and their dependencies can be installed.
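A sketch of such tasks; the package list variable, paths, and the use of `createrepo_c` are assumptions:

```yaml
- name: Download required packages with their dependencies
  ansible.builtin.command:
    cmd: "dnf download --resolve --destdir /opt/local-repo {{ offline_rpm_packages | join(' ') }}"

- name: Build the repository metadata
  ansible.builtin.command:
    cmd: createrepo_c /opt/local-repo

- name: Register the local repository
  ansible.builtin.yum_repository:
    name: local-offline
    description: Local offline RPM repository
    baseurl: file:///opt/local-repo
    gpgcheck: false
```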