Shift on Stack DT #1744

Merged
144 changes: 144 additions & 0 deletions scenarios/reproducers/dt-osasinfra.yml
@@ -0,0 +1,144 @@
---
cifmw_architecture_scenario: osasinfra
cifmw_arch_automation_file: osasinfra.yaml

# Automation section. Most of these parameters are passed to
# controller-0 as-is and consumed by the `deploy-va.sh` script.
# Note that all paths are on controller-0 and are managed by the
# framework. Please do not edit them!
_arch_repo: "/home/zuul/src/github.com/openstack-k8s-operators/architecture"
cifmw_ceph_client_vars: /tmp/ceph_client.yml
cifmw_ceph_client_values_post_ceph_path_src: >-
{{ _arch_repo }}/examples/dt/osasinfra/values.yaml
cifmw_ceph_client_values_post_ceph_path_dst: >-
{{ cifmw_ceph_client_values_post_ceph_path_src }}
cifmw_ceph_client_service_values_post_ceph_path_src: >-
{{ _arch_repo }}/examples/dt/osasinfra/service-values.yaml
cifmw_ceph_client_service_values_post_ceph_path_dst: >-
{{ cifmw_ceph_client_service_values_post_ceph_path_src }}


# workaround https://issues.redhat.com/browse/OSPRH-6675
cifmw_ceph_spec_public_network: "{{ cifmw_networking_definition.networks.ctlplane.network }}"

# HERE, if you want to override the kustomization, you can uncomment this
# parameter and provide the data structure you want to apply.
# cifmw_architecture_user_kustomize:
# stage_0:
# 'network-values':
# data:
# starwars: Obiwan

# HERE, if you want to stop the deployment loop at any stage, you can uncomment
# the following parameter and update the value to match the stage you want to
# reach. Known stages are:
# pre_kustomize_stage_INDEX
# pre_apply_stage_INDEX
# post_apply_stage_INDEX
#
# cifmw_deploy_architecture_stopper:
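As an illustration of the stopper described above (the stage index "2" here is a hypothetical value, not part of this scenario), halting the deployment loop right after the second stage has been applied would look like:

```yaml
# Hypothetical example: stop the deployment loop once the manifests
# for stage 2 have been applied.
cifmw_deploy_architecture_stopper: post_apply_stage_2
```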

cifmw_allow_vms_to_reach_osp_api: true
cifmw_networking_mapper_definition_patches_01:
networks:
tenant:
mtu: 1500

# HCI requires a bigger disk size to hold the OCP-on-OSP disks
cifmw_block_device_size: 100G
cifmw_libvirt_manager_compute_disksize: 200
cifmw_libvirt_manager_compute_memory: 50
cifmw_libvirt_manager_compute_cpus: 8

cifmw_libvirt_manager_configuration:
networks:
osp_trunk: |
<network>
<name>osp_trunk</name>
<forward mode='nat'/>
<bridge name='osp_trunk' stp='on' delay='0'/>
<dns enable="no"/>
<ip family='ipv4'
address='{{ cifmw_networking_definition.networks.ctlplane.network |
ansible.utils.nthhost(1) }}'
prefix='{{ cifmw_networking_definition.networks.ctlplane.network |
ansible.utils.ipaddr('prefix') }}'>
</ip>
</network>
ocpbm: |
<network>
<name>ocpbm</name>
<forward mode='nat'/>
<bridge name='ocpbm' stp='on' delay='0'/>
<dns enable="no"/>
<ip family='ipv4' address='192.168.111.1' prefix='24'>
</ip>
</network>
ocppr: |
<network>
<name>ocppr</name>
<forward mode='bridge'/>
<bridge name='ocppr'/>
</network>
vms:
ocp:
amount: 3
admin_user: core
image_local_dir: "{{ cifmw_basedir }}/images/"
disk_file_name: "ocp_master"
disksize: "100"
cpus: 16
memory: 32
root_part_id: 4
uefi: true
nets:
- ocppr
- ocpbm
- osp_trunk
compute:
uefi: "{{ cifmw_use_uefi }}"
root_part_id: "{{ cifmw_root_partition_id }}"
amount: "{{ [cifmw_libvirt_manager_compute_amount|int, 3] | max }}"
image_url: "{{ cifmw_discovered_image_url }}"
sha256_image_name: "{{ cifmw_discovered_hash }}"
image_local_dir: "{{ cifmw_basedir }}/images/"
disk_file_name: "base-os.qcow2"
disksize: "{{ [cifmw_libvirt_manager_compute_disksize|int, 50] | max }}"
memory: "{{ [cifmw_libvirt_manager_compute_memory|int, 8] | max }}"
cpus: "{{ [cifmw_libvirt_manager_compute_cpus|int, 4] | max }}"
nets:
- ocpbm
- osp_trunk
controller:
uefi: "{{ cifmw_use_uefi }}"
root_part_id: "{{ cifmw_root_partition_id }}"
image_url: "{{ cifmw_discovered_image_url }}"
sha256_image_name: "{{ cifmw_discovered_hash }}"
image_local_dir: "{{ cifmw_basedir }}/images/"
disk_file_name: "base-os.qcow2"
disksize: 50
memory: 8
cpus: 4
nets:
- ocpbm
- osp_trunk

## devscript support for OCP deploy
cifmw_devscripts_config_overrides:
fips_mode: "{{ cifmw_fips_enabled | default(false) | bool }}"
Comment on lines +126 to +128

Contributor:

I know certain things were moved out of cifmw_devscripts_config_overrides during #1777. Is fips_mode still valid here?

Contributor Author:

Not sure if it's needed, or has any effect. I copied it from the va-hci.yml reproducer.

Contributor:

@cjeanner Do you know if fips_mode is still a valid option in cifmw_devscripts_config_overrides?

Contributor:

It is, yes.


# Note: with the extra_network_names entry "osp_trunk", we instruct the
# devscripts role to create a new network and associate it with the
# OCP nodes. This one is a "private network" and will hold the
# VLANs used for network isolation.

# Please create a custom env file to provide:
# cifmw_devscripts_ci_token:
# cifmw_devscripts_pull_secret:

# Test Ceph file and object storage (block is enabled by default)
cifmw_ceph_daemons_layout:
rgw_enabled: true
dashboard_enabled: false
cephfs_enabled: true
ceph_nfs_enabled: true
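The compute VM sizing above repeatedly uses the Jinja2 expression `[value|int, floor] | max` to clamp user-supplied values to a minimum floor (e.g. at least 3 compute nodes, at least 50G of disk). A minimal Python sketch of the same logic (the function name is ours, not part of the framework):

```python
def clamp_floor(value, floor):
    """Mirror the Jinja2 pattern "[value|int, floor] | max":
    coerce the value to int, then never go below the floor."""
    return max(int(value), floor)

# Same floors as the compute VM definition above.
print(clamp_floor("200", 50))  # disksize: user value wins -> 200
print(clamp_floor("2", 3))     # amount: floor wins -> 3
```

This keeps an override such as `cifmw_libvirt_manager_compute_amount: 2` from producing a cluster too small for the scenario, while still honoring larger values.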
10 changes: 0 additions & 10 deletions scenarios/reproducers/shift-on-stack-overrides.yml

This file was deleted.