diff --git a/CHANGELOG.md b/CHANGELOG.md
index a50429e32..bd19b8418 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -226,7 +226,7 @@
**Features**
-**Foundation Cetral**:
+**Foundation Central**:
- Ansible module for Foundation Central
- Ansible module for API Keys to authenticate with FC
- Ansible info module for API Keys
@@ -332,6 +332,6 @@
- solve python 2.7 issues [\#41](https://github.com/nutanix/nutanix.ansible/pull/41)
- device index calculation fixes, updates for get by name functionality[\#254](https://github.com/nutanix/nutanix.ansible/pull/42)
- Client SDK with inventory [\#45](https://github.com/nutanix/nutanix.ansible/pull/45)
-- Fix error messages for get_uuid() reponse [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
+- Fix error messages for get_uuid() response [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
**Full Changelog**: [here](https://github.com/nutanix/nutanix.ansible/commits/v1.0.0-beta.1)
diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index d34f34be6..2ce41f80e 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -100,7 +100,7 @@ New Modules
- ntnx_ndb_db_servers_info - info module for ndb db server vms info
- ntnx_ndb_linked_databases - module to manage linked databases of a database instance
- ntnx_ndb_maintenance_tasks - module to add and remove maintenance related tasks
-- ntnx_ndb_maintenance_window - module to create, update and delete mainetance window
+- ntnx_ndb_maintenance_window - module to create, update and delete maintenance window
- ntnx_ndb_maintenance_windows_info - module for fetching maintenance windows info
- ntnx_ndb_profiles - module for create, update and delete of profiles
- ntnx_ndb_profiles_info - info module for ndb profiles
@@ -173,7 +173,7 @@ Bugfixes
New Modules
-----------
-- ntnx_acps - acp module which suports acp Create, update and delete operations
+- ntnx_acps - acp module which supports acp Create, update and delete operations
- ntnx_acps_info - acp info module
- ntnx_address_groups - module which supports address groups CRUD operations
- ntnx_address_groups_info - address groups info module
@@ -186,7 +186,7 @@ New Modules
- ntnx_projects_info - projects info module
- ntnx_roles - module which supports role CRUD operations
- ntnx_roles_info - role info module
-- ntnx_service_groups - service_groups module which suports service_groups CRUD operations
+- ntnx_service_groups - service_groups module which supports service_groups CRUD operations
- ntnx_service_groups_info - service_group info module
- ntnx_user_groups - user_groups module which supports pc user_groups management create delete operations
- ntnx_user_groups_info - User Groups info module
@@ -203,7 +203,7 @@ New Modules
- ntnx_image_placement_policy - image placement policy module which supports Create, update and delete operations
- ntnx_images - images module which supports pc images management CRUD operations
- ntnx_images_info - images info module
-- ntnx_security_rules - security_rule module which suports security_rule CRUD operations
+- ntnx_security_rules - security_rule module which supports security_rule CRUD operations
- ntnx_security_rules_info - security_rule info module
- ntnx_static_routes - vpc static routes
- ntnx_static_routes_info - vpc static routes info module
@@ -243,8 +243,8 @@ New Modules
- ntnx_foundation_central - Nutanix module to image nodes and optionally create a cluster
- ntnx_foundation_central_api_keys - Nutanix module which creates api key for foundation central
- ntnx_foundation_central_api_keys_info - Nutanix module which returns the api key
-- ntnx_foundation_central_imaged_clusters_info - Nutanix module which returns the imaged clusters within the Foudation Central
-- ntnx_foundation_central_imaged_nodes_info - Nutanix module which returns the imaged nodes within the Foudation Central
+- ntnx_foundation_central_imaged_clusters_info - Nutanix module which returns the imaged clusters within the Foundation Central
+- ntnx_foundation_central_imaged_nodes_info - Nutanix module which returns the imaged nodes within the Foundation Central
- ntnx_foundation_discover_nodes_info - Nutanix module which returns nodes discovered by Foundation
- ntnx_foundation_hypervisor_images_info - Nutanix module which returns the hypervisor images uploaded to Foundation
- ntnx_foundation_image_upload - Nutanix module which uploads hypervisor or AOS image to foundation vm.
@@ -274,7 +274,7 @@ Bugfixes
- Bug/cluster UUID issue68 [\#72](https://github.com/nutanix/nutanix.ansible/pull/72)
- Client SDK with inventory [\#45](https://github.com/nutanix/nutanix.ansible/pull/45)
- Creating a VM based on a disk_image without specifying the size_gb
-- Fix error messages for get_uuid() reponse [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
+- Fix error messages for get_uuid() response [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
- Fix/integ [\#96](https://github.com/nutanix/nutanix.ansible/pull/96)
- Sanity and python fix [\#46](https://github.com/nutanix/nutanix.ansible/pull/46)
- Task/fix failing sanity [\#117](https://github.com/nutanix/nutanix.ansible/pull/117)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b2822bfb5..b76ea1545 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -83,7 +83,7 @@
## Workflow
1. Create a github issue with following details
- * **Title** should contain one of the follwoing
+ * **Title** should contain one of the following
- [Feat] Develop ansible module for \
- [Imprv] Modify ansible module to support \
- [Bug] Fix \ bug in \
@@ -106,7 +106,7 @@
* `imprv/issue#`
* `bug/issue#`
-3. Develop `sanity`, `unit` and `integrtaion` tests.
+3. Develop `sanity`, `unit` and `integration` tests.
4. Create a [pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
diff --git a/README.md b/README.md
index d4d167d41..b8dfb5e49 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,13 @@
# Nutanix Ansible
+
Official nutanix ansible collection
# About
+
Nutanix ansible collection nutanix.ncp is the official Nutanix ansible collection to automate Nutanix Cloud Platform (ncp).
It is designed keeping simplicity as the core value. Hence it is
+
1. Easy to use
2. Easy to develop
@@ -17,12 +20,15 @@ Ansible Nutanix Provider leverages the community-supported model. See [Open Sour
# Version compatibility
## Ansible
+
This collection requires ansible-core>=2.15.0
## Python
+
This collection requires Python 3.9 or greater
## Prism Central
+
> For the 1.1.0 release of the ansible plugin it will have N-2 compatibility with the Prism Central APIs. This release was tested against Prism Central versions pc2022.1.0.2, pc.2021.9.0.5 and pc.2021.8.0.1.
> For the 1.2.0 release of the ansible plugin it will have N-2 compatibility with the Prism Central APIs. This release was tested against Prism Central versions pc.2022.4, pc2022.1.0.2 and pc.2021.9.0.5.
@@ -43,19 +49,20 @@ This collection requires Python 3.9 or greater
> For the 1.9.2 release of the ansible plugin it will have N-1 compatibility with the Prism Central APIs. This release was sanity tested against Prism Central version pc.2024.1 .
-
### Notes:
+
1. Static routes module (ntnx_static_routes) is supported for PC versions >= pc.2022.1
2. Adding cluster references in projects module (ntnx_projects) is supported for PC versions >= pc.2022.1
3. For Users and User Groups modules (ntnx_users and ntnx_user_groups), adding Identity Provider (IdP) & Organizational Unit (OU) based users/groups are supported for PC versions >= pc.2022.1
-4. ntnx_security_rules - The ``apptier`` option in target group has been removed. New option called ``apptiers`` has been added to support multi tier policy.
+4. ntnx_security_rules - The `apptier` option in the target group has been removed. A new option called `apptiers` has been added to support multi-tier policy.
Prism Central based examples: https://github.com/nutanix/nutanix.ansible/tree/main/examples/
## Foundation
+
> For the 1.1.0 release of the ansible plugin, it will have N-1 compatibility with the Foundation. This release was tested against Foundation versions v5.2 and v5.1.1
> For the 1.9.1 release of the ansible plugin, it was tested against versions v5.2
@@ -63,11 +70,13 @@ Prism Central based examples: https://github.com/nutanix/nutanix.ansible/tree/ma
Foundation based examples : https://github.com/nutanix/nutanix.ansible/tree/main/examples/foundation
## Foundation Central
+
> For the 1.1.0 release of the ansible plugin, it will have N-1 compatibility with the Foundation Central . This release was tested against Foundation Central versions v1.3 and v1.2
Foundation Central based examples : https://github.com/nutanix/nutanix.ansible/tree/main/examples/fc
## Karbon
+
> For the 1.6.0 release of the ansible plugin, it will have N-2 compatibility with the Karbon. This release was tested against Karbon versions v2.3.0, v2.4.0 and v2.5.0
> For the 1.9.0 release of the ansible plugin, it was tested against Karbon versions v2.6.0, v2.7.0 and v2.8.0
@@ -87,9 +96,11 @@ Karbon based examples : https://github.com/nutanix/nutanix.ansible/tree/main/exa
NDB based examples : https://github.com/nutanix/nutanix.ansible/tree/main/examples/ndb
### Notes:
+
1. Currently NDB based modules are supported and tested against postgres based databases.
# Installing the collection
+
**Prerequisite**
Ansible should be pre-installed. If not, please follow the official Ansible [install guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html).
@@ -98,26 +109,28 @@ For Developers, please follow [this install guide](
**1. Clone the GitHub repository to a local directory**
-```git clone https://github.com/nutanix/nutanix.ansible.git```
+`git clone https://github.com/nutanix/nutanix.ansible.git`
**2. Git checkout release version**
-```git checkout -b ```
+`git checkout -b `
**3. Build the collection**
-```ansible-galaxy collection build```
+`ansible-galaxy collection build`
**4. Install the collection**
-```ansible-galaxy collection install nutanix-ncp-.tar.gz```
+`ansible-galaxy collection install nutanix-ncp-.tar.gz`
**Note** Add `--force` option for rebuilding or reinstalling to overwrite existing data
# Using this collection
-You can either call modules by their Fully Qualified Collection Namespace (FQCN), such as nutanix.ncp.ntnx_vms, or you can call modules by their short name if you list the nutanix.ncp collection in the playbook's ```collections:``` keyword
+
+You can either call modules by their Fully Qualified Collection Namespace (FQCN), such as `nutanix.ncp.ntnx_vms`, or you can call modules by their short name if you list the nutanix.ncp collection in the playbook's `collections:` keyword.
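+
+A minimal sketch of the two invocation styles (connection parameters are omitted here for brevity):
+
+```yaml
+- hosts: localhost
+  collections:
+    - nutanix.ncp
+  tasks:
+    - ntnx_vms_info: # short name, resolved via the collections keyword
+    - nutanix.ncp.ntnx_vms_info: # the same module called by its FQCN
+```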
For example, the playbook for iaas.yml is as follows:
+
```yaml
---
- name: IaaS Provisioning
@@ -143,7 +156,9 @@ For example, the playbook for iaas.yml is as follows:
- include_role:
name: fip
```
+
To run this playbook, use ansible-playbook command as follows:
+
```
ansible-playbook
ansible-playbook examples/iaas/iaas.yml
@@ -154,7 +169,7 @@ ansible-playbook examples/iaas/iaas.yml
## Modules
| Name | Description |
-|----------------------------------------------|--------------------------------------------------------------------------------------------------|
+| -------------------------------------------- | ------------------------------------------------------------------------------------------------ |
| ntnx_acps | Create, Update, Delete acp. |
| ntnx_acps_info | Get acp info. |
| ntnx_address_groups | Create, Update, Delete Nutanix address groups. |
@@ -243,7 +258,7 @@ ansible-playbook examples/iaas/iaas.yml
| ntnx_ndb_database_restore | perform database restore |
| ntnx_ndb_database_scale | perform database scaling |
| ntnx_ndb_linked_databases | Add and remove linked databases of database instance |
-| ntnx_ndb_replicate_database_snapshots | replicate snapshots accross clusters in time machines |
+| ntnx_ndb_replicate_database_snapshots | replicate snapshots across clusters in time machines |
| ntnx_ndb_register_db_server_vm | register database server vm |
| ntnx_ndb_maintenance_tasks | Add and remove maintenance tasks in window |
| ntnx_ndb_maintenance_window | Create, update and delete maintenance window |
@@ -253,11 +268,12 @@ ansible-playbook examples/iaas/iaas.yml
## Inventory Plugins
-| Name | Description |
-| --- | --- |
+| Name | Description |
+| ----------------------- | ---------------------------- |
| ntnx_prism_vm_inventory | Nutanix VMs inventory source |
# Module documentation and examples
+
```
ansible-doc nutanix.ncp.
```
@@ -266,8 +282,8 @@ ansible-doc nutanix.ncp.
We gladly welcome contributions from the community. From updating the documentation to adding more functions for Ansible, all ideas are welcome. Thank you in advance for all of your issues, pull requests, and comments!
-* [Contributing Guide](CONTRIBUTING.md)
-* [Code of Conduct](CODE_OF_CONDUCT.md)
+- [Contributing Guide](CONTRIBUTING.md)
+- [Code of Conduct](CODE_OF_CONDUCT.md)
# Testing
@@ -276,10 +292,12 @@ We glady welcome contributions from the community. From updating the documentati
To conduct integration tests for a specific Ansible module such as the `ntnx_vms` module, the following step-by-step procedures can be followed:
### Prerequisites
+
- Ensure you are in the installed collection directory where the module is located. For example:
-`/Users/mac.user1/.ansible/collections/ansible_collections/nutanix/ncp`
+ `/Users/mac.user1/.ansible/collections/ansible_collections/nutanix/ncp`
### Setting up Variables
+
1. Navigate to the `tests/integration/targets` directory within the collection.
2. Define the necessary variables within the feature-specific var files, such as `tests/integration/targets/prepare_env/vars/main.yml`, `tests/integration/targets/prepare_foundation_env/vars/main.yml`,`tests/integration/targets/prepare_ndb_env/tasks/prepare_env.yml`, etc.
@@ -287,39 +305,43 @@ To conduct integration tests for a specific Ansible module such as the `ntnx_vms
Note: For Karbon and FC tests, use the PC vars exclusively, as these features rely on pc setup. Not all variables are mandatory; define only the required variables for the particular feature to be tested.
3. Run the test setup playbook for the specific feature you intend to test to create entities in setup:
- - For PC, NDB, and Foundation tests, execute the relevant commands:
- ```bash
- ansible-playbook prepare_env/tasks/prepare_env.yml
- ansible-playbook prepare_ndb_env/tasks/prepare_env.yml
- ansible-playbook prepare_foundation_env/tasks/prepare_foundation_env.yml
- ```
+ - For PC, NDB, and Foundation tests, execute the relevant commands:
+ ```bash
+ ansible-playbook prepare_env/tasks/prepare_env.yml
+ ansible-playbook prepare_ndb_env/tasks/prepare_env.yml
+ ansible-playbook prepare_foundation_env/tasks/prepare_foundation_env.yml
+ ```
### Running Integration Tests
+
1. Conduct integration tests for all modules using:
- ```bash
- ansible-test integration
- ```
+
+ ```bash
+ ansible-test integration
+ ```
2. To perform integration tests for a specific module:
- ```bash
- ansible-test integration module_test_name
- ```
- Replace `module_test_name` with test directory name under tests/integration/targets.
+ ```bash
+ ansible-test integration module_test_name
+ ```
+ Replace `module_test_name` with the test directory name under `tests/integration/targets`.
### Cleanup
+
1. After completing the integration tests, perform a cleanup specific to the tested feature:
- - For PC tests, execute the command:
- ```bash
- ansible-playbook prepare_env/tasks/clean_up.yml
- ```
- - For Foundation tests, execute the command:
- ```bash
- ansible-playbook prepare_foundation_env/tasks/clean_up.yml
- ```
+ - For PC tests, execute the command:
+ ```bash
+ ansible-playbook prepare_env/tasks/clean_up.yml
+ ```
+ - For Foundation tests, execute the command:
+ ```bash
+ ansible-playbook prepare_foundation_env/tasks/clean_up.yml
+ ```
By following these steps, you can perform comprehensive integration testing for the specified Ansible module and ensure a clean testing environment afterward. Define only the necessary variables for the specific feature you intend to test.
# Examples
+
## Playbook for IaaS provisioning on Nutanix
**Refer to [`examples/iaas`](https://github.com/nutanix/nutanix.ansible/tree/main/examples/iaas) for full implementation**
@@ -332,46 +354,122 @@ By following these steps, you can perform comprehensive integration testing for
collections:
- nutanix.ncp
vars:
- nutanix_host:
- nutanix_username:
- nutanix_password:
- validate_certs: true
+ nutanix_host:
+ nutanix_username:
+ nutanix_password:
+ validate_certs: true
tasks:
- name: Inputs for external subnets task
include_tasks: external_subnet.yml
with_items:
- - { name: Ext-Nat, vlan_id: 102, ip: 10.44.3.192, prefix: 27, gip: 10.44.3.193, sip: 10.44.3.198, eip: 10.44.3.207, eNat: True }
+ - {
+ name: Ext-Nat,
+ vlan_id: 102,
+ ip: 10.44.3.192,
+ prefix: 27,
+ gip: 10.44.3.193,
+ sip: 10.44.3.198,
+ eip: 10.44.3.207,
+ eNat: True,
+ }
- name: Inputs for vpcs task
include_tasks: vpc.yml
with_items:
- - { name: Prod, subnet_name: Ext-Nat}
- - { name: Dev, subnet_name: Ext-Nat}
+ - { name: Prod, subnet_name: Ext-Nat }
+ - { name: Dev, subnet_name: Ext-Nat }
- name: Inputs for overlay subnets
include_tasks: overlay_subnet.yml
with_items:
- - { name: Prod-SubnetA, vpc_name: Prod , nip: 10.1.1.0, prefix: 24, gip: 10.1.1.1, sip: 10.1.1.2, eip: 10.1.1.5,
- domain_name: "calm.nutanix.com", dns_servers : ["8.8.8.8","8.8.8.4"], domain_search: ["calm.nutanix.com","eng.nutanix.com"] }
- - { name: Prod-SubnetB, vpc_name: Prod , nip: 10.1.2.0, prefix: 24, gip: 10.1.2.1, sip: 10.1.2.2, eip: 10.1.2.5,
- domain_name: "calm.nutanix.com", dns_servers : ["8.8.8.8","8.8.8.4"], domain_search: ["calm.nutanix.com","eng.nutanix.com"] }
- - { name: Dev-SubnetA, vpc_name: Dev , nip: 10.1.1.0, prefix: 24, gip: 10.1.1.1, sip: 10.1.1.2, eip: 10.1.1.5,
- domain_name: "calm.nutanix.com", dns_servers : ["8.8.8.8","8.8.8.4"], domain_search: ["calm.nutanix.com","eng.nutanix.com"] }
- - { name: Dev-SubnetB, vpc_name: Dev , nip: 10.1.2.0, prefix: 24, gip: 10.1.2.1, sip: 10.1.2.2, eip: 10.1.2.5,
- domain_name: "calm.nutanix.com", dns_servers : ["8.8.8.8","8.8.8.4"], domain_search: ["calm.nutanix.com","eng.nutanix.com"] }
+ - {
+ name: Prod-SubnetA,
+ vpc_name: Prod,
+ nip: 10.1.1.0,
+ prefix: 24,
+ gip: 10.1.1.1,
+ sip: 10.1.1.2,
+ eip: 10.1.1.5,
+ domain_name: "calm.nutanix.com",
+ dns_servers: ["8.8.8.8", "8.8.8.4"],
+ domain_search: ["calm.nutanix.com", "eng.nutanix.com"],
+ }
+ - {
+ name: Prod-SubnetB,
+ vpc_name: Prod,
+ nip: 10.1.2.0,
+ prefix: 24,
+ gip: 10.1.2.1,
+ sip: 10.1.2.2,
+ eip: 10.1.2.5,
+ domain_name: "calm.nutanix.com",
+ dns_servers: ["8.8.8.8", "8.8.8.4"],
+ domain_search: ["calm.nutanix.com", "eng.nutanix.com"],
+ }
+ - {
+ name: Dev-SubnetA,
+ vpc_name: Dev,
+ nip: 10.1.1.0,
+ prefix: 24,
+ gip: 10.1.1.1,
+ sip: 10.1.1.2,
+ eip: 10.1.1.5,
+ domain_name: "calm.nutanix.com",
+ dns_servers: ["8.8.8.8", "8.8.8.4"],
+ domain_search: ["calm.nutanix.com", "eng.nutanix.com"],
+ }
+ - {
+ name: Dev-SubnetB,
+ vpc_name: Dev,
+ nip: 10.1.2.0,
+ prefix: 24,
+ gip: 10.1.2.1,
+ sip: 10.1.2.2,
+ eip: 10.1.2.5,
+ domain_name: "calm.nutanix.com",
+ dns_servers: ["8.8.8.8", "8.8.8.4"],
+ domain_search: ["calm.nutanix.com", "eng.nutanix.com"],
+ }
- name: Inputs for vm task
include_tasks: vm.yml
with_items:
- - {name: "Prod-Wordpress-App", desc: "Prod-Wordpress-App", is_connected: True , subnet_name: Prod-SubnetA, image_name: "wordpress-appserver", private_ip: ""}
- - {name: "Prod-Wordpress-DB", desc: "Prod-Wordpress-DB", is_connected: True , subnet_name: Prod-SubnetB, image_name: "wordpress-db", private_ip: 10.1.2.5}
- - {name: "Dev-Wordpress-App", desc: "Dev-Wordpress-App", is_connected: True , subnet_name: Dev-SubnetA, image_name: "wordpress-appserver", private_ip: ""}
- - {name: "Dev-Wordpress-DB", desc: "Dev-Wordpress-DB", is_connected: True , subnet_name: Dev-SubnetB, image_name: "wordpress-db",private_ip: 10.1.2.5}
+ - {
+ name: "Prod-Wordpress-App",
+ desc: "Prod-Wordpress-App",
+ is_connected: True,
+ subnet_name: Prod-SubnetA,
+ image_name: "wordpress-appserver",
+ private_ip: "",
+ }
+ - {
+ name: "Prod-Wordpress-DB",
+ desc: "Prod-Wordpress-DB",
+ is_connected: True,
+ subnet_name: Prod-SubnetB,
+ image_name: "wordpress-db",
+ private_ip: 10.1.2.5,
+ }
+ - {
+ name: "Dev-Wordpress-App",
+ desc: "Dev-Wordpress-App",
+ is_connected: True,
+ subnet_name: Dev-SubnetA,
+ image_name: "wordpress-appserver",
+ private_ip: "",
+ }
+ - {
+ name: "Dev-Wordpress-DB",
+ desc: "Dev-Wordpress-DB",
+ is_connected: True,
+ subnet_name: Dev-SubnetB,
+ image_name: "wordpress-db",
+ private_ip: 10.1.2.5,
+ }
- name: Inputs for Floating IP task
include_tasks: fip.yml
with_items:
- - {vm_name: "Prod-Wordpress-App"}
- - {vm_name: "Dev-Wordpress-App"}
-
+ - { vm_name: "Prod-Wordpress-App" }
+ - { vm_name: "Dev-Wordpress-App" }
```
diff --git a/changelogs/changelog.yaml b/changelogs/changelog.yaml
index 4d6506826..b0ff11f68 100644
--- a/changelogs/changelog.yaml
+++ b/changelogs/changelog.yaml
@@ -3,457 +3,469 @@ releases:
1.0.0:
changes:
bugfixes:
- - Creating a VM based on a disk_image without specifying the size_gb
- - icmp "any" code value in module PBR
+ - Creating a VM based on a disk_image without specifying the size_gb
+ - icmp "any" code value in module PBR
minor_changes:
- - Add meta file for collection
- - Allow environment variables for nutanix connection parameters
- release_date: '2022-03-02'
+ - Add meta file for collection
+ - Allow environment variables for nutanix connection parameters
+ release_date: "2022-03-02"
1.0.0-beta.1:
changes:
bugfixes:
- - Client SDK with inventory [\#45](https://github.com/nutanix/nutanix.ansible/pull/45)
- - Fix error messages for get_uuid() reponse [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
- - black fixes [\#30](https://github.com/nutanix/nutanix.ansible/pull/30)
- - black fixes [\#32](https://github.com/nutanix/nutanix.ansible/pull/32)
- - clear unused files and argument [\#29](https://github.com/nutanix/nutanix.ansible/pull/29)
- - device index calculation fixes, updates for get by name functionality[\#254](https://github.com/nutanix/nutanix.ansible/pull/42)
- - fixes to get spec from collection [\#17](https://github.com/nutanix/nutanix.ansible/pull/17)
- - solve python 2.7 issues [\#41](https://github.com/nutanix/nutanix.ansible/pull/41)
- - updates for guest customization spec [\#20](https://github.com/nutanix/nutanix.ansible/pull/20)
+ - Client SDK with inventory [\#45](https://github.com/nutanix/nutanix.ansible/pull/45)
+ - Fix error messages for get_uuid() response [\#47](https://github.com/nutanix/nutanix.ansible/pull/47)
+ - black fixes [\#30](https://github.com/nutanix/nutanix.ansible/pull/30)
+ - black fixes [\#32](https://github.com/nutanix/nutanix.ansible/pull/32)
+ - clear unused files and argument [\#29](https://github.com/nutanix/nutanix.ansible/pull/29)
+ - device index calculation fixes, updates for get by name functionality [\#254](https://github.com/nutanix/nutanix.ansible/pull/42)
+ - fixes to get spec from collection [\#17](https://github.com/nutanix/nutanix.ansible/pull/17)
+ - solve python 2.7 issues [\#41](https://github.com/nutanix/nutanix.ansible/pull/41)
+ - updates for guest customization spec [\#20](https://github.com/nutanix/nutanix.ansible/pull/20)
major_changes:
- - CICD pipeline using GitHub actions
+ - CICD pipeline using GitHub actions
modules:
- - description: Nutanix module for vms
- name: ntnx_vms
- namespace: ''
- release_date: '2022-01-28'
+ - description: Nutanix module for vms
+ name: ntnx_vms
+ namespace: ""
+ release_date: "2022-01-28"
1.0.0-beta.2:
changes:
bugfixes:
- - Bug/cluster UUID issue68 [\#72](https://github.com/nutanix/nutanix.ansible/pull/72)
- - Fix/integ [\#96](https://github.com/nutanix/nutanix.ansible/pull/96)
- - Sanity and python fix [\#46](https://github.com/nutanix/nutanix.ansible/pull/46)
- - Task/fix failing sanity [\#117](https://github.com/nutanix/nutanix.ansible/pull/117)
- - clean up pbrs.py [\#113](https://github.com/nutanix/nutanix.ansible/pull/113)
- - code cleanup - fix github issue#59 [\#60](https://github.com/nutanix/nutanix.ansible/pull/60)
- - fix project name [\#107](https://github.com/nutanix/nutanix.ansible/pull/107)
- - fixed variables names issue74 [\#77](https://github.com/nutanix/nutanix.ansible/pull/77)
+ - Bug/cluster UUID issue68 [\#72](https://github.com/nutanix/nutanix.ansible/pull/72)
+ - Fix/integ [\#96](https://github.com/nutanix/nutanix.ansible/pull/96)
+ - Sanity and python fix [\#46](https://github.com/nutanix/nutanix.ansible/pull/46)
+ - Task/fix failing sanity [\#117](https://github.com/nutanix/nutanix.ansible/pull/117)
+ - clean up pbrs.py [\#113](https://github.com/nutanix/nutanix.ansible/pull/113)
+ - code cleanup - fix github issue#59 [\#60](https://github.com/nutanix/nutanix.ansible/pull/60)
+ - fix project name [\#107](https://github.com/nutanix/nutanix.ansible/pull/107)
+ - fixed variables names issue74 [\#77](https://github.com/nutanix/nutanix.ansible/pull/77)
minor_changes:
- - Codegen - Ansible code generator
- - Imprv cluster uuid [\#75](https://github.com/nutanix/nutanix.ansible/pull/75)
- - Imprv/code coverage [\#97](https://github.com/nutanix/nutanix.ansible/pull/97)
- - Imprv/vpcs network prefix [\#81](https://github.com/nutanix/nutanix.ansible/pull/81)
+ - Codegen - Ansible code generator
+ - Imprv cluster uuid [\#75](https://github.com/nutanix/nutanix.ansible/pull/75)
+ - Imprv/code coverage [\#97](https://github.com/nutanix/nutanix.ansible/pull/97)
+ - Imprv/vpcs network prefix [\#81](https://github.com/nutanix/nutanix.ansible/pull/81)
modules:
- - description: Nutanix module for floating Ips
- name: ntnx_floating_ips
- namespace: ''
- - description: Nutanix module for policy based routing
- name: ntnx_pbrs
- namespace: ''
- - description: Nutanix module for subnets
- name: ntnx_subnets
- namespace: ''
- - description: Nutanix module for vpcs
- name: ntnx_vpcs
- namespace: ''
- release_date: '2022-02-22'
+ - description: Nutanix module for floating IPs
+ name: ntnx_floating_ips
+ namespace: ""
+ - description: Nutanix module for policy based routing
+ name: ntnx_pbrs
+ namespace: ""
+ - description: Nutanix module for subnets
+ name: ntnx_subnets
+ namespace: ""
+ - description: Nutanix module for vpcs
+ name: ntnx_vpcs
+ namespace: ""
+ release_date: "2022-02-22"
1.1.0:
changes:
minor_changes:
- - Added integration tests for foundation and foundation central
+ - Added integration tests for foundation and foundation central
1.1.0-beta.1:
modules:
- - description: Nutanix module to image nodes and optionally create clusters
- name: ntnx_foundation
- namespace: ''
- - description: Nutanix module which configures IPMI IP address on BMC of nodes.
- name: ntnx_foundation_bmc_ipmi_config
- namespace: ''
- - description: Nutanix module which returns nodes discovered by Foundation
- name: ntnx_foundation_discover_nodes_info
- namespace: ''
- - description: Nutanix module which returns the hypervisor images uploaded to
- Foundation
- name: ntnx_foundation_hypervisor_images_info
- namespace: ''
- - description: Nutanix module which uploads hypervisor or AOS image to foundation
- vm.
- name: ntnx_foundation_image_upload
- namespace: ''
- - description: Nutanix module which returns node network information discovered
- by Foundation
- name: ntnx_foundation_node_network_info
- namespace: ''
- release_date: '2022-04-11'
+ - description: Nutanix module to image nodes and optionally create clusters
+ name: ntnx_foundation
+ namespace: ""
+ - description: Nutanix module which configures IPMI IP address on BMC of nodes.
+ name: ntnx_foundation_bmc_ipmi_config
+ namespace: ""
+ - description: Nutanix module which returns nodes discovered by Foundation
+ name: ntnx_foundation_discover_nodes_info
+ namespace: ""
+ - description:
+ Nutanix module which returns the hypervisor images uploaded to
+ Foundation
+ name: ntnx_foundation_hypervisor_images_info
+ namespace: ""
+ - description:
+ Nutanix module which uploads hypervisor or AOS image to foundation
+ vm.
+ name: ntnx_foundation_image_upload
+ namespace: ""
+ - description:
+ Nutanix module which returns node network information discovered
+ by Foundation
+ name: ntnx_foundation_node_network_info
+ namespace: ""
+ release_date: "2022-04-11"
1.1.0-beta.2:
modules:
- - description: Nutanix module to imaged Nodes and optionally create cluster
- name: ntnx_foundation_central
- namespace: ''
- - description: Nutanix module which creates api key for foundation central
- name: ntnx_foundation_central_api_keys
- namespace: ''
- - description: Nutanix module which returns the api key
- name: ntnx_foundation_central_api_keys_info
- namespace: ''
- - description: Nutanix module which returns the imaged clusters within the Foudation
- Central
- name: ntnx_foundation_central_imaged_clusters_info
- namespace: ''
- - description: Nutanix module which returns the imaged nodes within the Foudation
- Central
- name: ntnx_foundation_central_imaged_nodes_info
- namespace: ''
- release_date: '2022-04-28'
+ - description: Nutanix module to image nodes and optionally create a cluster
+ name: ntnx_foundation_central
+ namespace: ""
+ - description: Nutanix module which creates api key for foundation central
+ name: ntnx_foundation_central_api_keys
+ namespace: ""
+ - description: Nutanix module which returns the api key
+ name: ntnx_foundation_central_api_keys_info
+ namespace: ""
+ - description:
+ Nutanix module which returns the imaged clusters within the Foundation
+ Central
+ name: ntnx_foundation_central_imaged_clusters_info
+ namespace: ""
+ - description:
+ Nutanix module which returns the imaged nodes within the Foundation
+ Central
+ name: ntnx_foundation_central_imaged_nodes_info
+ namespace: ""
+ release_date: "2022-04-28"
1.2.0:
changes:
minor_changes:
- - VM's update functionality
+ - VM's update functionality
modules:
- - description: Nutanix info module for floating Ips
- name: ntnx_floating_ips_info
- namespace: ''
- - description: Nutanix info module for policy based routing
- name: ntnx_pbrs_info
- namespace: ''
- - description: Nutanix info module for subnets
- name: ntnx_subnets_info
- namespace: ''
- - description: VM module which supports VM clone operations
- name: ntnx_vms_clone
- namespace: ''
- - description: Nutanix info module for vms
- name: ntnx_vms_info
- namespace: ''
- - description: VM module which supports ova creation
- name: ntnx_vms_ova
- namespace: ''
- - description: Nutanix info module for vpcs
- name: ntnx_vpcs_info
- namespace: ''
- release_date: '2022-06-03'
+ - description: Nutanix info module for floating IPs
+ name: ntnx_floating_ips_info
+ namespace: ""
+ - description: Nutanix info module for policy based routing
+ name: ntnx_pbrs_info
+ namespace: ""
+ - description: Nutanix info module for subnets
+ name: ntnx_subnets_info
+ namespace: ""
+ - description: VM module which supports VM clone operations
+ name: ntnx_vms_clone
+ namespace: ""
+ - description: Nutanix info module for vms
+ name: ntnx_vms_info
+ namespace: ""
+ - description: VM module which supports ova creation
+ name: ntnx_vms_ova
+ namespace: ""
+ - description: Nutanix info module for vpcs
+ name: ntnx_vpcs_info
+ namespace: ""
+ release_date: "2022-06-03"
1.3.0:
modules:
- - description: image placement policies info module
- name: ntnx_image_placement_policies_info
- namespace: ''
- - description: image placement policy module which supports Create, update and
- delete operations
- name: ntnx_image_placement_policy
- namespace: ''
- - description: images module which supports pc images management CRUD operations
- name: ntnx_images
- namespace: ''
- - description: images info module
- name: ntnx_images_info
- namespace: ''
- - description: security_rule module which suports security_rule CRUD operations
- name: ntnx_security_rules
- namespace: ''
- - description: security_rule info module
- name: ntnx_security_rules_info
- namespace: ''
- - description: vpc static routes
- name: ntnx_static_routes
- namespace: ''
- - description: vpc static routes info module
- name: ntnx_static_routes_info
- namespace: ''
- release_date: '2022-07-04'
+ - description: image placement policies info module
+ name: ntnx_image_placement_policies_info
+ namespace: ""
+ - description:
+ image placement policy module which supports Create, update and
+ delete operations
+ name: ntnx_image_placement_policy
+ namespace: ""
+ - description: images module which supports pc images management CRUD operations
+ name: ntnx_images
+ namespace: ""
+ - description: images info module
+ name: ntnx_images_info
+ namespace: ""
+ - description: security_rule module which supports security_rule CRUD operations
+ name: ntnx_security_rules
+ namespace: ""
+ - description: security_rule info module
+ name: ntnx_security_rules_info
+ namespace: ""
+ - description: vpc static routes
+ name: ntnx_static_routes
+ namespace: ""
+ - description: vpc static routes info module
+ name: ntnx_static_routes_info
+ namespace: ""
+ release_date: "2022-07-04"
1.4.0:
changes:
bugfixes:
- - Fix examples of info modules [\#226](https://github.com/nutanix/nutanix.ansible/issues/226)
+ - Fix examples of info modules [\#226](https://github.com/nutanix/nutanix.ansible/issues/226)
modules:
- - description: acp module which suports acp Create, update and delete operations
- name: ntnx_acps
- namespace: ''
- - description: acp info module
- name: ntnx_acps_info
- namespace: ''
- - description: module which supports address groups CRUD operations
- name: ntnx_address_groups
- namespace: ''
- - description: address groups info module
- name: ntnx_address_groups_info
- namespace: ''
- - description: category module which supports pc category management CRUD operations
- name: ntnx_categories
- namespace: ''
- - description: categories info module
- name: ntnx_categories_info
- namespace: ''
- - description: cluster info module
- name: ntnx_clusters_info
- namespace: ''
- - description: host info module
- name: ntnx_hosts_info
- namespace: ''
- - description: permissions info module
- name: ntnx_permissions_info
- namespace: ''
- - description: module for create, update and delete pc projects
- name: ntnx_projects
- namespace: ''
- - description: projects info module
- name: ntnx_projects_info
- namespace: ''
- - description: module which supports role CRUD operations
- name: ntnx_roles
- namespace: ''
- - description: role info module
- name: ntnx_roles_info
- namespace: ''
- - description: service_groups module which suports service_groups CRUD operations
- name: ntnx_service_groups
- namespace: ''
- - description: service_group info module
- name: ntnx_service_groups_info
- namespace: ''
- - description: user_groups module which supports pc user_groups management create
- delete operations
- name: ntnx_user_groups
- namespace: ''
- - description: User Groups info module
- name: ntnx_user_groups_info
- namespace: ''
- - description: users module which supports pc users management create delete operations
- name: ntnx_users
- namespace: ''
- - description: users info module
- name: ntnx_users_info
- namespace: ''
- release_date: '2022-07-28'
+ - description: acp module which supports acp Create, update and delete operations
+ name: ntnx_acps
+ namespace: ""
+ - description: acp info module
+ name: ntnx_acps_info
+ namespace: ""
+ - description: module which supports address groups CRUD operations
+ name: ntnx_address_groups
+ namespace: ""
+ - description: address groups info module
+ name: ntnx_address_groups_info
+ namespace: ""
+ - description: category module which supports pc category management CRUD operations
+ name: ntnx_categories
+ namespace: ""
+ - description: categories info module
+ name: ntnx_categories_info
+ namespace: ""
+ - description: cluster info module
+ name: ntnx_clusters_info
+ namespace: ""
+ - description: host info module
+ name: ntnx_hosts_info
+ namespace: ""
+ - description: permissions info module
+ name: ntnx_permissions_info
+ namespace: ""
+ - description: module for create, update and delete pc projects
+ name: ntnx_projects
+ namespace: ""
+ - description: projects info module
+ name: ntnx_projects_info
+ namespace: ""
+ - description: module which supports role CRUD operations
+ name: ntnx_roles
+ namespace: ""
+ - description: role info module
+ name: ntnx_roles_info
+ namespace: ""
+ - description: service_groups module which supports service_groups CRUD operations
+ name: ntnx_service_groups
+ namespace: ""
+ - description: service_group info module
+ name: ntnx_service_groups_info
+ namespace: ""
+ - description:
+ user_groups module which supports pc user_groups management create
+ delete operations
+ name: ntnx_user_groups
+ namespace: ""
+ - description: User Groups info module
+ name: ntnx_user_groups_info
+ namespace: ""
+ - description: users module which supports pc users management create delete operations
+ name: ntnx_users
+ namespace: ""
+ - description: users info module
+ name: ntnx_users_info
+ namespace: ""
+ release_date: "2022-07-28"
1.5.0:
modules:
- - description: Nutanix module for protection rules
- name: ntnx_protection_rules
- namespace: ''
- - description: Nutanix info module for protection rules
- name: ntnx_protection_rules_info
- namespace: ''
- - description: Nutanix module for recovery plan jobs
- name: ntnx_recovery_plan_jobs
- namespace: ''
- - description: Nutanix info module for protection
- name: ntnx_recovery_plan_jobs_info
- namespace: ''
- - description: Nutanix module for recovery plan
- name: ntnx_recovery_plans
- namespace: ''
- - description: Nutanix info module for recovery plan
- name: ntnx_recovery_plans_info
- namespace: ''
+ - description: Nutanix module for protection rules
+ name: ntnx_protection_rules
+ namespace: ""
+ - description: Nutanix info module for protection rules
+ name: ntnx_protection_rules_info
+ namespace: ""
+ - description: Nutanix module for recovery plan jobs
+ name: ntnx_recovery_plan_jobs
+ namespace: ""
+ - description: Nutanix info module for recovery plan jobs
+ name: ntnx_recovery_plan_jobs_info
+ namespace: ""
+ - description: Nutanix module for recovery plan
+ name: ntnx_recovery_plans
+ namespace: ""
+ - description: Nutanix info module for recovery plan
+ name: ntnx_recovery_plans_info
+ namespace: ""
1.6.0:
modules:
- - description: Nutanix module for karbon clusters
- name: ntnx_karbon_clusters
- namespace: ''
- - description: Nutanix info module for karbon clusters with kubeconifg and ssh
- config
- name: ntnx_karbon_clusters_info
- namespace: ''
- - description: Nutanix module for karbon private registry
- name: ntnx_karbon_registries
- namespace: ''
- - description: Nutanix info module for karbon private registry
- name: ntnx_karbon_registries_info
- namespace: ''
- release_date: '2022-09-09'
+ - description: Nutanix module for karbon clusters
+ name: ntnx_karbon_clusters
+ namespace: ""
+ - description:
+      Nutanix info module for karbon clusters with kubeconfig and ssh
+ config
+ name: ntnx_karbon_clusters_info
+ namespace: ""
+ - description: Nutanix module for karbon private registry
+ name: ntnx_karbon_registries
+ namespace: ""
+ - description: Nutanix info module for karbon private registry
+ name: ntnx_karbon_registries_info
+ namespace: ""
+ release_date: "2022-09-09"
1.7.0:
changes:
bugfixes:
- - ntnx_projects - [Bug] Clusters and subnets configured in project are not visible
- in new projects UI [\#283](https://github.com/nutanix/nutanix.ansible/issues/283)
- - ntnx_vms - Subnet Name --> UUID Lookup should be PE Cluster Aware [\#260](https://github.com/nutanix/nutanix.ansible/issues/260)
- - nutanix.ncp.ntnx_prism_vm_inventory - [Bug] Inventory does not fetch more
- than 500 Entities [[\#228](https://github.com/nutanix/nutanix.ansible/issues/228)]
+ - ntnx_projects - [Bug] Clusters and subnets configured in project are not visible
+ in new projects UI [\#283](https://github.com/nutanix/nutanix.ansible/issues/283)
+ - ntnx_vms - Subnet Name --> UUID Lookup should be PE Cluster Aware [\#260](https://github.com/nutanix/nutanix.ansible/issues/260)
+ - nutanix.ncp.ntnx_prism_vm_inventory - [Bug] Inventory does not fetch more
+ than 500 Entities [[\#228](https://github.com/nutanix/nutanix.ansible/issues/228)]
minor_changes:
- - examples - [Imprv] Add version related notes to examples [\#279](https://github.com/nutanix/nutanix.ansible/issues/279)
- - examples - [Imprv] Fix IaaS example [\#250](https://github.com/nutanix/nutanix.ansible/issues/250)
- - examples - [Imprv] add examples of Images and Static Routes Module [\#256](https://github.com/nutanix/nutanix.ansible/issues/256)
- - ntnx_projects - [Feat] Add capability to configure role mappings with collaboration
- on/off in ntnx_projects [\#252](https://github.com/nutanix/nutanix.ansible/issues/252)
- - ntnx_projects - [Imprv] add vpcs and overlay subnets configure capability
- to module ntnx_projects [\#289](https://github.com/nutanix/nutanix.ansible/issues/289)
- - ntnx_vms - [Imprv] add functionality to set network mac_address to module
- ntnx_vms [\#201](https://github.com/nutanix/nutanix.ansible/issues/201)
- - nutanix.ncp.ntnx_prism_vm_inventory - [Imprv] add functionality constructed
- to module inventory [\#235](https://github.com/nutanix/nutanix.ansible/issues/235)
- release_date: '2022-09-30'
+ - examples - [Imprv] Add version related notes to examples [\#279](https://github.com/nutanix/nutanix.ansible/issues/279)
+ - examples - [Imprv] Fix IaaS example [\#250](https://github.com/nutanix/nutanix.ansible/issues/250)
+ - examples - [Imprv] add examples of Images and Static Routes Module [\#256](https://github.com/nutanix/nutanix.ansible/issues/256)
+ - ntnx_projects - [Feat] Add capability to configure role mappings with collaboration
+ on/off in ntnx_projects [\#252](https://github.com/nutanix/nutanix.ansible/issues/252)
+ - ntnx_projects - [Imprv] add vpcs and overlay subnets configure capability
+ to module ntnx_projects [\#289](https://github.com/nutanix/nutanix.ansible/issues/289)
+ - ntnx_vms - [Imprv] add functionality to set network mac_address to module
+ ntnx_vms [\#201](https://github.com/nutanix/nutanix.ansible/issues/201)
+ - nutanix.ncp.ntnx_prism_vm_inventory - [Imprv] add functionality constructed
+ to module inventory [\#235](https://github.com/nutanix/nutanix.ansible/issues/235)
+ release_date: "2022-09-30"
1.8.0:
modules:
- - description: module for authorizing db server vm
- name: ntnx_ndb_authorize_db_server_vms
- namespace: ''
- - description: Create, Update and Delete NDB clusters
- name: ntnx_ndb_clusters
- namespace: ''
- - description: module for database clone refresh.
- name: ntnx_ndb_database_clone_refresh
- namespace: ''
- - description: module for create, update and delete of ndb database clones
- name: ntnx_ndb_database_clones
- namespace: ''
- - description: module for performing log catchups action
- name: ntnx_ndb_database_log_catchup
- namespace: ''
- - description: module for restoring database instance
- name: ntnx_ndb_database_restore
- namespace: ''
- - description: module for scaling database instance
- name: ntnx_ndb_database_scale
- namespace: ''
- - description: module for creating, updating and deleting database snapshots
- name: ntnx_ndb_database_snapshots
- namespace: ''
- - description: module for create, delete and update of database server vms
- name: ntnx_ndb_db_server_vms
- namespace: ''
- - description: module to manage linked databases of a database instance
- name: ntnx_ndb_linked_databases
- namespace: ''
- - description: module to add and remove maintenance related tasks
- name: ntnx_ndb_maintenance_tasks
- namespace: ''
- - description: module to create, update and delete mainetance window
- name: ntnx_ndb_maintenance_window
- namespace: ''
- - description: module for fetching maintenance windows info
- name: ntnx_ndb_maintenance_windows_info
- namespace: ''
- - description: module for create, update and delete of profiles
- name: ntnx_ndb_profiles
- namespace: ''
- - description: module for database instance registration
- name: ntnx_ndb_register_database
- namespace: ''
- - description: module for registration of database server vm
- name: ntnx_ndb_register_db_server_vm
- namespace: ''
- - description: module for replicating database snapshots across clusters of time
- machine
- name: ntnx_ndb_replicate_database_snapshots
- namespace: ''
- - description: moudle for creating, updating and deleting slas
- name: ntnx_ndb_slas
- namespace: ''
- - description: info module for ndb snapshots info
- name: ntnx_ndb_snapshots_info
- namespace: ''
- - description: Module for create, update and delete of stretched vlan.
- name: ntnx_ndb_stretched_vlans
- namespace: ''
- - description: module for create, update and delete of tags
- name: ntnx_ndb_tags
- namespace: ''
- - description: Module for create, update and delete for data access management
- in time machines.
- name: ntnx_ndb_time_machine_clusters
- namespace: ''
- - description: Module for create, update and delete of ndb vlan.
- name: ntnx_ndb_vlans
- namespace: ''
- - description: info module for ndb vlans
- name: ntnx_ndb_vlans_info
- namespace: ''
- release_date: '2023-02-28'
+ - description: module for authorizing db server vm
+ name: ntnx_ndb_authorize_db_server_vms
+ namespace: ""
+ - description: Create, Update and Delete NDB clusters
+ name: ntnx_ndb_clusters
+ namespace: ""
+ - description: module for database clone refresh.
+ name: ntnx_ndb_database_clone_refresh
+ namespace: ""
+ - description: module for create, update and delete of ndb database clones
+ name: ntnx_ndb_database_clones
+ namespace: ""
+ - description: module for performing log catchups action
+ name: ntnx_ndb_database_log_catchup
+ namespace: ""
+ - description: module for restoring database instance
+ name: ntnx_ndb_database_restore
+ namespace: ""
+ - description: module for scaling database instance
+ name: ntnx_ndb_database_scale
+ namespace: ""
+ - description: module for creating, updating and deleting database snapshots
+ name: ntnx_ndb_database_snapshots
+ namespace: ""
+ - description: module for create, delete and update of database server vms
+ name: ntnx_ndb_db_server_vms
+ namespace: ""
+ - description: module to manage linked databases of a database instance
+ name: ntnx_ndb_linked_databases
+ namespace: ""
+ - description: module to add and remove maintenance related tasks
+ name: ntnx_ndb_maintenance_tasks
+ namespace: ""
+ - description: module to create, update and delete maintenance window
+ name: ntnx_ndb_maintenance_window
+ namespace: ""
+ - description: module for fetching maintenance windows info
+ name: ntnx_ndb_maintenance_windows_info
+ namespace: ""
+ - description: module for create, update and delete of profiles
+ name: ntnx_ndb_profiles
+ namespace: ""
+ - description: module for database instance registration
+ name: ntnx_ndb_register_database
+ namespace: ""
+ - description: module for registration of database server vm
+ name: ntnx_ndb_register_db_server_vm
+ namespace: ""
+ - description:
+ module for replicating database snapshots across clusters of time
+ machine
+ name: ntnx_ndb_replicate_database_snapshots
+ namespace: ""
+ - description: module for creating, updating and deleting slas
+ name: ntnx_ndb_slas
+ namespace: ""
+ - description: info module for ndb snapshots info
+ name: ntnx_ndb_snapshots_info
+ namespace: ""
+ - description: Module for create, update and delete of stretched vlan.
+ name: ntnx_ndb_stretched_vlans
+ namespace: ""
+ - description: module for create, update and delete of tags
+ name: ntnx_ndb_tags
+ namespace: ""
+ - description:
+ Module for create, update and delete for data access management
+ in time machines.
+ name: ntnx_ndb_time_machine_clusters
+ namespace: ""
+ - description: Module for create, update and delete of ndb vlan.
+ name: ntnx_ndb_vlans
+ namespace: ""
+ - description: info module for ndb vlans
+ name: ntnx_ndb_vlans_info
+ namespace: ""
+ release_date: "2023-02-28"
1.8.0-beta.1:
modules:
- - description: info module for database clones
- name: ntnx_ndb_clones_info
- namespace: ''
- - description: info module for ndb clusters info
- name: ntnx_ndb_clusters_info
- namespace: ''
- - description: Module for create, update and delete of single instance database.
- Currently, postgres type database is officially supported.
- name: ntnx_ndb_databases
- namespace: ''
- - description: info module for ndb database instances
- name: ntnx_ndb_databases_info
- namespace: ''
- - description: info module for ndb db server vms info
- name: ntnx_ndb_db_servers_info
- namespace: ''
- - description: info module for ndb profiles
- name: ntnx_ndb_profiles_info
- namespace: ''
- - description: info module for ndb slas
- name: ntnx_ndb_slas_info
- namespace: ''
- - description: info module for ndb time machines
- name: ntnx_ndb_time_machines_info
- namespace: ''
- release_date: '2022-10-20'
+ - description: info module for database clones
+ name: ntnx_ndb_clones_info
+ namespace: ""
+ - description: info module for ndb clusters info
+ name: ntnx_ndb_clusters_info
+ namespace: ""
+ - description:
+ Module for create, update and delete of single instance database.
+ Currently, postgres type database is officially supported.
+ name: ntnx_ndb_databases
+ namespace: ""
+ - description: info module for ndb database instances
+ name: ntnx_ndb_databases_info
+ namespace: ""
+ - description: info module for ndb db server vms info
+ name: ntnx_ndb_db_servers_info
+ namespace: ""
+ - description: info module for ndb profiles
+ name: ntnx_ndb_profiles_info
+ namespace: ""
+ - description: info module for ndb slas
+ name: ntnx_ndb_slas_info
+ namespace: ""
+ - description: info module for ndb time machines
+ name: ntnx_ndb_time_machines_info
+ namespace: ""
+ release_date: "2022-10-20"
1.9.0:
changes:
bugfixes:
- - info modules - [Bug] Multiple filters params are not considered for fetching
- entities in PC based info modules [[\#352](https://github.com/nutanix/nutanix.ansible/issues/352)]
- - ntnx_foundation - [Bug] clusters parameters not being passed to Foundation
- Server in module nutanix.ncp.ntnx_foundation [[\#307](https://github.com/nutanix/nutanix.ansible/issues/307)]
- - ntnx_karbon_clusters - [Bug] error in sample karbon/create_k8s_cluster.yml
- [[\#349](https://github.com/nutanix/nutanix.ansible/issues/349)]
- - ntnx_karbon_clusters - [Bug] impossible to deploy NKE cluster with etcd using
- disk smaller than 120GB [[\#350](https://github.com/nutanix/nutanix.ansible/issues/350)]
- - ntnx_subnets - [Bug] wrong virtual_switch selected in module ntnx_subnets
- [\#328](https://github.com/nutanix/nutanix.ansible/issues/328)
+ - info modules - [Bug] Multiple filters params are not considered for fetching
+ entities in PC based info modules [[\#352](https://github.com/nutanix/nutanix.ansible/issues/352)]
+ - ntnx_foundation - [Bug] clusters parameters not being passed to Foundation
+ Server in module nutanix.ncp.ntnx_foundation [[\#307](https://github.com/nutanix/nutanix.ansible/issues/307)]
+ - ntnx_karbon_clusters - [Bug] error in sample karbon/create_k8s_cluster.yml
+ [[\#349](https://github.com/nutanix/nutanix.ansible/issues/349)]
+ - ntnx_karbon_clusters - [Bug] impossible to deploy NKE cluster with etcd using
+ disk smaller than 120GB [[\#350](https://github.com/nutanix/nutanix.ansible/issues/350)]
+ - ntnx_subnets - [Bug] wrong virtual_switch selected in module ntnx_subnets
+ [\#328](https://github.com/nutanix/nutanix.ansible/issues/328)
deprecated_features:
- - ntnx_security_rules - The ``apptier`` option in target group has been removed.
- New option called ``apptiers`` has been added to support multi tier policy.
+ - ntnx_security_rules - The ``apptier`` option in target group has been removed.
+ New option called ``apptiers`` has been added to support multi tier policy.
minor_changes:
- - ntnx_profiles_info - [Impr] Develop ansible module for getting available IPs
- for given network profiles in NDB [\#345](https://github.com/nutanix/nutanix.ansible/issues/345)
- - ntnx_security_rules - [Imprv] Flow Network Security Multi-Tier support in
- Security Policy definition [\#319](https://github.com/nutanix/nutanix.ansible/issues/319)
+ - ntnx_profiles_info - [Impr] Develop ansible module for getting available IPs
+ for given network profiles in NDB [\#345](https://github.com/nutanix/nutanix.ansible/issues/345)
+ - ntnx_security_rules - [Imprv] Flow Network Security Multi-Tier support in
+ Security Policy definition [\#319](https://github.com/nutanix/nutanix.ansible/issues/319)
modules:
- - description: Create,Update and Delete a worker node pools with the provided
- configuration.
- name: ntnx_karbon_clusters_node_pools
- namespace: ''
- - description: info module for ndb tags info
- name: ntnx_ndb_tags_info
- namespace: ''
- release_date: '2023-07-11'
+ - description:
+      Create, Update and Delete worker node pools with the provided
+ configuration.
+ name: ntnx_karbon_clusters_node_pools
+ namespace: ""
+ - description: info module for ndb tags info
+ name: ntnx_ndb_tags_info
+ namespace: ""
+ release_date: "2023-07-11"
1.9.1:
changes:
bugfixes:
- - ntnx_foundation - [Bug] Error when Clusters Block is missing in module ntnx_foundation
- [[\#397](https://github.com/nutanix/nutanix.ansible/issues/397)]
- - ntnx_ndb_time_machines_info - [Bug] ntnx_ndb_time_machines_info not fetching
- all attributes when name is used for fetching [[\#418](https://github.com/nutanix/nutanix.ansible/issues/418)]
- - ntnx_security_rules - Fix Syntax Errors in Create App Security Rule Example
- [[\#394](https://github.com/nutanix/nutanix.ansible/pull/394/files)]
- - ntnx_vms - [Bug] Error when updating size_gb using the int filter in module
- ntnx_vms [[\#400](https://github.com/nutanix/nutanix.ansible/issues/400)]
- - ntnx_vms - [Bug] hard_poweroff has been moved to state from operation [[\#415](https://github.com/nutanix/nutanix.ansible/issues/415)]
- - ntnx_vms_clone - [Bug] cannot change boot_config when cloning in module ntnx_vms_clone
- [[\#360](https://github.com/nutanix/nutanix.ansible/issues/359)]
- - website - [Bug] Github page deployment action is failing. [[\#483](https://github.com/nutanix/nutanix.ansible/issues/483)]
+ - ntnx_foundation - [Bug] Error when Clusters Block is missing in module ntnx_foundation
+ [[\#397](https://github.com/nutanix/nutanix.ansible/issues/397)]
+ - ntnx_ndb_time_machines_info - [Bug] ntnx_ndb_time_machines_info not fetching
+ all attributes when name is used for fetching [[\#418](https://github.com/nutanix/nutanix.ansible/issues/418)]
+ - ntnx_security_rules - Fix Syntax Errors in Create App Security Rule Example
+ [[\#394](https://github.com/nutanix/nutanix.ansible/pull/394/files)]
+ - ntnx_vms - [Bug] Error when updating size_gb using the int filter in module
+ ntnx_vms [[\#400](https://github.com/nutanix/nutanix.ansible/issues/400)]
+ - ntnx_vms - [Bug] hard_poweroff has been moved to state from operation [[\#415](https://github.com/nutanix/nutanix.ansible/issues/415)]
+ - ntnx_vms_clone - [Bug] cannot change boot_config when cloning in module ntnx_vms_clone
+ [[\#360](https://github.com/nutanix/nutanix.ansible/issues/359)]
+ - website - [Bug] Github page deployment action is failing. [[\#483](https://github.com/nutanix/nutanix.ansible/issues/483)]
minor_changes:
- - docs - [Imprv] add doc regarding running integration tests locally [[\#435](https://github.com/nutanix/nutanix.ansible/issues/435)]
- - info modules - [Imprv] add examples for custom_filter [[\#416](https://github.com/nutanix/nutanix.ansible/issues/416)]
- - ndb clones - [Imprv] Enable database clones and clone refresh using latest
- snapshot flag [[\#391](https://github.com/nutanix/nutanix.ansible/issues/391)]
- - ndb clones - [Imprv] add examples for NDB database clone under examples folder
- [[\#386](https://github.com/nutanix/nutanix.ansible/issues/386)]
- - ntnx_prism_vm_inventory - Add support for PC Categories [[\#405](https://github.com/nutanix/nutanix.ansible/issues/405)]
- - ntnx_prism_vm_inventory - [Imprv] add examples for dynamic inventory using
- ntnx_prism_vm_inventory [[\#401](https://github.com/nutanix/nutanix.ansible/issues/401)]
- - ntnx_vms - [Imprv] add possibility to specify / modify vm user ownership and
- project [[\#378](https://github.com/nutanix/nutanix.ansible/issues/378)]
- - ntnx_vms - owner association upon vm creation module [[\#359](https://github.com/nutanix/nutanix.ansible/issues/359)]
- - ntnx_vms_info - [Imprv] add examples with guest customization for module ntnx_vms
- [[\#395](https://github.com/nutanix/nutanix.ansible/issues/395)]
+ - docs - [Imprv] add doc regarding running integration tests locally [[\#435](https://github.com/nutanix/nutanix.ansible/issues/435)]
+ - info modules - [Imprv] add examples for custom_filter [[\#416](https://github.com/nutanix/nutanix.ansible/issues/416)]
+ - ndb clones - [Imprv] Enable database clones and clone refresh using latest
+ snapshot flag [[\#391](https://github.com/nutanix/nutanix.ansible/issues/391)]
+ - ndb clones - [Imprv] add examples for NDB database clone under examples folder
+ [[\#386](https://github.com/nutanix/nutanix.ansible/issues/386)]
+ - ntnx_prism_vm_inventory - Add support for PC Categories [[\#405](https://github.com/nutanix/nutanix.ansible/issues/405)]
+ - ntnx_prism_vm_inventory - [Imprv] add examples for dynamic inventory using
+ ntnx_prism_vm_inventory [[\#401](https://github.com/nutanix/nutanix.ansible/issues/401)]
+ - ntnx_vms - [Imprv] add possibility to specify / modify vm user ownership and
+ project [[\#378](https://github.com/nutanix/nutanix.ansible/issues/378)]
+ - ntnx_vms - owner association upon vm creation module [[\#359](https://github.com/nutanix/nutanix.ansible/issues/359)]
+ - ntnx_vms_info - [Imprv] add examples with guest customization for module ntnx_vms
+ [[\#395](https://github.com/nutanix/nutanix.ansible/issues/395)]
release_summary: This release included bug fixes and improvement.
- release_date: '2023-10-09'
+ release_date: "2023-10-09"
1.9.2:
changes:
breaking_changes:
- - nutanix.ncp collection - Due to all versions of ansible-core version less
- than v2.15.0 are EOL, we are also deprecating support for same and minimum
- version to use this collection is ansible-core==2.15.0. [[\#479](https://github.com/nutanix/nutanix.ansible/issues/479)]
+ - nutanix.ncp collection - Due to all versions of ansible-core version less
+ than v2.15.0 are EOL, we are also deprecating support for same and minimum
+ version to use this collection is ansible-core==2.15.0. [[\#479](https://github.com/nutanix/nutanix.ansible/issues/479)]
release_summary: Deprecating support for ansible-core less than v2.15.0
- release_date: '2024-05-30'
+ release_date: "2024-05-30"
diff --git a/examples/acp.yml b/examples/acp.yml
index 8efb39915..1d3168200 100644
--- a/examples/acp.yml
+++ b/examples/acp.yml
@@ -2,8 +2,6 @@
- name: ACP playbook
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -12,15 +10,14 @@
validate_certs: false
tasks:
-
- name: Create ACP with all specfactions
- ntnx_acps:
- validate_certs: False
+ nutanix.ncp.ntnx_acps:
+ validate_certs: false
state: present
nutanix_host: "{{ IP }}"
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
- name: acp_with_all_specfactions
+ name: acp_with_all_specifications
role:
uuid: "{{ role.uuid }}"
user_uuids:
@@ -41,7 +38,7 @@
collection: ALL
- name: Delete ACP
- ntnx_acps:
+ nutanix.ncp.ntnx_acps:
state: absent
acp_uuid: "{{ acp_uuid }}"
register: result
diff --git a/examples/fc/fc.yml b/examples/fc/fc.yml
index 00f9732fb..0489d25d1 100644
--- a/examples/fc/fc.yml
+++ b/examples/fc/fc.yml
@@ -2,99 +2,96 @@
- name: Foundation Central Playbook
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
-
tasks:
- - name: Nodes Imaging with Cluster Creation with manual mode.
- ntnx_foundation_central:
- nutanix_host: "{{ pc }}"
- nutanix_username: "{{ username }}"
- nutanix_password: "{{ password }}"
- validate_certs: false
- cluster_name: "test"
- # skip_cluster_creation: false #set this to true to skip cluster creation
- common_network_settings:
- cvm_dns_servers:
- - 10.x.xx.xx
- hypervisor_dns_servers:
- - 10.x.xx.xx
- cvm_ntp_servers:
- - "ntp"
- hypervisor_ntp_servers:
- - "ntp"
- nodes_list:
- - manual_mode:
- cvm_gateway: "10.xx.xx.xx"
- cvm_netmask: "xx.xx.xx.xx"
- cvm_ip: "10.x.xx.xx"
- hypervisor_gateway: "10.x.xx.xxx"
- hypervisor_netmask: "xx.xx.xx.xx"
- hypervisor_ip: "10.x.x.xx"
- hypervisor_hostname: "Host-1"
- imaged_node_uuid: ""
- use_existing_network_settings: false
- ipmi_gateway: "10.x.xx.xx"
- ipmi_netmask: "xx.xx.xx.xx"
- ipmi_ip: "10.x.xx.xx"
- image_now: true
- hypervisor_type: "kvm"
+ - name: Nodes Imaging with Cluster Creation with manual mode.
+ nutanix.ncp.ntnx_foundation_central:
+ nutanix_host: "{{ pc }}"
+ nutanix_username: "{{ username }}"
+ nutanix_password: "{{ password }}"
+ validate_certs: false
+ cluster_name: test
+ # skip_cluster_creation: false #set this to true to skip cluster creation
+ common_network_settings:
+ cvm_dns_servers:
+ - 10.x.xx.xx
+ hypervisor_dns_servers:
+ - 10.x.xx.xx
+ cvm_ntp_servers:
+ - ntp
+ hypervisor_ntp_servers:
+ - ntp
+ nodes_list:
+ - manual_mode:
+ cvm_gateway: 10.xx.xx.xx
+ cvm_netmask: xx.xx.xx.xx
+ cvm_ip: 10.x.xx.xx
+ hypervisor_gateway: 10.x.xx.xxx
+ hypervisor_netmask: xx.xx.xx.xx
+ hypervisor_ip: 10.x.x.xx
+ hypervisor_hostname: Host-1
+ imaged_node_uuid:
+ use_existing_network_settings: false
+ ipmi_gateway: 10.x.xx.xx
+ ipmi_netmask: xx.xx.xx.xx
+ ipmi_ip: 10.x.xx.xx
+ image_now: true
+ hypervisor_type: kvm
- - manual_mode:
- cvm_gateway: "10.xx.xx.xx"
- cvm_netmask: "xx.xx.xx.xx"
- cvm_ip: "10.x.xx.xx"
- hypervisor_gateway: "10.x.xx.xxx"
- hypervisor_netmask: "xx.xx.xx.xx"
- hypervisor_ip: "10.x.x.xx"
- hypervisor_hostname: "Host-2"
- imaged_node_uuid: ""
- use_existing_network_settings: false
- ipmi_gateway: "10.x.xx.xx"
- ipmi_netmask: "xx.xx.xx.xx"
- ipmi_ip: "10.x.xx.xx"
- image_now: true
- hypervisor_type: "kvm"
+ - manual_mode:
+ cvm_gateway: 10.xx.xx.xx
+ cvm_netmask: xx.xx.xx.xx
+ cvm_ip: 10.x.xx.xx
+ hypervisor_gateway: 10.x.xx.xxx
+ hypervisor_netmask: xx.xx.xx.xx
+ hypervisor_ip: 10.x.x.xx
+ hypervisor_hostname: Host-2
+ imaged_node_uuid:
+ use_existing_network_settings: false
+ ipmi_gateway: 10.x.xx.xx
+ ipmi_netmask: xx.xx.xx.xx
+ ipmi_ip: 10.x.xx.xx
+ image_now: true
+ hypervisor_type: kvm
- redundancy_factor: 2
- aos_package_url: ""
- hypervisor_iso_details:
- url: ""
- register: output
+ redundancy_factor: 2
+ aos_package_url:
+ hypervisor_iso_details:
+ url:
+ register: output
- - name: Nodes Imaging without Cluster Creation with discovery mode.
- ntnx_foundation_central:
- nutanix_host: "{{ pc }}"
- nutanix_username: "{{ username }}"
- nutanix_password: "{{ password }}"
- validate_certs: false
- cluster_name: "test"
- skip_cluster_creation: true
- common_network_settings:
- cvm_dns_servers:
- - 10.x.xx.xx
- hypervisor_dns_servers:
- - 10.x.xx.xx
- cvm_ntp_servers:
- - "ntp"
- hypervisor_ntp_servers:
- - "ntp"
- nodes_list:
- - discovery_mode:
- node_serial: ""
- - discovery_mode:
- node_serial: ""
- - discovery_mode:
- node_serial: ""
- discovery_override:
- cvm_ip:
+ - name: Nodes Imaging without Cluster Creation with discovery mode.
+ nutanix.ncp.ntnx_foundation_central:
+ nutanix_host: "{{ pc }}"
+ nutanix_username: "{{ username }}"
+ nutanix_password: "{{ password }}"
+ validate_certs: false
+ cluster_name: test
+ skip_cluster_creation: true
+ common_network_settings:
+ cvm_dns_servers:
+ - 10.x.xx.xx
+ hypervisor_dns_servers:
+ - 10.x.xx.xx
+ cvm_ntp_servers:
+ - ntp
+ hypervisor_ntp_servers:
+ - ntp
+ nodes_list:
+ - discovery_mode:
+ node_serial:
+ - discovery_mode:
+ node_serial:
+ - discovery_mode:
+ node_serial:
+ discovery_override:
+ cvm_ip:
- redundancy_factor: 2
- aos_package_url: ""
- hypervisor_iso_details:
- url: ""
- register: output
+ redundancy_factor: 2
+ aos_package_url:
+ hypervisor_iso_details:
+ url:
+ register: output
- - name: output of list
- debug:
- msg: '{{ output }}'
+ - name: Output of list
+ ansible.builtin.debug:
+ msg: "{{ output }}"
diff --git a/examples/foundation/node_discovery_network_info.yml b/examples/foundation/node_discovery_network_info.yml
index 2f81eb083..526d77338 100644
--- a/examples/foundation/node_discovery_network_info.yml
+++ b/examples/foundation/node_discovery_network_info.yml
@@ -1,25 +1,25 @@
+---
# Here we will discover nodes and also get node network info of some discovered nodes
- name: Discover nodes and get their network info
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
tasks:
- - name: Discover all nodes
- ntnx_foundation_discover_nodes_info:
- nutanix_host: "10.xx.xx.xx"
+ - name: Discover all nodes
+ nutanix.ncp.ntnx_foundation_discover_nodes_info:
+ nutanix_host: 10.xx.xx.xx
      # uncomment include_configured below to include configured nodes (nodes part of a cluster) in the output
# include_configured: true
- register: discovered_nodes
+ register: discovered_nodes
- # get network info of nodes discovered from ntnx_foundation_discover_nodes_info module
- - name: Get node network info of some discovered nodes
- ntnx_foundation_node_network_info:
- nutanix_host: "10.xx.xx.xx"
- nodes:
- - "{{discovered_nodes.blocks.0.nodes.0.ipv6_address}}"
- - "{{discovered_nodes.blocks.1.nodes.0.ipv6_address}}"
- register: result
+ # get network info of nodes discovered from ntnx_foundation_discover_nodes_info module
+ - name: Get node network info of some discovered nodes
+ nutanix.ncp.ntnx_foundation_node_network_info:
+ nutanix_host: 10.xx.xx.xx
+ nodes:
+        - "{{ discovered_nodes.blocks.0.nodes.0.ipv6_address }}"
+        - "{{ discovered_nodes.blocks.1.nodes.0.ipv6_address }}"
+ register: result
- - debug:
- msg: "{{ result }}"
+ - name: Print node network info
+ ansible.builtin.debug:
+ msg: "{{ result }}"
diff --git a/examples/images.yml b/examples/images.yml
index 82f11491d..5624ac958 100644
--- a/examples/images.yml
+++ b/examples/images.yml
@@ -2,8 +2,6 @@
- name: Images playbook
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -12,14 +10,14 @@
validate_certs: false
tasks:
- name: Setting Variables
- set_fact:
+ ansible.builtin.set_fact:
image_uuid: ""
source_path: ""
source_uri: ""
- clusters_name: ""
+ clusters_name: ""
- - name: create image from local workstation
- ntnx_images:
+ - name: Create image from local workstation
+ nutanix.ncp.ntnx_images:
state: "present"
source_path: "{{source_path}}"
clusters:
@@ -38,8 +36,8 @@
product_version: "1.2.0"
wait: true
- - name: create image from with source as remote server file location
- ntnx_images:
+    - name: Create image with source as remote server file location
+ nutanix.ncp.ntnx_images:
state: "present"
source_uri: "{{source_uri}}"
clusters:
@@ -58,8 +56,8 @@
product_version: "1.2.0"
wait: true
- - name: override categories of existing image
- ntnx_images:
+ - name: Override categories of existing image
+ nutanix.ncp.ntnx_images:
state: "present"
image_uuid: "{{image-uuid}}"
categories:
@@ -69,15 +67,15 @@
- Backup
wait: true
- - name: dettach all categories from existing image
- ntnx_images:
+    - name: Detach all categories from existing image
+ nutanix.ncp.ntnx_images:
state: "present"
image_uuid: "00000000-0000-0000-0000-000000000000"
remove_categories: true
wait: true
- - name: delete existing image
- ntnx_images:
+ - name: Delete existing image
+ nutanix.ncp.ntnx_images:
state: "absent"
image_uuid: "00000000-0000-0000-0000-000000000000"
wait: true
diff --git a/examples/karbon/create_registries.yml b/examples/karbon/create_registries.yml
index 42c75e310..5992fbee8 100644
--- a/examples/karbon/create_registries.yml
+++ b/examples/karbon/create_registries.yml
@@ -1,9 +1,7 @@
---
-- name: create registeries
+- name: Create registries
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -12,30 +10,31 @@
validate_certs: false
tasks:
- - set_fact:
- registry_name:
- url:
- port_number:
- username:
- password:
+ - name: Set vars
+ ansible.builtin.set_fact:
+ registry_name:
+ url:
+ port_number:
+ username:
+ password:
- - name: create registry
- ntnx_karbon_registries:
- name: "{{registry_name}}"
- url: "{{url}}"
- port: "{{port_number}}"
- register: result
+ - name: Create registry
+ nutanix.ncp.ntnx_karbon_registries:
+ name: "{{ registry_name }}"
+ url: "{{ url }}"
+ port: "{{ port_number }}"
+ register: result
- - name: delete registry
- ntnx_karbon_registries:
- name: "{{registry_name}}"
- state: absent
- register: result
+ - name: Delete registry
+ nutanix.ncp.ntnx_karbon_registries:
+ name: "{{ registry_name }}"
+ state: absent
+ register: result
- - name: create registry with username and password
- ntnx_karbon_registries:
- name: "{{registry_name}}"
- url: "{{url}}"
- username: "{{username}}"
- password: "{{password}}"
- register: result
+ - name: Create registry with username and password
+ nutanix.ncp.ntnx_karbon_registries:
+ name: "{{ registry_name }}"
+ url: "{{ url }}"
+ username: "{{ username }}"
+ password: "{{ password }}"
+ register: result
diff --git a/examples/karbon/registries_info.yml b/examples/karbon/registries_info.yml
index 81c2d8742..935658ee6 100644
--- a/examples/karbon/registries_info.yml
+++ b/examples/karbon/registries_info.yml
@@ -1,9 +1,7 @@
---
-- name: get registeries info
+- name: Get registries info
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -12,11 +10,11 @@
validate_certs: false
tasks:
- - name: test getting all registries
- ntnx_karbon_registries_info:
- register: registries
+ - name: Test getting all registries
+ nutanix.ncp.ntnx_karbon_registries_info:
+ register: registries
- - name: test getting particular register using name
- ntnx_karbon_registries_info:
+    - name: Test getting a particular registry using name
+ nutanix.ncp.ntnx_karbon_registries_info:
registry_name: "{{ registries.response[1].name }}"
- register: result
+ register: result
diff --git a/examples/ndb/README.md b/examples/ndb/README.md
index 761d0ec59..52491bbc3 100644
--- a/examples/ndb/README.md
+++ b/examples/ndb/README.md
@@ -1,5 +1,5 @@
# Nutanix Database Service
-Nutanix ansibe collection nutanix.ncp from v1.8.0 will contain modules for supporting Nutanix Database Service (NDB) features.
+Nutanix ansible collection nutanix.ncp from v1.8.0 will contain modules for supporting Nutanix Database Service (NDB) features.
These modules are based on the following workflow:
diff --git a/examples/ndb/db_server_vms.yml b/examples/ndb/db_server_vms.yml
index 7ae35cc47..faa0f288a 100644
--- a/examples/ndb/db_server_vms.yml
+++ b/examples/ndb/db_server_vms.yml
@@ -2,8 +2,6 @@
- name: NDB db server vms
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -12,44 +10,43 @@
validate_certs: false
tasks:
-
- - name: create spec for db server vm using time machine
- check_mode: yes
- ntnx_ndb_db_server_vms:
- wait: True
- name: "ansible-created-vm1-from-time-machine"
- desc: "ansible-created-vm1-from-time-machine-time-machine"
+ - name: Create spec for db server vm using time machine
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ wait: true
+ name: ansible-created-vm1-from-time-machine
+ desc: ansible-created-vm1-from-time-machine-time-machine
time_machine:
- uuid: "test_uuid"
- snapshot_uuid: "test_snapshot_uuid"
+ uuid: test_uuid
+ snapshot_uuid: test_snapshot_uuid
compute_profile:
- uuid: "test_compute_uuid"
+ uuid: test_compute_uuid
network_profile:
- uuid: "test_network_uuid"
+ uuid: test_network_uuid
cluster:
- uuid: "test_cluster_uuid"
- password: "test_password"
- pub_ssh_key: "test_public_key"
- database_type: "postgres_database"
+ uuid: test_cluster_uuid
+ password: test_password
+ pub_ssh_key: test_public_key
+ database_type: postgres_database
automated_patching:
maintenance_window:
- uuid: "test_window_uuid"
+ uuid: test_window_uuid
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls"
- post_task_cmd: "ls -a"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -l"
- post_task_cmd: "ls -F"
+ - type: OS_PATCHING
+ pre_task_cmd: ls
+ post_task_cmd: ls -a
+ - type: DB_PATCHING
+ pre_task_cmd: ls -l
+ post_task_cmd: ls -F
register: check_mode_result
- - name: create spec for db server vm using software profile and names of profile
- check_mode: yes
- ntnx_ndb_db_server_vms:
- wait: True
+    - name: Create spec for db server vm using software profile and profile names
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ wait: true
name: "{{ vm1_name }}"
- desc: "ansible-created-vm1-desc"
+ desc: ansible-created-vm1-desc
software_profile:
name: "{{ software_profile.name }}"
compute_profile:
@@ -60,25 +57,25 @@
name: "{{ cluster.cluster1.name }}"
password: "{{ vm_password }}"
pub_ssh_key: "{{ public_ssh_key }}"
- time_zone: "UTC"
- database_type: "postgres_database"
+ time_zone: UTC
+ database_type: postgres_database
automated_patching:
maintenance_window:
name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls"
- post_task_cmd: "ls -a"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -l"
- post_task_cmd: "ls -F"
+ - type: OS_PATCHING
+ pre_task_cmd: ls
+ post_task_cmd: ls -a
+ - type: DB_PATCHING
+ pre_task_cmd: ls -l
+ post_task_cmd: ls -F
register: result
- - name: create db server vm using software profile
- ntnx_ndb_db_server_vms:
- wait: True
+ - name: Create db server vm using software profile
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ wait: true
name: "{{ vm1_name }}"
- desc: "ansible-created-vm1-desc"
+ desc: ansible-created-vm1-desc
software_profile:
name: "{{ software_profile.name }}"
compute_profile:
@@ -89,232 +86,226 @@
name: "{{ cluster.cluster1.name }}"
password: "{{ vm_password }}"
pub_ssh_key: "{{ public_ssh_key }}"
- time_zone: "UTC"
- database_type: "postgres_database"
+ time_zone: UTC
+ database_type: postgres_database
automated_patching:
maintenance_window:
name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls"
- post_task_cmd: "ls -a"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -l"
- post_task_cmd: "ls -F"
+ - type: OS_PATCHING
+ pre_task_cmd: ls
+ post_task_cmd: ls -a
+ - type: DB_PATCHING
+ pre_task_cmd: ls -l
+ post_task_cmd: ls -F
register: result
-
- - name: update db server vm name, desc, credentials, tags
- ntnx_ndb_db_server_vms:
- wait: True
- uuid: "{{db_server_uuid}}"
- name: "{{vm1_name_updated}}"
- desc: "ansible-created-vm1-updated-desc"
- reset_name_in_ntnx_cluster: True
- reset_desc_in_ntnx_cluster: True
+ - name: Update db server vm name, desc, credentials, tags
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ wait: true
+ uuid: "{{ db_server_uuid }}"
+ name: "{{ vm1_name_updated }}"
+ desc: ansible-created-vm1-updated-desc
+ reset_name_in_ntnx_cluster: true
+ reset_desc_in_ntnx_cluster: true
update_credentials:
- - username: "{{vm_username}}"
- password: "{{vm_password}}"
+ - username: "{{ vm_username }}"
+ password: "{{ vm_password }}"
tags:
ansible-db-server-vms: ansible-updated
register: result
- - name: create spec for update db server vm credentials
- check_mode: yes
- ntnx_ndb_db_server_vms:
- wait: True
- uuid: "{{db_server_uuid}}"
+ - name: Create spec for update db server vm credentials
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ wait: true
+ uuid: "{{ db_server_uuid }}"
update_credentials:
- - username: "user"
- password: "pass"
+ - username: user
+ password: pass
register: result
-
- name: List NDB db_servers
- ntnx_ndb_db_servers_info:
+ nutanix.ncp.ntnx_ndb_db_servers_info:
register: db_servers
-
- - name: get NDB db_servers using it's name
- ntnx_ndb_db_servers_info:
+    - name: Get NDB db_servers using its name
+ nutanix.ncp.ntnx_ndb_db_servers_info:
filters:
load_metrics: true
- load_databases: True
+ load_databases: true
value_type: name
- value: "{{db_servers.response[0].name}}"
+ value: "{{ db_servers.response[0].name }}"
register: result
- - name: get NDB db_servers using it's ip
- ntnx_ndb_db_servers_info:
+    - name: Get NDB db_servers using its ip
+ nutanix.ncp.ntnx_ndb_db_servers_info:
filters:
value_type: ip
- value: "{{db_servers.response[0].ipAddresses[0]}}"
+ value: "{{ db_servers.response[0].ipAddresses[0] }}"
register: result
- - name: get NDB db_servers using it's name
- ntnx_ndb_db_servers_info:
- name: "{{db_servers.response[0].name}}"
+    - name: Get NDB db_servers using its name
+ nutanix.ncp.ntnx_ndb_db_servers_info:
+ name: "{{ db_servers.response[0].name }}"
register: result
- - name: get NDB db_servers using it's id
- ntnx_ndb_db_servers_info:
- uuid: "{{db_servers.response[0].id}}"
+    - name: Get NDB db_servers using its id
+ nutanix.ncp.ntnx_ndb_db_servers_info:
+ uuid: "{{ db_servers.response[0].id }}"
register: result
- - name: get NDB db_servers using ip
- ntnx_ndb_db_servers_info:
- server_ip: "{{db_servers.response[0].ipAddresses[0]}}"
+ - name: Get NDB db_servers using ip
+ nutanix.ncp.ntnx_ndb_db_servers_info:
+ server_ip: "{{ db_servers.response[0].ipAddresses[0] }}"
register: result
################################### maintenance tasks update tasks #############################
- - name: create spec for adding maintenance window tasks to db server vm
- check_mode: yes
- ntnx_ndb_maintenance_tasks:
+ - name: Create spec for adding maintenance window tasks to db server vm
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_maintenance_tasks:
db_server_vms:
- - name: "{{vm1_name_updated}}"
- - uuid: "test_vm_1"
+ - name: "{{ vm1_name_updated }}"
+ - uuid: test_vm_1
db_server_clusters:
- - uuid: "test_cluter_1"
- - uuid: "test_cluter_2"
+        - uuid: test_cluster_1
+        - uuid: test_cluster_2
maintenance_window:
- name: "{{maintenance.window_name}}"
+ name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls -a"
- post_task_cmd: "ls"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -a"
- post_task_cmd: "ls"
+ - type: OS_PATCHING
+ pre_task_cmd: ls -a
+ post_task_cmd: ls
+ - type: DB_PATCHING
+ pre_task_cmd: ls -a
+ post_task_cmd: ls
register: result
- - name: create spec for removing maintenance window tasks from above created vm
- check_mode: yes
- ntnx_ndb_maintenance_tasks:
+ - name: Create spec for removing maintenance window tasks from above created vm
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_maintenance_tasks:
db_server_vms:
- - uuid: "{{db_server_uuid}}"
+ - uuid: "{{ db_server_uuid }}"
maintenance_window:
- uuid: "{{maintenance.window_uuid}}"
+ uuid: "{{ maintenance.window_uuid }}"
tasks: []
register: result
-
- - name: remove maintenance tasks
- ntnx_ndb_maintenance_tasks:
+ - name: Remove maintenance tasks
+ nutanix.ncp.ntnx_ndb_maintenance_tasks:
db_server_vms:
- - uuid: "{{db_server_uuid}}"
+ - uuid: "{{ db_server_uuid }}"
maintenance_window:
- uuid: "{{maintenance.window_uuid}}"
+ uuid: "{{ maintenance.window_uuid }}"
tasks: []
register: result
    - name: Add maintenance window task for vm
- ntnx_ndb_maintenance_tasks:
+ nutanix.ncp.ntnx_ndb_maintenance_tasks:
db_server_vms:
- - name: "{{vm1_name_updated}}"
+ - name: "{{ vm1_name_updated }}"
maintenance_window:
- name: "{{maintenance.window_name}}"
+ name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls -a"
- post_task_cmd: "ls"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -a"
- post_task_cmd: "ls"
+ - type: OS_PATCHING
+ pre_task_cmd: ls -a
+ post_task_cmd: ls
+ - type: DB_PATCHING
+ pre_task_cmd: ls -a
+ post_task_cmd: ls
register: result
################################### DB server VM unregistration tasks #############################
- - name: generate check mode spec for unregister with default values
- check_mode: yes
- ntnx_ndb_db_server_vms:
- state: "absent"
- wait: True
- uuid: "{{db_server_uuid}}"
+ - name: Generate check mode spec for unregister with default values
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ state: absent
+ wait: true
+ uuid: "{{ db_server_uuid }}"
register: result
- - name: genereate check mode spec for delete vm with vgs and snapshots
- check_mode: yes
- ntnx_ndb_db_server_vms:
- state: "absent"
- uuid: "{{db_server_uuid}}"
- delete_from_cluster: True
- delete_vgs: True
- delete_vm_snapshots: True
+    - name: Generate check mode spec for delete vm with vgs and snapshots
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ state: absent
+ uuid: "{{ db_server_uuid }}"
+ delete_from_cluster: true
+ delete_vgs: true
+ delete_vm_snapshots: true
register: result
- - name: unregister vm
- ntnx_ndb_db_server_vms:
- state: "absent"
- wait: True
- uuid: "{{db_server_uuid}}"
- delete_from_cluster: False
- soft_remove: True
- delete_vgs: True
- delete_vm_snapshots: True
+ - name: Unregister vm
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ state: absent
+ wait: true
+ uuid: "{{ db_server_uuid }}"
+ delete_from_cluster: false
+ soft_remove: true
+ delete_vgs: true
+ delete_vm_snapshots: true
register: result
################################### DB server VM Registration tasks #############################
-
- - name: generate spec for registeration of the previous unregistered vm using check mode
- check_mode: yes
- ntnx_ndb_register_db_server_vm:
- ip: "{{vm_ip}}"
- desc: "register-vm-desc"
+    - name: Generate spec for registration of the previously unregistered vm using check mode
+ check_mode: true
+ nutanix.ncp.ntnx_ndb_register_db_server_vm:
+ ip: "{{ vm_ip }}"
+ desc: register-vm-desc
reset_desc_in_ntnx_cluster: true
cluster:
- name: "{{cluster.cluster1.name}}"
+ name: "{{ cluster.cluster1.name }}"
postgres:
- software_path: "{{postgres.software_home}}"
- private_ssh_key: "check-key"
- username: "{{vm_username}}"
+ software_path: "{{ postgres.software_home }}"
+ private_ssh_key: check-key
+ username: "{{ vm_username }}"
automated_patching:
maintenance_window:
name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls"
- post_task_cmd: "ls -a"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -l"
- post_task_cmd: "ls -F"
- working_directory: "/check"
+ - type: OS_PATCHING
+ pre_task_cmd: ls
+ post_task_cmd: ls -a
+ - type: DB_PATCHING
+ pre_task_cmd: ls -l
+ post_task_cmd: ls -F
+ working_directory: /check
register: result
- - name: register the previous unregistered vm
- ntnx_ndb_register_db_server_vm:
- ip: "{{vm_ip}}"
- desc: "register-vm-desc"
+    - name: Register the previously unregistered vm
+ nutanix.ncp.ntnx_ndb_register_db_server_vm:
+ ip: "{{ vm_ip }}"
+ desc: register-vm-desc
cluster:
- name: "{{cluster.cluster1.name}}"
+ name: "{{ cluster.cluster1.name }}"
postgres:
listener_port: 5432
- software_path: "{{postgres.software_home}}"
- username: "{{vm_username}}"
- password: "{{vm_password}}"
+ software_path: "{{ postgres.software_home }}"
+ username: "{{ vm_username }}"
+ password: "{{ vm_password }}"
automated_patching:
maintenance_window:
name: "{{ maintenance.window_name }}"
tasks:
- - type: "OS_PATCHING"
- pre_task_cmd: "ls"
- post_task_cmd: "ls -a"
- - type: "DB_PATCHING"
- pre_task_cmd: "ls -l"
- post_task_cmd: "ls -F"
+ - type: OS_PATCHING
+ pre_task_cmd: ls
+ post_task_cmd: ls -a
+ - type: DB_PATCHING
+ pre_task_cmd: ls -l
+ post_task_cmd: ls -F
register: result
################################### DB server VM Delete tasks #############################
-
- - name: unregister db server vm
- ntnx_ndb_db_server_vms:
- state: "absent"
- wait: True
- uuid: "{{db_server_uuid}}"
+ - name: Unregister db server vm
+ nutanix.ncp.ntnx_ndb_db_server_vms:
+ state: absent
+ wait: true
+ uuid: "{{ db_server_uuid }}"
delete_from_cluster: false
- delete_vgs: True
- delete_vm_snapshots: True
+ delete_vgs: true
+ delete_vm_snapshots: true
register: result
diff --git a/examples/ndb/provision_postgres_ha_instance_with_ips.yml b/examples/ndb/provision_postgres_ha_instance_with_ips.yml
index 00e95fc68..eb5e2b5c6 100644
--- a/examples/ndb/provision_postgres_ha_instance_with_ips.yml
+++ b/examples/ndb/provision_postgres_ha_instance_with_ips.yml
@@ -1,6 +1,6 @@
---
-# Here we will be deploying high availibility postgres database with static IPs assigned
-# to vms and virtul IP for HA proxy
+# Here we will be deploying high availability postgres database with static IPs assigned
+# to vms and virtual IP for HA proxy
- name: Create stretched vlan
hosts: localhost
gather_facts: false
diff --git a/examples/roles_crud.yml b/examples/roles_crud.yml
index b01c02eca..d364c804f 100644
--- a/examples/roles_crud.yml
+++ b/examples/roles_crud.yml
@@ -1,8 +1,6 @@
- name: Roles crud playbook. Here we will create, update, read and delete the role.
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -10,12 +8,12 @@
nutanix_password:
validate_certs: false
tasks:
- - name: get some permissions for adding in roles
- ntnx_permissions_info:
+ - name: Get some permissions for adding in roles
+ nutanix.ncp.ntnx_permissions_info:
register: permissions
    - name: Create a role with 2 permissions. Here we will be using name or uuid for referencing permissions
- ntnx_roles:
+ nutanix.ncp.ntnx_roles:
state: present
name: test-ansible-role-1
desc:
@@ -26,7 +24,7 @@
register: role1
- name: Update role
- ntnx_roles:
+ nutanix.ncp.ntnx_roles:
state: present
role_uuid: "{{ role1.role_uuid }}"
name: test-ansible-role-1
@@ -36,16 +34,16 @@
register: updated_role1
- name: Read the updated role
- ntnx_roles_info:
+ nutanix.ncp.ntnx_roles_info:
role_uuid: "{{ updated_role1.role_uuid }}"
register: role1_info
- name: Print the role details
- debug:
+ ansible.builtin.debug:
msg: "{{role1_info}}"
- name: Delete the role.
- ntnx_roles:
+ nutanix.ncp.ntnx_roles:
state: absent
role_uuid: "{{ updated_role1.role_uuid }}"
wait: true
diff --git a/examples/vm.yml b/examples/vm.yml
index f88ab7064..e6c83b471 100644
--- a/examples/vm.yml
+++ b/examples/vm.yml
@@ -2,8 +2,6 @@
- name: VM playbook
hosts: localhost
gather_facts: false
- collections:
- - nutanix.ncp
module_defaults:
group/nutanix.ncp.ntnx:
nutanix_host:
@@ -11,8 +9,8 @@
nutanix_password:
validate_certs: false
tasks:
- - name: Setting Variables
- set_fact:
+ - name: Setting Variables
+ ansible.builtin.set_fact:
cluster_name: ""
script_path: ""
subnet_name: ""
@@ -20,55 +18,56 @@
password: ""
fqdn: ""
- - name: Create Cloud-init Script file
- copy:
- dest: "cloud_init.yml"
- content: |
- #cloud-config
- chpasswd:
- list: |
- root: "{{ password }}"
- expire: False
- fqdn: "{{ fqdn }}"
+ - name: Create Cloud-init Script file
+ ansible.builtin.copy:
+ mode: "0644"
+ dest: "cloud_init.yml"
+ content: |
+ #cloud-config
+ chpasswd:
+ list: |
+ root: "{{ password }}"
+ expire: False
+ fqdn: "{{ fqdn }}"
- - name: create Vm
- ntnx_vms:
- state: present
- name: "ansible_automation_demo"
- desc: "ansible_vm_description"
- categories:
- AppType:
- - "Apache_Spark"
- cluster:
- name: "{{cluster_name}}"
- networks:
- - is_connected: True
- subnet:
- name: "{{ subnet_name }}"
- # mention cluster only when there are multiple subnets with same name accross clusters
- # and subnet name is set above
- cluster:
- name: "{{cluster_name}}"
- disks:
- - type: "DISK"
- size_gb: 30
- bus: "SATA"
- clone_image:
- name: "{{ image_name }}"
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- guest_customization:
- type: "cloud_init"
- script_path: "./cloud_init.yml"
- is_overridable: True
- register: output
+ - name: Create Vm
+ nutanix.ncp.ntnx_vms:
+ state: present
+ name: "ansible_automation_demo"
+ desc: "ansible_vm_description"
+ categories:
+ AppType:
+ - "Apache_Spark"
+ cluster:
+        name: "{{ cluster_name }}"
+ networks:
+ - is_connected: true
+ subnet:
+ name: "{{ subnet_name }}"
+ # mention cluster only when there are multiple subnets with same name accross clusters
+ # and subnet name is set above
+ cluster:
+          name: "{{ cluster_name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 30
+ bus: "SATA"
+ clone_image:
+ name: "{{ image_name }}"
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ guest_customization:
+ type: "cloud_init"
+ script_path: "./cloud_init.yml"
+ is_overridable: true
+ register: output
- - name: output of vm created
- debug:
- msg: '{{ output }}'
+ - name: Output of vm created
+ ansible.builtin.debug:
+ msg: "{{ output }}"
- - name: delete VM
- ntnx_vms:
- state: absent
- vm_uuid: "{{output.vm_uuid}}"
+ - name: Delete VM
+ nutanix.ncp.ntnx_vms:
+ state: absent
+        vm_uuid: "{{ output.vm_uuid }}"
diff --git a/plugins/module_utils/entity.py b/plugins/module_utils/entity.py
index b4c256703..9ccc76e0f 100644
--- a/plugins/module_utils/entity.py
+++ b/plugins/module_utils/entity.py
@@ -344,7 +344,7 @@ def _build_url_with_query(self, url, query):
def _fetch_url(
self, url, method, data=None, raise_error=True, no_response=False, timeout=30
):
- # only jsonify if content-type supports, added to avoid in case of form-url-encodeded type data
+ # only jsonify if content-type supports, added to avoid in case of form-url-encoded type data
if self.headers["Content-Type"] == "application/json" and data is not None:
data = self.module.jsonify(data)
diff --git a/plugins/module_utils/foundation/image_nodes.py b/plugins/module_utils/foundation/image_nodes.py
index 4e11760fb..43fb40ba3 100644
--- a/plugins/module_utils/foundation/image_nodes.py
+++ b/plugins/module_utils/foundation/image_nodes.py
@@ -404,7 +404,7 @@ def _get_default_node_spec(self, node):
"ucsm_node_serial": None,
"ucsm_managed_mode": None,
"ucsm_params": None,
- "exlude_boot_serial": False,
+ "exclude_boot_serial": False,
"mitigate_low_boot_space": False,
"bond_uplinks": [],
"vswitches": [],
diff --git a/plugins/module_utils/ndb/database_engines/postgres.py b/plugins/module_utils/ndb/database_engines/postgres.py
index 223cad085..b9c29be67 100644
--- a/plugins/module_utils/ndb/database_engines/postgres.py
+++ b/plugins/module_utils/ndb/database_engines/postgres.py
@@ -115,7 +115,7 @@ def build_spec_db_instance_provision_action_arguments(self, payload, config):
spec = {"name": key, "value": config.get(key, value)}
action_arguments.append(spec)
- # handle scenariors where display names are diff
+        # handle scenarios where display names are different
action_arguments.append(
{"name": "database_names", "value": config.get("db_name")}
)
@@ -212,7 +212,7 @@ def build_spec_db_instance_provision_action_arguments(self, payload, config):
spec = {"name": key, "value": config.get(key, default)}
action_arguments.append(spec)
- # handle scenariors where display names are different
+ # handle scenarios where display names are different
action_arguments.append(
{"name": "database_names", "value": config.get("db_name")}
)
diff --git a/plugins/module_utils/ndb/profiles/profile_types.py b/plugins/module_utils/ndb/profiles/profile_types.py
index d51c7c6c3..f651d9904 100644
--- a/plugins/module_utils/ndb/profiles/profile_types.py
+++ b/plugins/module_utils/ndb/profiles/profile_types.py
@@ -252,7 +252,7 @@ def get_create_profile_spec(self, old_spec=None, params=None, **kwargs):
self.build_spec_methods.update(
{
"software": self._build_spec_profile,
- "clusters": self._build_spec_clusters_availibilty,
+ "clusters": self._build_spec_clusters_availability,
}
)
payload, err = super().get_create_profile_spec(
@@ -269,7 +269,7 @@ def get_create_profile_spec(self, old_spec=None, params=None, **kwargs):
def get_update_profile_spec(self, old_spec=None, params=None, **kwargs):
self.build_spec_methods.update(
- {"clusters": self._build_spec_clusters_availibilty}
+ {"clusters": self._build_spec_clusters_availability}
)
payload, err = super().get_update_profile_spec(old_spec, params, **kwargs)
if err:
@@ -374,7 +374,7 @@ def _build_spec_version_create_properties(
payload["properties"] = properties
return payload, None
- def _build_spec_clusters_availibilty(self, payload, clusters):
+ def _build_spec_clusters_availability(self, payload, clusters):
_clusters = Cluster(self.module)
spec = []
clusters_name_uuid_map = _clusters.get_all_clusters_name_uuid_map()
diff --git a/plugins/module_utils/prism/idempotence_identifiers.py b/plugins/module_utils/prism/idempotence_identifiers.py
index f8e5be4ca..ed52b1928 100644
--- a/plugins/module_utils/prism/idempotence_identifiers.py
+++ b/plugins/module_utils/prism/idempotence_identifiers.py
@@ -9,10 +9,10 @@
from .prism import Prism
-class IdempotenceIdenitifiers(Prism):
+class IdempotenceIdentifiers(Prism):
def __init__(self, module):
resource_type = "/idempotence_identifiers"
- super(IdempotenceIdenitifiers, self).__init__(
+ super(IdempotenceIdentifiers, self).__init__(
module, resource_type=resource_type
)
diff --git a/plugins/module_utils/prism/projects_internal.py b/plugins/module_utils/prism/projects_internal.py
index 4c7ced086..61f235e8e 100644
--- a/plugins/module_utils/prism/projects_internal.py
+++ b/plugins/module_utils/prism/projects_internal.py
@@ -8,7 +8,7 @@
from .accounts import Account, get_account_uuid
from .acps import ACP
from .clusters import Cluster
-from .idempotence_identifiers import IdempotenceIdenitifiers
+from .idempotence_identifiers import IdempotenceIdentifiers
from .prism import Prism
from .roles import get_role_uuid
from .subnets import Subnet, get_subnet_uuid
@@ -133,9 +133,9 @@ def _build_spec_default_subnet(self, payload, subnet_ref):
if err:
return None, err
- payload["spec"]["project_detail"]["resources"][
- "default_subnet_reference"
- ] = Subnet.build_subnet_reference_spec(uuid)
+ payload["spec"]["project_detail"]["resources"]["default_subnet_reference"] = (
+ Subnet.build_subnet_reference_spec(uuid)
+ )
return payload, None
def _build_spec_subnets(self, payload, subnet_ref_list):
@@ -193,7 +193,7 @@ def _build_spec_user_and_user_groups_list(self, payload, role_mappings):
):
new_uuids_required += 1
- ii = IdempotenceIdenitifiers(self.module)
+ ii = IdempotenceIdentifiers(self.module)
# get uuids for user groups
new_uuid_list = ii.get_idempotent_uuids(new_uuids_required)
@@ -393,13 +393,11 @@ def _build_spec_role_mappings(self, payload, role_mappings):
acp["acp"]["resources"]["user_reference_list"] = role_user_groups_map[
acp["acp"]["resources"]["role_reference"]["uuid"]
]["users"]
- acp["acp"]["resources"][
- "user_group_reference_list"
- ] = role_user_groups_map[
- acp["acp"]["resources"]["role_reference"]["uuid"]
- ][
- "user_groups"
- ]
+ acp["acp"]["resources"]["user_group_reference_list"] = (
+ role_user_groups_map[
+ acp["acp"]["resources"]["role_reference"]["uuid"]
+ ]["user_groups"]
+ )
# pop the role uuid entry once used for acp update
role_user_groups_map.pop(
diff --git a/plugins/module_utils/prism/protection_rules.py b/plugins/module_utils/prism/protection_rules.py
index 5808eaae3..1f0d017b5 100644
--- a/plugins/module_utils/prism/protection_rules.py
+++ b/plugins/module_utils/prism/protection_rules.py
@@ -91,13 +91,13 @@ def _build_spec_schedules(self, payload, schedules):
az_connection_spec = {}
spec = {}
if schedule.get("source"):
- az_connection_spec[
- "source_availability_zone_index"
- ] = ordered_az_list.index(schedule["source"])
+ az_connection_spec["source_availability_zone_index"] = (
+ ordered_az_list.index(schedule["source"])
+ )
if schedule.get("destination"):
- az_connection_spec[
- "destination_availability_zone_index"
- ] = ordered_az_list.index(schedule["destination"])
+ az_connection_spec["destination_availability_zone_index"] = (
+ ordered_az_list.index(schedule["destination"])
+ )
if schedule["protection_type"] == "ASYNC":
if (
@@ -110,7 +110,7 @@ def _build_spec_schedules(self, payload, schedules):
):
return (
None,
- "rpo, rpo_unit, snapshot_type and at least one policy are required fields for aysynchronous snapshot schedule",
+ "rpo, rpo_unit, snapshot_type and at least one policy are required fields for asynchronous snapshot schedule",
)
spec["recovery_point_objective_secs"], err = convert_to_secs(
diff --git a/plugins/modules/ntnx_acps.py b/plugins/modules/ntnx_acps.py
index 6ca411489..85dc2ee65 100644
--- a/plugins/modules/ntnx_acps.py
+++ b/plugins/modules/ntnx_acps.py
@@ -157,14 +157,14 @@
user_group_uuids:
- "{{ user_group_uuid }}"
-- name: Create ACP with all specfactions
+- name: Create ACP with all specifications
ntnx_acps:
validate_certs: False
state: present
nutanix_host: "{{ IP }}"
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
- name: acp_with_all_specfactions
+ name: acp_with_all_specifications
role:
uuid: "{{ role.uuid }}"
user_uuids:
diff --git a/plugins/modules/ntnx_categories.py b/plugins/modules/ntnx_categories.py
index 9b01609d2..a094b4fe2 100644
--- a/plugins/modules/ntnx_categories.py
+++ b/plugins/modules/ntnx_categories.py
@@ -202,7 +202,7 @@ def create_categories(module, result):
if value not in category_key_values:
category_values_specs.append(_category_value.get_value_spec(value))
- # indempotency check
+ # idempotency check
if not category_values_specs and (
category_key_exists and (category_key == category_key_spec)
):
diff --git a/plugins/modules/ntnx_clusters_info.py b/plugins/modules/ntnx_clusters_info.py
index 33b100eb3..5e1ecf6aa 100644
--- a/plugins/modules/ntnx_clusters_info.py
+++ b/plugins/modules/ntnx_clusters_info.py
@@ -32,7 +32,7 @@
- Alaa Bishtawi (@alaa-bish)
"""
EXAMPLES = r"""
- - name: List clusterss
+ - name: List clusters
ntnx_clusters_info:
nutanix_host: "{{ ip }}"
nutanix_username: "{{ username }}"
diff --git a/plugins/modules/ntnx_floating_ips.py b/plugins/modules/ntnx_floating_ips.py
index 3986b73e0..c858adcac 100644
--- a/plugins/modules/ntnx_floating_ips.py
+++ b/plugins/modules/ntnx_floating_ips.py
@@ -82,7 +82,7 @@
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
external_subnet:
- uuid: "{{external_subnet.subnet_uuiid}}"
+ uuid: "{{external_subnet.subnet_uuid}}"
- name: create Floating IP with vpc Name with external subnet uuid
ntnx_floating_ips:
@@ -92,7 +92,7 @@
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
external_subnet:
- uuid: "{{external_subnet.subnet_uuiid}}"
+ uuid: "{{external_subnet.subnet_uuid}}"
vpc:
name: "{{vpc.vpc_name}}"
private_ip: "{{private_ip}}"
diff --git a/plugins/modules/ntnx_foundation.py b/plugins/modules/ntnx_foundation.py
index 8c373fcb9..803e2f2b2 100644
--- a/plugins/modules/ntnx_foundation.py
+++ b/plugins/modules/ntnx_foundation.py
@@ -207,7 +207,7 @@
required: false
bond_mode:
description:
- - bonde mode, "dynamic" if using LACP, "static" for LAG
+ - bond mode, "dynamic" if using LACP, "static" for LAG
type: str
choices:
- dynamic
@@ -247,7 +247,7 @@
- UCSM node serial
type: bool
required: false
- exlude_boot_serial:
+ exclude_boot_serial:
description:
- serial of boot device to be excluded, used by NX G6 platforms
type: bool
@@ -471,7 +471,7 @@
required: false
bond_mode:
description:
- - bonde mode, "dynamic" if using LACP, "static" for LAG
+ - bond mode, "dynamic" if using LACP, "static" for LAG
type: str
choices:
- dynamic
@@ -501,7 +501,7 @@
- UCSM Managed mode
type: str
required: false
- exlude_boot_serial:
+ exclude_boot_serial:
description:
- serial of boot device to be excluded, used by NX G6 platforms
type: bool
@@ -1054,7 +1054,7 @@ def get_module_spec():
ucsm_node_serial=dict(type="str", required=False),
image_successful=dict(type="bool", required=False),
ucsm_managed_mode=dict(type="str", required=False),
- exlude_boot_serial=dict(type="bool", required=False),
+ exclude_boot_serial=dict(type="bool", required=False),
mitigate_low_boot_space=dict(type="bool", required=False),
vswitches=dict(type="list", elements="dict", options=vswitches, required=False),
ucsm_params=dict(type="dict", options=ucsm_params, required=False),
@@ -1094,7 +1094,7 @@ def get_module_spec():
rdma_passthrough=dict(type="bool", required=False),
ucsm_node_serial=dict(type="str", required=False),
ucsm_managed_mode=dict(type="str", required=False),
- exlude_boot_serial=dict(type="bool", required=False),
+ exclude_boot_serial=dict(type="bool", required=False),
mitigate_low_boot_space=dict(type="bool", required=False),
bond_uplinks=dict(type="list", elements="str", required=False),
vswitches=dict(type="list", elements="dict", options=vswitches, required=False),
diff --git a/plugins/modules/ntnx_foundation_central_imaged_clusters_info.py b/plugins/modules/ntnx_foundation_central_imaged_clusters_info.py
index deae2d704..a371db69e 100644
--- a/plugins/modules/ntnx_foundation_central_imaged_clusters_info.py
+++ b/plugins/modules/ntnx_foundation_central_imaged_clusters_info.py
@@ -10,7 +10,7 @@
DOCUMENTATION = r"""
---
module: ntnx_foundation_central_imaged_clusters_info
-short_description: Nutanix module which returns the imaged clusters within the Foudation Central
+short_description: Nutanix module which returns the imaged clusters within the Foundation Central
version_added: 1.1.0
description: 'List all the imaged clusters created in Foundation Central.'
options:
diff --git a/plugins/modules/ntnx_foundation_central_imaged_nodes_info.py b/plugins/modules/ntnx_foundation_central_imaged_nodes_info.py
index 4a62ec521..33bdc2e8a 100644
--- a/plugins/modules/ntnx_foundation_central_imaged_nodes_info.py
+++ b/plugins/modules/ntnx_foundation_central_imaged_nodes_info.py
@@ -10,7 +10,7 @@
DOCUMENTATION = r"""
---
module: ntnx_foundation_central_imaged_nodes_info
-short_description: Nutanix module which returns the imaged nodes within the Foudation Central
+short_description: Nutanix module which returns the imaged nodes within the Foundation Central
version_added: 1.1.0
description: 'List all the imaged nodes created in Foundation Central.'
options:
diff --git a/plugins/modules/ntnx_image_placement_policy.py b/plugins/modules/ntnx_image_placement_policy.py
index 2d7da6e3e..ae276f4c5 100644
--- a/plugins/modules/ntnx_image_placement_policy.py
+++ b/plugins/modules/ntnx_image_placement_policy.py
@@ -146,7 +146,7 @@
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
validate_certs: False
- name: "test_policy_2-uodated"
+ name: "test_policy_2-updated"
desc: "test_policy_2_desc-updated"
placement_type: hard
categories:
diff --git a/plugins/modules/ntnx_images_info.py b/plugins/modules/ntnx_images_info.py
index 96fbd78f2..bd9ac57d1 100644
--- a/plugins/modules/ntnx_images_info.py
+++ b/plugins/modules/ntnx_images_info.py
@@ -136,7 +136,7 @@
"resources": {
"architecture": "X86_64",
"image_type": "DISK_IMAGE",
- "source_uri": ""
+ "source_uri": ""
}
},
"status": {
diff --git a/plugins/modules/ntnx_ndb_authorize_db_server_vms.py b/plugins/modules/ntnx_ndb_authorize_db_server_vms.py
index 17b47061a..c9816b4b2 100644
--- a/plugins/modules/ntnx_ndb_authorize_db_server_vms.py
+++ b/plugins/modules/ntnx_ndb_authorize_db_server_vms.py
@@ -74,7 +74,7 @@
"""
RETURN = r"""
response:
- description: An intentful representation of a authorizisation status
+ description: An intentful representation of an authorization status
returned: always
type: dict
sample: {
diff --git a/plugins/modules/ntnx_ndb_clusters.py b/plugins/modules/ntnx_ndb_clusters.py
index 24a3b6a4e..964906722 100644
--- a/plugins/modules/ntnx_ndb_clusters.py
+++ b/plugins/modules/ntnx_ndb_clusters.py
@@ -141,7 +141,7 @@
"""
EXAMPLES = r"""
- - name: Register Cluster with prisim_vlan
+ - name: Register Cluster with prism_vlan
ntnx_ndb_clusters:
nutanix_host: ""
nutanix_username: ""
diff --git a/plugins/modules/ntnx_ndb_database_clones.py b/plugins/modules/ntnx_ndb_database_clones.py
index c7277b46e..56276b9a7 100644
--- a/plugins/modules/ntnx_ndb_database_clones.py
+++ b/plugins/modules/ntnx_ndb_database_clones.py
@@ -749,13 +749,13 @@ def get_clone_spec(module, result, time_machine_uuid):
provision_new_server = (
True if module.params.get("db_vm", {}).get("create_new_server") else False
)
- use_athorized_server = not provision_new_server
+ use_authorized_server = not provision_new_server
kwargs = {
"time_machine_uuid": time_machine_uuid,
"db_clone": True,
"provision_new_server": provision_new_server,
- "use_authorized_server": use_athorized_server,
+ "use_authorized_server": use_authorized_server,
}
spec, err = db_server_vms.get_spec(old_spec=spec, **kwargs)
diff --git a/plugins/modules/ntnx_ndb_databases_info.py b/plugins/modules/ntnx_ndb_databases_info.py
index a9e5430b7..c3644ed08 100644
--- a/plugins/modules/ntnx_ndb_databases_info.py
+++ b/plugins/modules/ntnx_ndb_databases_info.py
@@ -35,7 +35,7 @@
type: bool
load_dbserver_cluster:
description:
- - load db serverv cluster in response
+ - load db server cluster in response
type: bool
order_by_dbserver_cluster:
description:
diff --git a/plugins/modules/ntnx_ndb_db_server_vms.py b/plugins/modules/ntnx_ndb_db_server_vms.py
index adaf14702..2bdec4e48 100644
--- a/plugins/modules/ntnx_ndb_db_server_vms.py
+++ b/plugins/modules/ntnx_ndb_db_server_vms.py
@@ -111,7 +111,7 @@
type: str
version_uuid:
description:
- - version UUID for softwware profile
+ - version UUID for software profile
- if not given then latest version will be used
type: str
time_machine:
diff --git a/plugins/modules/ntnx_ndb_maintenance_tasks.py b/plugins/modules/ntnx_ndb_maintenance_tasks.py
index 8e9c3f111..b157f29ed 100644
--- a/plugins/modules/ntnx_ndb_maintenance_tasks.py
+++ b/plugins/modules/ntnx_ndb_maintenance_tasks.py
@@ -125,7 +125,7 @@
"accessLevel": null,
"dateCreated": "2023-02-25 06:34:44",
"dateModified": "2023-02-28 00:00:00",
- "description": "anisble-created-window",
+ "description": "ansible-created-window",
"entityTaskAssoc": [
{
"accessLevel": null,
diff --git a/plugins/modules/ntnx_ndb_maintenance_window.py b/plugins/modules/ntnx_ndb_maintenance_window.py
index f8299636b..12a18e40c 100644
--- a/plugins/modules/ntnx_ndb_maintenance_window.py
+++ b/plugins/modules/ntnx_ndb_maintenance_window.py
@@ -9,9 +9,9 @@
DOCUMENTATION = r"""
---
module: ntnx_ndb_maintenance_window
-short_description: module to create, update and delete mainetance window
+short_description: module to create, update and delete maintenance window
version_added: 1.8.0
-description: module to create, update and delete mainetance window
+description: module to create, update and delete maintenance window
options:
name:
description:
@@ -71,7 +71,7 @@
- name: create window with weekly schedule
ntnx_ndb_maintenance_window:
name: "{{window1_name}}"
- desc: "anisble-created-window"
+ desc: "ansible-created-window"
schedule:
recurrence: "weekly"
duration: 2
@@ -83,7 +83,7 @@
- name: create window with monthly schedule
ntnx_ndb_maintenance_window:
name: "{{window2_name}}"
- desc: "anisble-created-window"
+ desc: "ansible-created-window"
schedule:
recurrence: "monthly"
duration: 2
diff --git a/plugins/modules/ntnx_ndb_maintenance_windows_info.py b/plugins/modules/ntnx_ndb_maintenance_windows_info.py
index b2d0c6b61..b7ce07a34 100644
--- a/plugins/modules/ntnx_ndb_maintenance_windows_info.py
+++ b/plugins/modules/ntnx_ndb_maintenance_windows_info.py
@@ -49,7 +49,7 @@
"accessLevel": null,
"dateCreated": "2023-02-25 06:34:44",
"dateModified": "2023-02-28 00:00:00",
- "description": "anisble-created-window",
+ "description": "ansible-created-window",
"entityTaskAssoc": [
{
"accessLevel": null,
diff --git a/plugins/modules/ntnx_ndb_profiles.py b/plugins/modules/ntnx_ndb_profiles.py
index d0a3de9e5..d842cd790 100644
--- a/plugins/modules/ntnx_ndb_profiles.py
+++ b/plugins/modules/ntnx_ndb_profiles.py
@@ -279,7 +279,7 @@
type: int
autovacuum:
description:
- - on/off autovaccum
+ - on/off autovacuum
- default is on
type: str
choices: ["on", "off"]
@@ -305,7 +305,7 @@
type: float
autovacuum_work_mem:
description:
- - autovacum work memory in KB
+ - autovacuum work memory in KB
- default is -1
type: int
autovacuum_max_workers:
diff --git a/plugins/modules/ntnx_ndb_register_database.py b/plugins/modules/ntnx_ndb_register_database.py
index 0c8b963c9..bb69f92f2 100644
--- a/plugins/modules/ntnx_ndb_register_database.py
+++ b/plugins/modules/ntnx_ndb_register_database.py
@@ -182,7 +182,7 @@
default: true
postgres:
description:
- - potgres related configuration
+ - postgres related configuration
type: dict
suboptions:
listener_port:
diff --git a/plugins/modules/ntnx_ndb_register_db_server_vm.py b/plugins/modules/ntnx_ndb_register_db_server_vm.py
index 3c5a9c240..ad49e4ebb 100644
--- a/plugins/modules/ntnx_ndb_register_db_server_vm.py
+++ b/plugins/modules/ntnx_ndb_register_db_server_vm.py
@@ -45,7 +45,7 @@
type: str
postgres:
description:
- - potgres related configuration
+ - postgres related configuration
type: dict
suboptions:
listener_port:
diff --git a/plugins/modules/ntnx_ndb_vlans.py b/plugins/modules/ntnx_ndb_vlans.py
index 709844912..e020c9f0f 100644
--- a/plugins/modules/ntnx_ndb_vlans.py
+++ b/plugins/modules/ntnx_ndb_vlans.py
@@ -187,12 +187,12 @@
type: str
sample: "Static"
managed:
- description: mannaged or unmannged vlan
+ description: managed or unmanaged vlan
returned: always
type: bool
propertiesMap:
- description: confiuration of static vlan
+ description: configuration of static vlan
type: dict
returned: always
sample:
@@ -232,7 +232,7 @@
]
properties:
- description: list of confiuration of static vlan
+ description: list of configuration of static vlan
type: list
returned: always
sample:
diff --git a/plugins/modules/ntnx_projects.py b/plugins/modules/ntnx_projects.py
index 935917f8a..e2de402b9 100644
--- a/plugins/modules/ntnx_projects.py
+++ b/plugins/modules/ntnx_projects.py
@@ -357,7 +357,7 @@
from ..module_utils.base_module import BaseModule # noqa: E402
from ..module_utils.prism.idempotence_identifiers import ( # noqa: E402
- IdempotenceIdenitifiers,
+ IdempotenceIdentifiers,
)
from ..module_utils.prism.projects import Project # noqa: E402
from ..module_utils.prism.projects_internal import ProjectsInternal # noqa: E402
@@ -485,7 +485,7 @@ def create_project(module, result):
if module.params.get("role_mappings"):
# generate new uuid for project
- ii = IdempotenceIdenitifiers(module)
+ ii = IdempotenceIdentifiers(module)
uuids = ii.get_idempotent_uuids()
projects = ProjectsInternal(module, uuid=uuids[0])
diff --git a/plugins/modules/ntnx_recovery_plans.py b/plugins/modules/ntnx_recovery_plans.py
index ed9e1afaa..5b5eb9910 100644
--- a/plugins/modules/ntnx_recovery_plans.py
+++ b/plugins/modules/ntnx_recovery_plans.py
@@ -567,7 +567,7 @@
{
"ip_config_list": [
{
- "ip_address": "cutom_ip_1"
+ "ip_address": "custom_ip_1"
}
],
"vm_reference": {
diff --git a/plugins/modules/ntnx_roles.py b/plugins/modules/ntnx_roles.py
index 72315cae0..b400d425a 100644
--- a/plugins/modules/ntnx_roles.py
+++ b/plugins/modules/ntnx_roles.py
@@ -80,7 +80,7 @@
name: test-ansible-role-1
desc: test-ansible-role-1-desc
permissions:
- - name: ""
+ - name: ""
- uuid: ""
- uuid: ""
wait: true
diff --git a/plugins/modules/ntnx_service_groups.py b/plugins/modules/ntnx_service_groups.py
index 855288373..532608f24 100644
--- a/plugins/modules/ntnx_service_groups.py
+++ b/plugins/modules/ntnx_service_groups.py
@@ -84,7 +84,7 @@
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
validate_certs: False
- name: app_srvive_group
+ name: app_service_group
desc: desc
service_details:
tcp:
@@ -102,7 +102,7 @@
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
validate_certs: False
- name: icmp_srvive_group
+ name: icmp_service_group
desc: desc
service_details:
icmp:
diff --git a/plugins/modules/ntnx_user_groups_info.py b/plugins/modules/ntnx_user_groups_info.py
index 0c0875e31..a4149189e 100644
--- a/plugins/modules/ntnx_user_groups_info.py
+++ b/plugins/modules/ntnx_user_groups_info.py
@@ -154,7 +154,7 @@
"name": "qanucalm",
"uuid": "00000000-0000-0000-0000-000000000000"
},
- "distinguished_name": ""
+ "distinguished_name": ""
},
"display_name": "name1",
"projects_reference_list": [],
diff --git a/plugins/modules/ntnx_vms.py b/plugins/modules/ntnx_vms.py
index 0cb748498..d495a6a22 100644
--- a/plugins/modules/ntnx_vms.py
+++ b/plugins/modules/ntnx_vms.py
@@ -444,14 +444,14 @@
- name: Create VM with minimum requirements with hard_poweroff operation
ntnx_vms:
state: hard_poweroff
- name: integration_test_opperations_vm
+ name: integration_test_operations_vm
cluster:
name: "{{ cluster.name }}"
- name: Create VM with minimum requirements with poweroff operation
ntnx_vms:
state: power_off
- name: integration_test_opperations_vm
+ name: integration_test_operations_vm
cluster:
name: "{{ cluster.name }}"
"""
@@ -928,9 +928,9 @@ def update_vm(module, result):
wait_for_task_completion(module, result, False)
response_state = result["response"].get("status")
if response_state == "FAILED":
- result[
- "warning"
- ] = "VM 'soft_shutdown' operation failed, use 'hard_poweroff' instead"
+ result["warning"] = (
+ "VM 'soft_shutdown' operation failed, use 'hard_poweroff' instead"
+ )
resp = vm.read(vm_uuid)
result["response"] = resp
diff --git a/plugins/modules/ntnx_vpcs.py b/plugins/modules/ntnx_vpcs.py
index 8f7d0c658..3bbf3fea3 100644
--- a/plugins/modules/ntnx_vpcs.py
+++ b/plugins/modules/ntnx_vpcs.py
@@ -84,14 +84,14 @@
name: vpc_with_dns_servers
dns_servers: "{{ dns_servers }}"
- - name: Create VPC with all specfactions
+ - name: Create VPC with all specifications
ntnx_vpcs:
validate_certs: False
state: present
nutanix_host: "{{ ip }}"
nutanix_username: "{{ username }}"
nutanix_password: "{{ password }}"
- name: vpc_with_add_specfactions
+ name: vpc_with_add_specifications
external_subnets:
- subnet_name: "{{ external_subnet.name }}"
dns_servers: "{{ dns_servers }}"
diff --git a/scripts/codegen.py b/scripts/codegen.py
index c182b97f1..747af2d9c 100644
--- a/scripts/codegen.py
+++ b/scripts/codegen.py
@@ -14,7 +14,7 @@
DOCUMENTATION = r"""
---
module: ntnx_MNAME
-short_description: MNAME module which suports INAME CRUD operations
+short_description: MNAME module which supports INAME CRUD operations
version_added: 1.0.0
description: 'Create, Update, Delete MNAME'
options:
@@ -192,7 +192,7 @@ def __init__(self, module):
super(CNAME, self).__init__(module, resource_type=resource_type)
self.build_spec_methods = {
# Step 2. This is a Map of
- # ansible attirbute and corresponding API spec generation method
+ # ansible attribute and corresponding API spec generation method
# Example: method name should start with _build_spec_
# name: _build_spec_name
}
diff --git a/tests/integration/targets/ntnx_acps/tasks/create_acps.yml b/tests/integration/targets/ntnx_acps/tasks/create_acps.yml
index 06125d5b6..fc1490dc0 100644
--- a/tests/integration/targets/ntnx_acps/tasks/create_acps.yml
+++ b/tests/integration/targets/ntnx_acps/tasks/create_acps.yml
@@ -10,7 +10,6 @@
acp4_name: "{{random_name[0]}}4"
acp5_name: "{{random_name[0]}}5"
-
- name: Create min ACP
ntnx_acps:
state: present
@@ -38,7 +37,7 @@
wait: true
name: "{{acp2_name}}"
role:
- uuid: '{{ acp.role.uuid }}'
+ uuid: "{{ acp.role.uuid }}"
check_mode: false
register: result
ignore_errors: True
@@ -108,7 +107,7 @@
- set_fact:
todelete: "{{ todelete + [ result.acp_uuid ] }}"
##########################################################
-- name: Create ACP with all specfactions
+- name: Create ACP with all specifications
ntnx_acps:
state: present
name: "{{acp4_name}}"
@@ -134,7 +133,7 @@
operator: IN
rhs:
uuid_list:
- - "{{ network.dhcp.uuid }}"
+ - "{{ network.dhcp.uuid }}"
- scope_filter:
- lhs: CATEGORY
operator: IN
@@ -172,8 +171,8 @@
- result.response.status.resources.filter_list.context_list.1.entity_filter_expression_list.0.right_hand_side.collection == "ALL"
- result.response.status.resources.filter_list.context_list.1.scope_filter_expression_list.0.operator == "IN"
- result.response.status.resources.filter_list.context_list.1.scope_filter_expression_list.0.left_hand_side == "CATEGORY"
- fail_msg: " Unable to Create ACP all specfactions "
- success_msg: " ACP with all specfactions created successfully "
+ fail_msg: " Unable to Create ACP with all specifications "
+ success_msg: " ACP with all specifications created successfully "
- set_fact:
todelete: "{{ todelete + [ result.acp_uuid ] }}"
diff --git a/tests/integration/targets/ntnx_acps/tasks/delete_acp.yml b/tests/integration/targets/ntnx_acps/tasks/delete_acp.yml
index f988ef708..ed8eb9306 100644
--- a/tests/integration/targets/ntnx_acps/tasks/delete_acp.yml
+++ b/tests/integration/targets/ntnx_acps/tasks/delete_acp.yml
@@ -6,7 +6,7 @@
- set_fact:
acp1_name: "{{random_name[0]}}1"
-- name: Create ACP with all specfactions
+- name: Create ACP with all specifications
ntnx_acps:
state: present
name: "{{acp1_name}}"
@@ -18,15 +18,13 @@
- "{{ acp.user_group_uuid }}"
filters:
- scope_filter:
- -
- lhs: PROJECT
+ - lhs: PROJECT
operator: IN
rhs:
uuid_list:
- "{{ project.uuid }}"
entity_filter:
- -
- lhs: ALL
+ - lhs: ALL
operator: IN
rhs:
collection: ALL
@@ -47,9 +45,8 @@
- result.response.status.resources.filter_list.context_list.0.scope_filter_expression_list.0.operator == "IN"
- result.response.status.resources.filter_list.context_list.0.scope_filter_expression_list.0.left_hand_side == "PROJECT"
- result.response.status.resources.filter_list.context_list.0.scope_filter_expression_list.0.right_hand_side.uuid_list.0 == "{{ project.uuid }}"
- fail_msg: " Unable to Create ACP all specfactions "
- success_msg: " ACP with all specfactions created successfully "
-
+ fail_msg: " Unable to Create ACP with all specifications "
+ success_msg: " ACP with all specifications created successfully "
- name: Delete acp
ntnx_acps:
@@ -65,5 +62,5 @@
- result.response.status == 'SUCCEEDED'
- result.failed == false
- result.changed == true
- fail_msg: " Unable to delete ACP with all specfactions "
+ fail_msg: " Unable to delete ACP with all specifications "
success_msg: " ACP has been deleted successfully "
diff --git a/tests/integration/targets/ntnx_address_groups/tasks/create.yml b/tests/integration/targets/ntnx_address_groups/tasks/create.yml
index 59a2e0cef..b9705da90 100644
--- a/tests/integration/targets/ntnx_address_groups/tasks/create.yml
+++ b/tests/integration/targets/ntnx_address_groups/tasks/create.yml
@@ -13,7 +13,6 @@
ag1: "{{random_name}}{{suffix_name}}1"
ag2: "{{random_name}}{{suffix_name}}2"
-
- name: Create address group
ntnx_address_groups:
state: present
@@ -40,7 +39,7 @@
- result.response.ip_address_block_list[1].prefix_length == 32
fail_msg: "Unable to create address group"
- success_msg: "Address group created susccessfully"
+ success_msg: "Address group created successfully"
- set_fact:
todelete: '{{ result["address_group_uuid"] }}'
@@ -97,7 +96,6 @@
###################################################################################################
-
- name: cleanup created entities
ntnx_address_groups:
state: absent
diff --git a/tests/integration/targets/ntnx_address_groups/tasks/delete.yml b/tests/integration/targets/ntnx_address_groups/tasks/delete.yml
index 1c707f087..520ef13ef 100644
--- a/tests/integration/targets/ntnx_address_groups/tasks/delete.yml
+++ b/tests/integration/targets/ntnx_address_groups/tasks/delete.yml
@@ -12,7 +12,6 @@
- set_fact:
ag1: "{{random_name}}{{suffix_name}}1"
-
- name: Create address group
ntnx_address_groups:
state: present
@@ -29,7 +28,7 @@
- test_ag.response is defined
- test_ag.changed == True
fail_msg: "Unable to create address group"
- success_msg: "address group created susccessfully"
+ success_msg: "address group created successfully"
###################################################################################################
diff --git a/tests/integration/targets/ntnx_address_groups/tasks/update.yml b/tests/integration/targets/ntnx_address_groups/tasks/update.yml
index 6107bc286..9bd074121 100644
--- a/tests/integration/targets/ntnx_address_groups/tasks/update.yml
+++ b/tests/integration/targets/ntnx_address_groups/tasks/update.yml
@@ -2,7 +2,6 @@
- debug:
msg: start ntnx_address_groups update tests
-
- name: Generate random project_name
set_fact:
random_name: "{{query('community.general.random_string',numbers=false, special=false,length=12)[0]}}"
@@ -14,7 +13,6 @@
ag1: "{{random_name}}{{suffix_name}}1"
ag2: "{{random_name}}{{suffix_name}}2"
-
##############################################################################################
- name: Create address group with
@@ -35,8 +33,7 @@
- test_ag.response is defined
- test_ag.changed == True
fail_msg: "Unable to create address group"
- success_msg: "Address group created susccessfully"
-
+ success_msg: "Address group created successfully"
###################################################################################################
@@ -64,7 +61,7 @@
- result.response.ip_address_block_list | length == 1
fail_msg: "Unable to update address group"
- success_msg: "Address group updated susccessfully"
+ success_msg: "Address group updated successfully"
###################################################################################################
diff --git a/tests/integration/targets/ntnx_foundation/tasks/image_nodes.yml b/tests/integration/targets/ntnx_foundation/tasks/image_nodes.yml
index 65560e6cc..5c26f500c 100644
--- a/tests/integration/targets/ntnx_foundation/tasks/image_nodes.yml
+++ b/tests/integration/targets/ntnx_foundation/tasks/image_nodes.yml
@@ -1,68 +1,67 @@
---
- - debug:
- msg: start testing ntnx_foundation
+- debug:
+ msg: start testing ntnx_foundation
- - name: Image nodes using manual and discovery modes. Create cluster
- ntnx_foundation:
- timeout: 4500
- nutanix_host: "{{foundation_host}}"
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "0"
- nos_package: "{{nos_package}}"
- blocks:
- - block_id: "{{IBIS_node.block_id}}"
- nodes:
- - manual_mode:
- cvm_ip: "{{IBIS_node.node1.cvm_ip}}"
- cvm_gb_ram: 50
- hypervisor_hostname: "{{IBIS_node.node1.hypervisor_hostname}}"
- ipmi_netmask: "{{IBIS_node.node1.ipmi_netmask}}"
- ipmi_gateway: "{{IBIS_node.node1.ipmi_gateway}}"
- ipmi_ip: "{{IBIS_node.node1.ipmi_ip}}"
- ipmi_password: "{{IBIS_node.node1.ipmi_password}}"
- hypervisor: "{{IBIS_node.node1.hypervisor}}"
- hypervisor_ip: "{{IBIS_node.node1.hypervisor_ip}}"
- node_position: "{{IBIS_node.node1.node_position}}"
- - discovery_mode: #dos mode using cvm
- cvm_gb_ram: 50
- node_serial: "{{IBIS_node.node3.node_serial}}"
- device_hint: "vm_installer"
- discovery_override:
- hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
- hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
- cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
- hypervisor: "{{IBIS_node.node3.hypervisor}}"
- - discovery_mode: # aos node using ipmi
- cvm_gb_ram: 50
- ipmi_password: "{{IBIS_node.node2.ipmi_password}}"
- node_serial: "{{IBIS_node.node2.node_serial}}"
- discovery_override:
- hypervisor_hostname: "IBIS2"
- clusters:
- - redundancy_factor: 2
- cluster_members:
- - "{{IBIS_node.node1.cvm_ip}}"
- - "{{IBIS_node.node3.cvm_ip}}"
- - "{{IBIS_node.node2.cvm_ip}}"
- name: "test-cluster"
- register: first_cluster
- ignore_errors: True
- # when: false # make it true or remove to unskip task
-
- - name: Creation Status
- assert:
- that:
- - first_cluster.response is defined
- - first_cluster.failed==false
- - first_cluster.changed==true
- - first_cluster.response.cluster_urls is defined
- - first_cluster.response.cluster_urls.0.name=="test-cluster"
- fail_msg: " Fail : unable to create cluster with three node"
- success_msg: "Success: cluster with three node created successfully "
- # when: false # make it true or remove to unskip task
+- name: Image nodes using manual and discovery modes. Create cluster
+ ntnx_foundation:
+ timeout: 4500
+ nutanix_host: "{{foundation_host}}"
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "0"
+ nos_package: "{{nos_package}}"
+ blocks:
+ - block_id: "{{IBIS_node.block_id}}"
+ nodes:
+ - manual_mode:
+ cvm_ip: "{{IBIS_node.node1.cvm_ip}}"
+ cvm_gb_ram: 50
+ hypervisor_hostname: "{{IBIS_node.node1.hypervisor_hostname}}"
+ ipmi_netmask: "{{IBIS_node.node1.ipmi_netmask}}"
+ ipmi_gateway: "{{IBIS_node.node1.ipmi_gateway}}"
+ ipmi_ip: "{{IBIS_node.node1.ipmi_ip}}"
+ ipmi_password: "{{IBIS_node.node1.ipmi_password}}"
+ hypervisor: "{{IBIS_node.node1.hypervisor}}"
+ hypervisor_ip: "{{IBIS_node.node1.hypervisor_ip}}"
+ node_position: "{{IBIS_node.node1.node_position}}"
+ - discovery_mode: #dos mode using cvm
+ cvm_gb_ram: 50
+ node_serial: "{{IBIS_node.node3.node_serial}}"
+ device_hint: "vm_installer"
+ discovery_override:
+ hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
+ hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
+ cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
+ hypervisor: "{{IBIS_node.node3.hypervisor}}"
+ - discovery_mode: # aos node using ipmi
+ cvm_gb_ram: 50
+ ipmi_password: "{{IBIS_node.node2.ipmi_password}}"
+ node_serial: "{{IBIS_node.node2.node_serial}}"
+ discovery_override:
+ hypervisor_hostname: "IBIS2"
+ clusters:
+ - redundancy_factor: 2
+ cluster_members:
+ - "{{IBIS_node.node1.cvm_ip}}"
+ - "{{IBIS_node.node3.cvm_ip}}"
+ - "{{IBIS_node.node2.cvm_ip}}"
+ name: "test-cluster"
+ register: first_cluster
+ ignore_errors: True
+ # when: false # make it true or remove to resume task
+- name: Creation Status
+ assert:
+ that:
+ - first_cluster.response is defined
+ - first_cluster.failed==false
+ - first_cluster.changed==true
+ - first_cluster.response.cluster_urls is defined
+ - first_cluster.response.cluster_urls.0.name=="test-cluster"
+ fail_msg: " Fail : unable to create cluster with three nodes"
+ success_msg: "Success: cluster with three nodes created successfully "
+ # when: false # make it true or remove to resume task
######################################################
diff --git a/tests/integration/targets/ntnx_foundation/tasks/negative_scenarios.yml b/tests/integration/targets/ntnx_foundation/tasks/negative_scenarios.yml
index 08555d845..087ed1333 100644
--- a/tests/integration/targets/ntnx_foundation/tasks/negative_scenarios.yml
+++ b/tests/integration/targets/ntnx_foundation/tasks/negative_scenarios.yml
@@ -1,119 +1,119 @@
- - name: Image nodes with check mode
- check_mode: yes
- ntnx_foundation:
- timeout: 3660
- nutanix_host: "{{foundation_host}}"
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "0"
- nos_package: "{{nos_package}}"
- blocks:
- - block_id: "{{IBIS_node.block_id}}"
- nodes:
- - manual_mode:
- cvm_gb_ram: 50
- cvm_ip: "{{IBIS_node.node1.cvm_ip}}"
- hypervisor_hostname: "{{IBIS_node.node1.hypervisor_hostname}}"
- ipmi_ip: "{{IBIS_node.node1.ipmi_ip}}"
- ipmi_password: "{{IBIS_node.node1.ipmi_password}}"
- hypervisor: "{{IBIS_node.node1.hypervisor}}"
- hypervisor_ip: "{{IBIS_node.node1.hypervisor_ip}}"
- node_position: "{{IBIS_node.node1.node_position}}"
- clusters:
- - redundancy_factor: 2
- cluster_members:
- - "{{IBIS_node.node1.cvm_ip}}"
- - "{{IBIS_node.node3.cvm_ip}}"
- - "{{IBIS_node.node2.cvm_ip}}"
- name: "test-cluster"
- register: result
+- name: Image nodes with check mode
+ check_mode: yes
+ ntnx_foundation:
+ timeout: 3660
+ nutanix_host: "{{foundation_host}}"
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "0"
+ nos_package: "{{nos_package}}"
+ blocks:
+ - block_id: "{{IBIS_node.block_id}}"
+ nodes:
+ - manual_mode:
+ cvm_gb_ram: 50
+ cvm_ip: "{{IBIS_node.node1.cvm_ip}}"
+ hypervisor_hostname: "{{IBIS_node.node1.hypervisor_hostname}}"
+ ipmi_ip: "{{IBIS_node.node1.ipmi_ip}}"
+ ipmi_password: "{{IBIS_node.node1.ipmi_password}}"
+ hypervisor: "{{IBIS_node.node1.hypervisor}}"
+ hypervisor_ip: "{{IBIS_node.node1.hypervisor_ip}}"
+ node_position: "{{IBIS_node.node1.node_position}}"
+ clusters:
+ - redundancy_factor: 2
+ cluster_members:
+ - "{{IBIS_node.node1.cvm_ip}}"
+ - "{{IBIS_node.node3.cvm_ip}}"
+ - "{{IBIS_node.node2.cvm_ip}}"
+ name: "test-cluster"
+ register: result
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed==false
- - result.changed==false
- - result.response.blocks.0.nodes.0.cvm_ip=="{{IBIS_node.node1.cvm_ip}}"
- - result.response.blocks.0.nodes.0.hypervisor_hostname=="{{IBIS_node.node1.hypervisor_hostname}}"
- - result.response.blocks.0.nodes.0.ipmi_ip=="{{IBIS_node.node1.ipmi_ip}}"
- - result.response.blocks.0.nodes.0.hypervisor=="{{IBIS_node.node1.hypervisor}}"
- - result.response.blocks.0.nodes.0.node_position=="{{IBIS_node.node1.node_position}}"
- - result.response.clusters.0.cluster_name=="test-cluster"
- fail_msg: " Fail : check_mode fail"
- success_msg: "Success: returned response as expected"
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed==false
+ - result.changed==false
+ - result.response.blocks.0.nodes.0.cvm_ip=="{{IBIS_node.node1.cvm_ip}}"
+ - result.response.blocks.0.nodes.0.hypervisor_hostname=="{{IBIS_node.node1.hypervisor_hostname}}"
+ - result.response.blocks.0.nodes.0.ipmi_ip=="{{IBIS_node.node1.ipmi_ip}}"
+ - result.response.blocks.0.nodes.0.hypervisor=="{{IBIS_node.node1.hypervisor}}"
+ - result.response.blocks.0.nodes.0.node_position=="{{IBIS_node.node1.node_position}}"
+ - result.response.clusters.0.cluster_name=="test-cluster"
+    fail_msg: "Fail: check_mode did not return the expected response"
+ success_msg: "Success: returned response as expected"
###################################
- - debug:
- msg: start negative_scenarios for ntnx_foundation
+- debug:
+ msg: start negative_scenarios for ntnx_foundation
###################################
- - name: Image nodes with wrong serial
- ntnx_foundation:
- timeout: 3660
- nutanix_host: "{{foundation_host}}"
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "0"
- nos_package: "{{nos_package}}"
- blocks:
- - block_id: "{{IBIS_node.block_id}}"
- nodes:
- - discovery_mode:
- cvm_gb_ram: 50
- node_serial: wrong_serial
- device_hint: "vm_installer"
- discovery_override:
- hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
- hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
- cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
- hypervisor: "{{IBIS_node.node3.hypervisor}}"
- register: result
- ignore_errors: True
+- name: Image nodes with wrong serial
+ ntnx_foundation:
+ timeout: 3660
+ nutanix_host: "{{foundation_host}}"
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "0"
+ nos_package: "{{nos_package}}"
+ blocks:
+ - block_id: "{{IBIS_node.block_id}}"
+ nodes:
+ - discovery_mode:
+ cvm_gb_ram: 50
+ node_serial: wrong_serial
+ device_hint: "vm_installer"
+ discovery_override:
+ hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
+ hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
+ cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
+ hypervisor: "{{IBIS_node.node3.hypervisor}}"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.msg == "Failed generating Image Nodes Spec"
- - result.changed==false
- - result.failed==true
- fail_msg: " Fail : image node with wrong serial done successfully "
- success_msg: "Success: unable to image node with wrong serial "
+- name: Creation Status
+ assert:
+ that:
+ - result.msg == "Failed generating Image Nodes Spec"
+ - result.changed==false
+ - result.failed==true
+    fail_msg: "Fail: imaging node with wrong serial succeeded unexpectedly"
+    success_msg: "Success: unable to image node with wrong serial"
###################################
- - name: Image nodes with wrong hypervisor
- ntnx_foundation:
- timeout: 3660
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "0"
- nos_package: "{{nos_package}}"
- blocks:
- - block_id: "{{IBIS_node.block_id}}"
- nodes:
- - discovery_mode:
- cvm_gb_ram: 50
- node_serial: wrong_serial
- device_hint: "vm_installer"
- discovery_override:
- hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
- cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
- hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
- hypervisor: "phoenix"
- register: result
- ignore_errors: True
+- name: Image nodes with wrong hypervisor
+ ntnx_foundation:
+ timeout: 3660
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "0"
+ nos_package: "{{nos_package}}"
+ blocks:
+ - block_id: "{{IBIS_node.block_id}}"
+ nodes:
+ - discovery_mode:
+ cvm_gb_ram: 50
+ node_serial: wrong_serial
+ device_hint: "vm_installer"
+ discovery_override:
+ hypervisor_ip: "{{IBIS_node.node3.hypervisor_ip}}"
+ cvm_ip: "{{IBIS_node.node3.cvm_ip}}"
+ hypervisor_hostname: "{{IBIS_node.node3.hypervisor_hostname}}"
+ hypervisor: "phoenix"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.changed==false
- - result.failed==true
- - "result.msg=='value of hypervisor must be one of: kvm, hyperv, xen, esx, ahv, got: phoenix found in blocks -> nodes -> discovery_mode -> discovery_override'"
- fail_msg: " Fail : Image nodes with wrong hypervisor done successfully "
- success_msg: "Success: unable to image node with wrong hypervisor"
+- name: Creation Status
+ assert:
+ that:
+ - result.changed==false
+ - result.failed==true
+ - "result.msg=='value of hypervisor must be one of: kvm, hyperv, xen, esx, ahv, got: phoenix found in blocks -> nodes -> discovery_mode -> discovery_override'"
+    fail_msg: "Fail: imaging nodes with wrong hypervisor succeeded unexpectedly"
+    success_msg: "Success: unable to image nodes with wrong hypervisor"
diff --git a/tests/integration/targets/ntnx_foundation_central/tasks/image_nodes.yml b/tests/integration/targets/ntnx_foundation_central/tasks/image_nodes.yml
index 461982fcd..186d0bf2f 100644
--- a/tests/integration/targets/ntnx_foundation_central/tasks/image_nodes.yml
+++ b/tests/integration/targets/ntnx_foundation_central/tasks/image_nodes.yml
@@ -42,7 +42,7 @@
hypervisor_hostname: "{{node3.discovery_override.hypervisor_hostname}}"
register: result
ignore_errors: true
- # when: false # make it true or remove to unskip task
+ # when: false # set to true or remove this line to run the task
- name: Creation Status
assert:
@@ -52,4 +52,4 @@
- result.changed==true
fail_msg: "fail: Unable to image nodes or create cluster "
success_msg: "success: Imaging and cluster created successfully "
- # when: false # make it true or remove to unskip task
+ # when: false # set to true or remove this line to run the task
diff --git a/tests/integration/targets/ntnx_foundation_central_api_keys/tasks/create_key.yml b/tests/integration/targets/ntnx_foundation_central_api_keys/tasks/create_key.yml
index 08325b1b4..95e67fd4f 100644
--- a/tests/integration/targets/ntnx_foundation_central_api_keys/tasks/create_key.yml
+++ b/tests/integration/targets/ntnx_foundation_central_api_keys/tasks/create_key.yml
@@ -3,7 +3,7 @@
- name: create api key with check_mode
ntnx_foundation_central_api_keys:
- alias: test
+ alias: test
check_mode: true
register: result
ignore_errors: true
@@ -22,10 +22,9 @@
set_fact:
random_alias: "{{query('community.general.random_string',numbers=false, special=false,length=12)}}"
-
- name: create api key with random alias
ntnx_foundation_central_api_keys:
- alias: "{{random_alias.0}}"
+ alias: "{{random_alias.0}}"
register: result
ignore_errors: true
@@ -40,7 +39,7 @@
success_msg: "success: api key created successfully "
- ntnx_foundation_central_api_keys:
- alias: "{{random_alias.0}}"
+ alias: "{{random_alias.0}}"
register: result
ignore_errors: true
diff --git a/tests/integration/targets/ntnx_foundation_central_api_keys_info/tasks/key_info.yml b/tests/integration/targets/ntnx_foundation_central_api_keys_info/tasks/key_info.yml
index 7e369b81b..55a1b4870 100644
--- a/tests/integration/targets/ntnx_foundation_central_api_keys_info/tasks/key_info.yml
+++ b/tests/integration/targets/ntnx_foundation_central_api_keys_info/tasks/key_info.yml
@@ -8,7 +8,7 @@
- name: create api key with random alias
ntnx_foundation_central_api_keys:
- alias: "{{random_alias.0}}"
+ alias: "{{random_alias.0}}"
register: key
ignore_errors: true
@@ -53,9 +53,9 @@
- name: get api key with custom filter
ntnx_foundation_central_api_keys_info:
- custom_filter:
- created_timestamp: "{{key.response.created_timestamp}}"
- alias: "{{key.response.alias}}"
+ custom_filter:
+ created_timestamp: "{{key.response.created_timestamp}}"
+ alias: "{{key.response.alias}}"
register: result
ignore_errors: true
diff --git a/tests/integration/targets/ntnx_foundation_central_imaged_clusters_info/tasks/get_cluster_info.yml b/tests/integration/targets/ntnx_foundation_central_imaged_clusters_info/tasks/get_cluster_info.yml
index 294de071d..4388db092 100644
--- a/tests/integration/targets/ntnx_foundation_central_imaged_clusters_info/tasks/get_cluster_info.yml
+++ b/tests/integration/targets/ntnx_foundation_central_imaged_clusters_info/tasks/get_cluster_info.yml
@@ -1,7 +1,6 @@
- debug:
msg: start testing ntnx_foundation_central_imaged_clusters_info module
-
- name: get imaged cluster using image_cluster_uuid
ntnx_foundation_central_imaged_clusters_info:
filters:
@@ -18,7 +17,6 @@
fail_msg: "fail: unable to get all imaged,archived cluster "
success_msg: "success: get all imaged,archived cluster successfully "
-
- name: get imaged cluster using image_cluster_uuid
ntnx_foundation_central_imaged_clusters_info:
imaged_cluster_uuid: "{{clusters.response.imaged_clusters.0.imaged_cluster_uuid}}"
@@ -50,8 +48,6 @@
- result.response.imaged_clusters is defined
fail_msg: "fail: unable to get imaged cluster using custom filter "
success_msg: "success: get imaged cluster using custom filter successfully"
-
-
# still offset and length
# - debug:
# var: clusters.response
diff --git a/tests/integration/targets/ntnx_foundation_central_imaged_nodes_info/tasks/get_node_info.yml b/tests/integration/targets/ntnx_foundation_central_imaged_nodes_info/tasks/get_node_info.yml
index 4e30b3294..43a06ae13 100644
--- a/tests/integration/targets/ntnx_foundation_central_imaged_nodes_info/tasks/get_node_info.yml
+++ b/tests/integration/targets/ntnx_foundation_central_imaged_nodes_info/tasks/get_node_info.yml
@@ -1,7 +1,7 @@
- debug:
msg: start testing ntnx_foundation_central_imaged_nodes_info module
-- name: get all imaged nodes
+- name: get all imaged nodes
ntnx_foundation_central_imaged_nodes_info:
register: nodes
ignore_errors: true
@@ -49,5 +49,4 @@
- result.response.metadata.length <=1
fail_msg: "fail: unable to get imaged node using custom filter "
success_msg: "success: get imaged node using custom filter successfully"
-
# still offset and length and filter
diff --git a/tests/integration/targets/ntnx_foundation_discover_nodes_info/tasks/discover_nodes.yml b/tests/integration/targets/ntnx_foundation_discover_nodes_info/tasks/discover_nodes.yml
index aa1ffd92e..943538c9b 100644
--- a/tests/integration/targets/ntnx_foundation_discover_nodes_info/tasks/discover_nodes.yml
+++ b/tests/integration/targets/ntnx_foundation_discover_nodes_info/tasks/discover_nodes.yml
@@ -31,7 +31,6 @@
- result.blocks.0.nodes.0.ipv6_address is defined
fail_msg: " Fail : unable to discover all nodes "
success_msg: "Success: Discover all nodes finished successfully "
-
# - name: Discover nodes and include network info # api fail
# ntnx_foundation_discover_nodes_info:
# include_network_details: true
diff --git a/tests/integration/targets/ntnx_foundation_image_upload/tasks/negative_scenarios.yml b/tests/integration/targets/ntnx_foundation_image_upload/tasks/negative_scenarios.yml
index d3957ac6b..27407b7f7 100644
--- a/tests/integration/targets/ntnx_foundation_image_upload/tasks/negative_scenarios.yml
+++ b/tests/integration/targets/ntnx_foundation_image_upload/tasks/negative_scenarios.yml
@@ -3,7 +3,7 @@
state: present
source: "{{ source }}"
filename: "integration-test-ntnx-package.tar.gz"
- installer_type: wrong installler type
+ installer_type: wrong installer type
timeout: 3600
register: result
ignore_errors: true
@@ -13,6 +13,6 @@
that:
- result.failed==true
- result.changed==false
- - "result.msg == 'value of installer_type must be one of: kvm, esx, hyperv, xen, nos, got: wrong installler type'"
+ - "result.msg == 'value of installer_type must be one of: kvm, esx, hyperv, xen, nos, got: wrong installer type'"
fail_msg: " Fail : image uploaded with wrong installer type"
success_msg: "Success: returned error as expected "
diff --git a/tests/integration/targets/ntnx_foundation_sanity/tasks/image_nodes.yml b/tests/integration/targets/ntnx_foundation_sanity/tasks/image_nodes.yml
index 8ed9ee396..bc85994cb 100644
--- a/tests/integration/targets/ntnx_foundation_sanity/tasks/image_nodes.yml
+++ b/tests/integration/targets/ntnx_foundation_sanity/tasks/image_nodes.yml
@@ -1,214 +1,214 @@
---
- - debug:
- msg: start testing ntnx_foundation test for bare metal imaging and cluster creation
+- debug:
+  msg: start ntnx_foundation tests for bare metal imaging and cluster creation
+- name: get aos_packages_info from foundation
+ ntnx_foundation_aos_packages_info:
+ register: images
- - name: get aos_packages_info from foundation
- ntnx_foundation_aos_packages_info:
- register: images
+- name: Create spec for imaging and creating cluster out of bare metal nodes
+ check_mode: yes
+ ntnx_foundation:
+ timeout: 4500
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
+ nos_package: "{{images.aos_packages[0]}}"
+ blocks:
+ - block_id: "{{nodes.block_id}}"
+ nodes:
+ - manual_mode:
+ cvm_ip: "{{nodes.node1.cvm_ip}}"
+ cvm_gb_ram: 50
+ hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
+ ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
+ ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
+ ipmi_ip: "{{nodes.node1.ipmi_ip}}"
+ ipmi_password: "{{nodes.node1.ipmi_password}}"
+ hypervisor: "{{nodes.node1.hypervisor}}"
+ hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
+ node_position: "{{nodes.node1.node_position}}"
+ clusters:
+ - redundancy_factor: 1
+ cluster_members:
+ - "{{nodes.node1.cvm_ip}}"
+ name: "test-cluster"
+ timezone: "Asia/Calcutta"
+ cvm_ntp_servers:
+ - "{{nodes.ntp_servers[0]}}"
+ - "{{nodes.ntp_servers[1]}}"
+ cvm_dns_servers:
+ - "{{nodes.dns_servers[0]}}"
+ - "{{nodes.dns_servers[1]}}"
+ hypervisor_ntp_servers:
+ - "{{nodes.ntp_servers[0]}}"
+ - "{{nodes.ntp_servers[1]}}"
+ enable_ns: true
+ backplane_vlan: "{{nodes.backplane_vlan}}"
+ backplane_subnet: "{{nodes.backplane_subnet}}"
+ backplane_netmask: "{{nodes.backplane_netmask}}"
+ register: spec
+ ignore_errors: True
- - name: Create spec for imaging and creating cluster out of bare metal nodes
- check_mode: yes
- ntnx_foundation:
- timeout: 4500
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
- nos_package: "{{images.aos_packages[0]}}"
- blocks:
- - block_id: "{{nodes.block_id}}"
- nodes:
- - manual_mode:
- cvm_ip: "{{nodes.node1.cvm_ip}}"
- cvm_gb_ram: 50
- hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
- ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
- ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
- ipmi_ip: "{{nodes.node1.ipmi_ip}}"
- ipmi_password: "{{nodes.node1.ipmi_password}}"
- hypervisor: "{{nodes.node1.hypervisor}}"
- hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
- node_position: "{{nodes.node1.node_position}}"
- clusters:
- - redundancy_factor: 1
- cluster_members:
- - "{{nodes.node1.cvm_ip}}"
- name: "test-cluster"
- timezone: "Asia/Calcutta"
- cvm_ntp_servers:
- - "{{nodes.ntp_servers[0]}}"
- - "{{nodes.ntp_servers[1]}}"
- cvm_dns_servers:
- - "{{nodes.dns_servers[0]}}"
- - "{{nodes.dns_servers[1]}}"
- hypervisor_ntp_servers:
- - "{{nodes.ntp_servers[0]}}"
- - "{{nodes.ntp_servers[1]}}"
- enable_ns: true
- backplane_vlan: "{{nodes.backplane_vlan}}"
- backplane_subnet: "{{nodes.backplane_subnet}}"
- backplane_netmask: "{{nodes.backplane_netmask}}"
- register: spec
- ignore_errors: True
+- set_fact:
+ expected_spec:
+ {
+ "blocks":
+ [
+ {
+ "block_id": "{{nodes.block_id}}",
+ "nodes":
+ [
+ {
+ "cvm_gb_ram": 50,
+ "cvm_ip": "{{nodes.node1.cvm_ip}}",
+ "hypervisor": "{{nodes.node1.hypervisor}}",
+ "hypervisor_hostname": "{{nodes.node1.hypervisor_hostname}}",
+ "hypervisor_ip": "{{nodes.node1.hypervisor_ip}}",
+ "image_now": true,
+ "ipmi_gateway": "{{nodes.node1.ipmi_gateway}}",
+ "ipmi_ip": "{{nodes.node1.ipmi_ip}}",
+ "ipmi_netmask": "{{nodes.node1.ipmi_netmask}}",
+ "ipmi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ "node_position": "{{nodes.node1.node_position}}",
+ },
+ ],
+ },
+ ],
+ "clusters":
+ [
+ {
+ "backplane_netmask": "{{nodes.backplane_netmask}}",
+ "backplane_subnet": "{{nodes.backplane_subnet}}",
+ "backplane_vlan": "{{nodes.backplane_vlan}}",
+ "cluster_external_ip": null,
+ "cluster_init_now": true,
+ "cluster_members": ["{{nodes.node1.cvm_ip}}"],
+ "cluster_name": "test-cluster",
+ "cvm_dns_servers": "{{nodes.dns_servers[0]}},{{nodes.dns_servers[1]}}",
+ "cvm_ntp_servers": "{{nodes.ntp_servers[0]}},{{nodes.ntp_servers[1]}}",
+ "enable_ns": true,
+ "hypervisor_ntp_servers": "{{nodes.ntp_servers[0]}},{{nodes.ntp_servers[1]}}",
+ "redundancy_factor": 1,
+ "single_node_cluster": true,
+ "timezone": "Asia/Calcutta",
+ },
+ ],
+ "current_cvm_vlan_tag": "{{nodes.current_cvm_vlan_tag}}",
+ "cvm_gateway": "{{cvm_gateway}}",
+ "cvm_netmask": "{{cvm_netmask}}",
+ "hypervisor_gateway": "{{hypervisor_gateway}}",
+ "hypervisor_iso": {},
+ "hypervisor_netmask": "{{hypervisor_netmask}}",
+ "ipmi_user": "{{default_ipmi_user}}",
+ "nos_package": "{{images.aos_packages[0]}}",
+ }
- - set_fact:
- expected_spec: {
- "blocks": [
- {
- "block_id": "{{nodes.block_id}}",
- "nodes": [
- {
- "cvm_gb_ram": 50,
- "cvm_ip": "{{nodes.node1.cvm_ip}}",
- "hypervisor": "{{nodes.node1.hypervisor}}",
- "hypervisor_hostname": "{{nodes.node1.hypervisor_hostname}}",
- "hypervisor_ip": "{{nodes.node1.hypervisor_ip}}",
- "image_now": true,
- "ipmi_gateway": "{{nodes.node1.ipmi_gateway}}",
- "ipmi_ip": "{{nodes.node1.ipmi_ip}}",
- "ipmi_netmask": "{{nodes.node1.ipmi_netmask}}",
- "ipmi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
- "node_position": "{{nodes.node1.node_position}}"
- }
- ]
- }
- ],
- "clusters": [
- {
- "backplane_netmask": "{{nodes.backplane_netmask}}",
- "backplane_subnet": "{{nodes.backplane_subnet}}",
- "backplane_vlan": "{{nodes.backplane_vlan}}",
- "cluster_external_ip": null,
- "cluster_init_now": true,
- "cluster_members": [
- "{{nodes.node1.cvm_ip}}"
- ],
- "cluster_name": "test-cluster",
- "cvm_dns_servers": "{{nodes.dns_servers[0]}},{{nodes.dns_servers[1]}}",
- "cvm_ntp_servers": "{{nodes.ntp_servers[0]}},{{nodes.ntp_servers[1]}}",
- "enable_ns": true,
- "hypervisor_ntp_servers": "{{nodes.ntp_servers[0]}},{{nodes.ntp_servers[1]}}",
- "redundancy_factor": 1,
- "single_node_cluster": true,
- "timezone": "Asia/Calcutta"
- }
- ],
- "current_cvm_vlan_tag": "{{nodes.current_cvm_vlan_tag}}",
- "cvm_gateway": "{{cvm_gateway}}",
- "cvm_netmask": "{{cvm_netmask}}",
- "hypervisor_gateway": "{{hypervisor_gateway}}",
- "hypervisor_iso": {},
- "hypervisor_netmask": "{{hypervisor_netmask}}",
- "ipmi_user": "{{default_ipmi_user}}",
- "nos_package": "{{images.aos_packages[0]}}"
- }
+- name: Verify spec
+ assert:
+ that:
+ - spec.response is defined
+ - spec.failed==false
+ - spec.changed==false
+ - spec.response == expected_spec
+    fail_msg: "Fail: unable to create spec for imaging nodes"
+ success_msg: "Success: spec generated successfully"
- - name: Verify spec
- assert:
- that:
- - spec.response is defined
- - spec.failed==false
- - spec.changed==false
- - spec.response == expected_spec
- fail_msg: " Fail : unable to create spec for imaging nodes"
- success_msg: "Success: spec generated successfully"
+- name: Image nodes without cluster creation
+ ntnx_foundation:
+ timeout: 4500
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
+ nos_package: "{{images.aos_packages[0]}}"
+ blocks:
+ - block_id: "{{nodes.block_id}}"
+ nodes:
+ - manual_mode:
+ cvm_ip: "{{nodes.node1.cvm_ip}}"
+ cvm_gb_ram: 50
+ hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
+ ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
+ ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
+ ipmi_ip: "{{nodes.node1.ipmi_ip}}"
+ ipmi_password: "{{nodes.node1.ipmi_password}}"
+ hypervisor: "{{nodes.node1.hypervisor}}"
+ hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
+ node_position: "{{nodes.node1.node_position}}"
+ bond_lacp_rate: "{{nodes.node1.bond_lacp_rate}}"
+ bond_mode: "{{nodes.node1.bond_mode}}"
- - name: Image nodes without cluster creation
- ntnx_foundation:
- timeout: 4500
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
- nos_package: "{{images.aos_packages[0]}}"
- blocks:
- - block_id: "{{nodes.block_id}}"
- nodes:
- - manual_mode:
- cvm_ip: "{{nodes.node1.cvm_ip}}"
- cvm_gb_ram: 50
- hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
- ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
- ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
- ipmi_ip: "{{nodes.node1.ipmi_ip}}"
- ipmi_password: "{{nodes.node1.ipmi_password}}"
- hypervisor: "{{nodes.node1.hypervisor}}"
- hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
- node_position: "{{nodes.node1.node_position}}"
- bond_lacp_rate: "{{nodes.node1.bond_lacp_rate}}"
- bond_mode: "{{nodes.node1.bond_mode}}"
+ register: result
+ no_log: true
+ ignore_errors: True
- register: result
- no_log: true
- ignore_errors: True
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed==false
+ - result.changed==true
+    fail_msg: "Fail: unable to image nodes"
+ success_msg: "Success: node imaging done successfully"
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed==false
- - result.changed==true
- fail_msg: " Fail : unable to image nodes"
- success_msg: "Success: node imaging done successfully"
-
- - name: Image nodes and create cluster out of it
- ntnx_foundation:
- timeout: 4500
- cvm_gateway: "{{cvm_gateway}}"
- cvm_netmask: "{{cvm_netmask}}"
- hypervisor_gateway: "{{hypervisor_gateway}}"
- hypervisor_netmask: "{{hypervisor_netmask}}"
- default_ipmi_user: "{{default_ipmi_user}}"
- current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
- nos_package: "{{images.aos_packages[0]}}"
- blocks:
- - block_id: "{{nodes.block_id}}"
- nodes:
- - manual_mode:
- cvm_ip: "{{nodes.node1.cvm_ip}}"
- cvm_gb_ram: 50
- hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
- ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
- ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
- ipmi_ip: "{{nodes.node1.ipmi_ip}}"
- ipmi_password: "{{nodes.node1.ipmi_password}}"
- hypervisor: "{{nodes.node1.hypervisor}}"
- hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
- node_position: "{{nodes.node1.node_position}}"
- bond_lacp_rate: "{{nodes.node1.bond_lacp_rate}}"
- bond_mode: "{{nodes.node1.bond_mode}}"
- clusters:
- - redundancy_factor: 1
- cluster_members:
- - "{{nodes.node1.cvm_ip}}"
- name: "test-cluster"
- timezone: "Asia/Calcutta"
- cvm_ntp_servers:
- - "{{nodes.ntp_servers[0]}}"
- - "{{nodes.ntp_servers[1]}}"
- cvm_dns_servers:
- - "{{nodes.dns_servers[0]}}"
- - "{{nodes.dns_servers[1]}}"
- hypervisor_ntp_servers:
- - "{{nodes.ntp_servers[0]}}"
- - "{{nodes.ntp_servers[1]}}"
- register: result
- no_log: true
- ignore_errors: True
-
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed==false
- - result.changed==true
- - result.response.cluster_urls is defined
- fail_msg: " Fail : unable to image nodes and create cluster"
- success_msg: "Success: cluster and node imaging done successfully"
+- name: Image nodes and create cluster out of it
+ ntnx_foundation:
+ timeout: 4500
+ cvm_gateway: "{{cvm_gateway}}"
+ cvm_netmask: "{{cvm_netmask}}"
+ hypervisor_gateway: "{{hypervisor_gateway}}"
+ hypervisor_netmask: "{{hypervisor_netmask}}"
+ default_ipmi_user: "{{default_ipmi_user}}"
+ current_cvm_vlan_tag: "{{nodes.current_cvm_vlan_tag}}"
+ nos_package: "{{images.aos_packages[0]}}"
+ blocks:
+ - block_id: "{{nodes.block_id}}"
+ nodes:
+ - manual_mode:
+ cvm_ip: "{{nodes.node1.cvm_ip}}"
+ cvm_gb_ram: 50
+ hypervisor_hostname: "{{nodes.node1.hypervisor_hostname}}"
+ ipmi_netmask: "{{nodes.node1.ipmi_netmask}}"
+ ipmi_gateway: "{{nodes.node1.ipmi_gateway}}"
+ ipmi_ip: "{{nodes.node1.ipmi_ip}}"
+ ipmi_password: "{{nodes.node1.ipmi_password}}"
+ hypervisor: "{{nodes.node1.hypervisor}}"
+ hypervisor_ip: "{{nodes.node1.hypervisor_ip}}"
+ node_position: "{{nodes.node1.node_position}}"
+ bond_lacp_rate: "{{nodes.node1.bond_lacp_rate}}"
+ bond_mode: "{{nodes.node1.bond_mode}}"
+ clusters:
+ - redundancy_factor: 1
+ cluster_members:
+ - "{{nodes.node1.cvm_ip}}"
+ name: "test-cluster"
+ timezone: "Asia/Calcutta"
+ cvm_ntp_servers:
+ - "{{nodes.ntp_servers[0]}}"
+ - "{{nodes.ntp_servers[1]}}"
+ cvm_dns_servers:
+ - "{{nodes.dns_servers[0]}}"
+ - "{{nodes.dns_servers[1]}}"
+ hypervisor_ntp_servers:
+ - "{{nodes.ntp_servers[0]}}"
+ - "{{nodes.ntp_servers[1]}}"
+ register: result
+ no_log: true
+ ignore_errors: True
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed==false
+ - result.changed==true
+ - result.response.cluster_urls is defined
+    fail_msg: "Fail: unable to image nodes and create cluster"
+ success_msg: "Success: cluster and node imaging done successfully"
######################################################
diff --git a/tests/integration/targets/ntnx_image_placement_policy/tasks/update.yml b/tests/integration/targets/ntnx_image_placement_policy/tasks/update.yml
index 3f0087324..b33fd6a04 100644
--- a/tests/integration/targets/ntnx_image_placement_policy/tasks/update.yml
+++ b/tests/integration/targets/ntnx_image_placement_policy/tasks/update.yml
@@ -33,7 +33,7 @@
#############################################################################################
-- name: test idempotency by definig same spec as before
+- name: test idempotency by defining same spec as before
ntnx_image_placement_policy:
state: present
policy_uuid: "{{ setup_policy.response.metadata.uuid }}"
diff --git a/tests/integration/targets/ntnx_karbon_clusters_and_info/tasks/crud.yml b/tests/integration/targets/ntnx_karbon_clusters_and_info/tasks/crud.yml
index 5afafa993..c795d674a 100644
--- a/tests/integration/targets/ntnx_karbon_clusters_and_info/tasks/crud.yml
+++ b/tests/integration/targets/ntnx_karbon_clusters_and_info/tasks/crud.yml
@@ -332,7 +332,7 @@
- result.changed == false
- result.failed == false
- result.msg == "Nothing to change."
- fail_msg: "Fail: idempotecy check fail "
+ fail_msg: "Fail: idempotency check fail "
success_msg: "Passed: Returned as expected "
#################################
- name: try to update node pool config with wrong labels
diff --git a/tests/integration/targets/ntnx_karbon_registries/tasks/create.yml b/tests/integration/targets/ntnx_karbon_registries/tasks/create.yml
index d0d3afc6e..fa0fa595e 100644
--- a/tests/integration/targets/ntnx_karbon_registries/tasks/create.yml
+++ b/tests/integration/targets/ntnx_karbon_registries/tasks/create.yml
@@ -1,5 +1,4 @@
---
-
- debug:
msg: "start ntnx_karbon_registries tests"
@@ -10,7 +9,6 @@
- set_fact:
registry_name: "{{random_name[0]}}"
-
- name: create registry with check_mode
ntnx_karbon_registries:
name: "{{registry_name}}"
@@ -27,8 +25,8 @@
- result.changed == false
- result.response.name == "{{registry_name}}"
- result.response.url == "{{url}}"
- success_msg: ' Success: returned response as expected '
- fail_msg: ' Fail: create registry with check_mode '
+ success_msg: " Success: returned response as expected "
+ fail_msg: " Fail: create registry with check_mode "
################################################################
- name: create registry
ntnx_karbon_registries:
diff --git a/tests/integration/targets/ntnx_karbon_registries/tasks/negativ_scenarios.yml b/tests/integration/targets/ntnx_karbon_registries/tasks/negativ_scenarios.yml
index 705149710..cb1a4ae1a 100644
--- a/tests/integration/targets/ntnx_karbon_registries/tasks/negativ_scenarios.yml
+++ b/tests/integration/targets/ntnx_karbon_registries/tasks/negativ_scenarios.yml
@@ -4,7 +4,7 @@
- name: create registry with wrong port number
ntnx_karbon_registries:
- name: test_regitry
+ name: test_registry
url: "{{url}}"
port: 501
register: result
diff --git a/tests/integration/targets/ntnx_ndb_clusters/tasks/CRUD.yml b/tests/integration/targets/ntnx_ndb_clusters/tasks/CRUD.yml
index 859075c8d..cba6dfeb2 100644
--- a/tests/integration/targets/ntnx_ndb_clusters/tasks/CRUD.yml
+++ b/tests/integration/targets/ntnx_ndb_clusters/tasks/CRUD.yml
@@ -4,35 +4,34 @@
- name: Register cluster with prism_vlan in check mode
ntnx_ndb_clusters:
- name: "{{cluster.cluster3.name}}"
- desc: "{{cluster.cluster3.desc}}"
- name_prefix: "{{cluster.cluster3.name_prefix}}"
- cluster_ip: "{{cluster.cluster3.cluster_ip}}"
- cluster_credentials:
- username: "{{cluster.cluster3.cluster_credentials.username}}"
- password: "{{cluster.cluster3.cluster_credentials.password}}"
- agent_network:
- dns_servers:
- - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
- - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
- ntp_servers:
- - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
- vlan_access:
- prism_vlan:
- vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
- vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
- static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
- gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
- subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
- storage_container: "{{cluster.cluster3.storage_container}}"
+ name: "{{cluster.cluster3.name}}"
+ desc: "{{cluster.cluster3.desc}}"
+ name_prefix: "{{cluster.cluster3.name_prefix}}"
+ cluster_ip: "{{cluster.cluster3.cluster_ip}}"
+ cluster_credentials:
+ username: "{{cluster.cluster3.cluster_credentials.username}}"
+ password: "{{cluster.cluster3.cluster_credentials.password}}"
+ agent_network:
+ dns_servers:
+ - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
+ ntp_servers:
+ - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
+ vlan_access:
+ prism_vlan:
+ vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
+ vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
+ static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
+ gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
+ subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
+ storage_container: "{{cluster.cluster3.storage_container}}"
register: result
ignore_errors: true
check_mode: true
-
- name: check listing status
assert:
that:
@@ -54,31 +53,31 @@
- name: Register cluster with prism_vlan
ntnx_ndb_clusters:
- wait: true
- name: "{{cluster.cluster3.name}}"
- desc: "{{cluster.cluster3.desc}}"
- name_prefix: "{{cluster.cluster3.name_prefix}}"
- cluster_ip: "{{cluster.cluster3.cluster_ip}}"
- cluster_credentials:
- username: "{{cluster.cluster3.cluster_credentials.username}}"
- password: "{{cluster.cluster3.cluster_credentials.password}}"
- agent_network:
- dns_servers:
- - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
- - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
- ntp_servers:
- - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
- vlan_access:
- prism_vlan:
- vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
- vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
- static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
- gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
- subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
- storage_container: "{{cluster.cluster3.storage_container}}"
+ wait: true
+ name: "{{cluster.cluster3.name}}"
+ desc: "{{cluster.cluster3.desc}}"
+ name_prefix: "{{cluster.cluster3.name_prefix}}"
+ cluster_ip: "{{cluster.cluster3.cluster_ip}}"
+ cluster_credentials:
+ username: "{{cluster.cluster3.cluster_credentials.username}}"
+ password: "{{cluster.cluster3.cluster_credentials.password}}"
+ agent_network:
+ dns_servers:
+ - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
+ ntp_servers:
+ - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
+ vlan_access:
+ prism_vlan:
+ vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
+ vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
+ static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
+ gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
+ subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
+ storage_container: "{{cluster.cluster3.storage_container}}"
register: result
ignore_errors: true
no_log: true
@@ -92,16 +91,16 @@
- result.response.name == "{{cluster.cluster3.name}}"
- result.response.description == "{{cluster.cluster3.desc}}"
- result.response.ipAddresses[0] == "{{cluster.cluster3.cluster_ip}}"
- fail_msg: "fail: Unable to Register cluster with prisim_vlan"
- success_msg: "pass: Register cluster with prisim_vlan finished successfully"
+ fail_msg: "fail: Unable to Register cluster with prism_vlan"
+ success_msg: "pass: Register cluster with prism_vlan finished successfully"
################################################################
- name: update cluster name, desc
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- name: newname
- desc: newdesc
+ uuid: "{{result.cluster_uuid}}"
+ name: newname
+ desc: newdesc
register: result
ignore_errors: true
no_log: true
@@ -115,14 +114,14 @@
        fail_msg: "fail: Unable to update cluster name, desc"
        success_msg: "pass: update cluster name, desc finished successfully"
- set_fact:
- todelete: "{{result.cluster_uuid}}"
+ todelete: "{{result.cluster_uuid}}"
################################################################
-- name: update cluster credeential in check_mode
+- name: update cluster credential in check_mode
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- cluster_credentials:
- username: test
- password: test
+ uuid: "{{result.cluster_uuid}}"
+ cluster_credentials:
+ username: test
+ password: test
register: result
ignore_errors: true
no_log: true
@@ -137,14 +136,14 @@
- result.response.username is defined
- result.response.password is defined
- result.cluster_uuid is defined
- fail_msg: "fail: update cluster credeential while check_mode"
+ fail_msg: "fail: update cluster credential while check_mode"
success_msg: "pass: Returned as expected"
################################################################
- name: Negative Scenarios update storage container
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- storage_container: "{{cluster.cluster3.storage_container}}"
+ uuid: "{{result.cluster_uuid}}"
+ storage_container: "{{cluster.cluster3.storage_container}}"
register: out
ignore_errors: true
no_log: true
@@ -155,21 +154,21 @@
- out.changed == false
- out.failed == true
- out.msg == "parameters are mutually exclusive: uuid|storage_container"
- fail_msg: "Fail: storage_continer updated "
+ fail_msg: "Fail: storage_container updated "
success_msg: " Success: returned error as expected "
################################################################
- name: Negative Scenarios update vlan access
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- vlan_access:
- prism_vlan:
- vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
- vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
- static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
- gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
- subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
+ uuid: "{{result.cluster_uuid}}"
+ vlan_access:
+ prism_vlan:
+ vlan_name: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_name}}"
+ vlan_type: "{{cluster.cluster3.vlan_access.prism_vlan.vlan_type}}"
+ static_ip: "{{cluster.cluster3.vlan_access.prism_vlan.static_ip}}"
+ gateway: "{{cluster.cluster3.vlan_access.prism_vlan.gateway}}"
+ subnet_mask: "{{cluster.cluster3.vlan_access.prism_vlan.subnet_mask}}"
register: out
ignore_errors: true
no_log: true
@@ -187,16 +186,16 @@
- name: Negative Scenarios update agent network
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- agent_network:
- dns_servers:
- - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
- - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
- ntp_servers:
- - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
- - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
+ uuid: "{{result.cluster_uuid}}"
+ agent_network:
+ dns_servers:
+ - "{{cluster.cluster3.agent_network.dns_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.dns_servers[1]}}"
+ ntp_servers:
+ - "{{cluster.cluster3.agent_network.ntp_servers[0]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[1]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[2]}}"
+ - "{{cluster.cluster3.agent_network.ntp_servers[3]}}"
register: out
ignore_errors: true
no_log: true
@@ -214,8 +213,8 @@
- name: Negative Scenarios update agent network
ntnx_ndb_clusters:
- uuid: "{{result.cluster_uuid}}"
- name_prefix: "{{cluster.cluster3.name_prefix}}"
+ uuid: "{{result.cluster_uuid}}"
+ name_prefix: "{{cluster.cluster3.name_prefix}}"
register: out
ignore_errors: true
no_log: true
@@ -321,8 +320,8 @@
- name: delete cluster
ntnx_ndb_clusters:
- uuid: "{{todelete}}"
- state: absent
+ uuid: "{{todelete}}"
+ state: absent
register: result
ignore_errors: true
no_log: true
@@ -337,6 +336,5 @@
      fail_msg: "Unable to delete cluster"
success_msg: "cluster deleted successfully"
-
- set_fact:
- todelete: []
+ todelete: []
diff --git a/tests/integration/targets/ntnx_ndb_database_clones/tasks/clones.yml b/tests/integration/targets/ntnx_ndb_database_clones/tasks/clones.yml
index 882a78bb5..9d5b85193 100644
--- a/tests/integration/targets/ntnx_ndb_database_clones/tasks/clones.yml
+++ b/tests/integration/targets/ntnx_ndb_database_clones/tasks/clones.yml
@@ -16,14 +16,13 @@
- set_fact:
db1_name: "{{random_name[0]}}"
- clone_db1: "{{random_name[0]}}-clone"
+ clone_db1: "{{random_name[0]}}-clone"
vm1_name: "{{random_name[0]}}-vm"
tm1: "{{random_name[0]}}-time-machine"
snapshot_name: "{{random_name[0]}}-snapshot"
############################################ setup db and its snapshot for clone tests ###########################################
-
- name: create single instance postgres database on new db server vm
ntnx_ndb_databases:
wait: true
@@ -92,7 +91,7 @@
- name: create manual snapshot of database
ntnx_ndb_database_snapshots:
- time_machine_uuid: "{{time_machine_uuid}}"
+ time_machine_uuid: "{{time_machine_uuid}}"
name: "{{snapshot_name}}"
register: result
@@ -112,7 +111,6 @@
############################################ create clone on new db server vm tests ###########################################
-
- name: create spec for clone of database created above on new db server vm
check_mode: yes
ntnx_ndb_database_clones:
@@ -160,76 +158,73 @@
ansible-clones: ansible-test-db-clones
register: result
-
-
- set_fact:
- expected_response: {
- "actionArguments": [
- {
- "name": "db_password",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- {
- "name": "pre_clone_cmd",
- "value": "ls"
- },
- {
- "name": "post_clone_cmd",
- "value": "ls -a"
- },
- {
- "name": "dbserver_description",
- "value": "vm for db server"
- }
- ],
- "clustered": false,
- "computeProfileId": "{{compute_profile.uuid}}",
- "createDbserver": true,
- "databaseParameterProfileId": "{{db_params_profile.uuid}}",
- "description": "ansible-created-clone",
- "latestSnapshot": false,
- "lcmConfig": {
- "databaseLCMConfig": {
- "expiryDetails": {
- "deleteDatabase": true,
- "expireInDays": 2,
- "expiryDateTimezone": "Asia/Calcutta",
- "remindBeforeInDays": 1
- },
- "refreshDetails": {
- "refreshDateTimezone": "Asia/Calcutta",
- "refreshInDays": 2,
- "refreshTime": "12:00:00"
- }
- }
+ expected_response:
+ {
+ "actionArguments":
+ [
+ {
+ "name": "db_password",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ { "name": "pre_clone_cmd", "value": "ls" },
+ { "name": "post_clone_cmd", "value": "ls -a" },
+ { "name": "dbserver_description", "value": "vm for db server" },
+ ],
+ "clustered": false,
+ "computeProfileId": "{{compute_profile.uuid}}",
+ "createDbserver": true,
+ "databaseParameterProfileId": "{{db_params_profile.uuid}}",
+ "description": "ansible-created-clone",
+ "latestSnapshot": false,
+ "lcmConfig":
+ {
+ "databaseLCMConfig":
+ {
+ "expiryDetails":
+ {
+ "deleteDatabase": true,
+ "expireInDays": 2,
+ "expiryDateTimezone": "Asia/Calcutta",
+ "remindBeforeInDays": 1,
+ },
+ "refreshDetails":
+ {
+ "refreshDateTimezone": "Asia/Calcutta",
+ "refreshInDays": 2,
+ "refreshTime": "12:00:00",
+ },
+ },
+ },
+ "name": "{{clone_db1}}",
+ "networkProfileId": "{{network_profile.uuid}}",
+ "nodeCount": 1,
+ "nodes":
+ [
+ {
+ "computeProfileId": "{{compute_profile.uuid}}",
+ "networkProfileId": "{{network_profile.uuid}}",
+ "nxClusterId": "{{cluster.cluster1.uuid}}",
+ "properties": [],
+ "vmName": "{{vm1_name}}",
+ },
+ ],
+ "nxClusterId": "{{cluster.cluster1.uuid}}",
+ "snapshotId": null,
+ "sshPublicKey": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ "tags":
+ [
+ {
+ "tagId": "{{tags.clones.uuid}}",
+ "tagName": "ansible-clones",
+ "value": "ansible-test-db-clones",
},
- "name": "{{clone_db1}}",
- "networkProfileId": "{{network_profile.uuid}}",
- "nodeCount": 1,
- "nodes": [
- {
- "computeProfileId": "{{compute_profile.uuid}}",
- "networkProfileId": "{{network_profile.uuid}}",
- "nxClusterId": "{{cluster.cluster1.uuid}}",
- "properties": [],
- "vmName": "{{vm1_name}}"
- }
- ],
- "nxClusterId": "{{cluster.cluster1.uuid}}",
- "snapshotId": null,
- "sshPublicKey": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
- "tags": [
- {
- "tagId": "{{tags.clones.uuid}}",
- "tagName": "ansible-clones",
- "value": "ansible-test-db-clones"
- }
- ],
- "timeMachineId": "{{time_machine_uuid}}",
- "timeZone": "UTC",
- "userPitrTimestamp": "2023-02-04 07:29:36",
- "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- }
+ ],
+ "timeMachineId": "{{time_machine_uuid}}",
+ "timeZone": "UTC",
+ "userPitrTimestamp": "2023-02-04 07:29:36",
+ "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ }
- name: Check mode status
assert:
@@ -328,7 +323,6 @@
############################################ clone update and removal/refresh schedules related tests ###########################################
-
- name: update name, desc, tags and schedules
ntnx_ndb_database_clones:
uuid: "{{clone_uuid}}"
@@ -370,7 +364,7 @@
- result.response.tags[0].value == "ansible-test-db-clones-updated"
fail_msg: "Unable to update clone"
- success_msg: "Database clone updated succefully"
+ success_msg: "Database clone updated successfully"
- name: check idempotency
ntnx_ndb_database_clones:
@@ -393,8 +387,6 @@
ansible-clones: ansible-test-db-clones-updated
register: result
-
-
- name: check idempotency status
assert:
that:
@@ -404,7 +396,6 @@
fail_msg: "database clone got updated"
success_msg: "database clone update got skipped due to no state changes"
-
- name: remove schedules
ntnx_ndb_database_clones:
uuid: "{{clone_uuid}}"
@@ -417,8 +408,6 @@
register: result
-
-
- name: Check schedule remove status
assert:
that:
@@ -429,11 +418,10 @@
- result.response.lcmConfig.expiryDetails == None
- result.response.lcmConfig.refreshDetails == None
fail_msg: "schedules update failed"
- success_msg: "schedules removed succefully"
+ success_msg: "schedules removed successfully"
########################################### refresh clone ###########################################
-
- name: create spec for refresh clone to a pitr timestamp
check_mode: yes
ntnx_ndb_database_clone_refresh:
@@ -442,7 +430,6 @@
timezone: "UTC"
register: result
-
- name: Check refresh db with pitr spec
assert:
that:
@@ -453,15 +440,12 @@
fail_msg: "creation refresh db clone spec failed"
success_msg: "refresh db clone spec created successfully"
-
- name: refresh db clone
ntnx_ndb_database_clone_refresh:
uuid: "{{clone_uuid}}"
snapshot_uuid: "{{snapshot_uuid}}"
register: result
-
-
- name: Check database refresh status
assert:
that:
@@ -470,11 +454,10 @@
- result.uuid is defined
- result.response.status == "READY"
fail_msg: "database refresh failed"
- success_msg: "database refresh completed succefully"
+ success_msg: "database refresh completed successfully"
########################################### delete clone tests###########################################
-
- name: create soft remove spec
check_mode: yes
ntnx_ndb_database_clones:
@@ -483,8 +466,6 @@
soft_remove: true
register: result
-
-
- name: verify soft remove spec
assert:
that:
@@ -496,8 +477,6 @@
fail_msg: "creation of spec for soft remove failed"
success_msg: "spec for soft remove created successfully"
-
-
- name: create unregistration spec
check_mode: yes
ntnx_ndb_database_clones:
@@ -505,8 +484,6 @@
uuid: "{{clone_uuid}}"
register: result
-
-
- name: verify unregistration spec
assert:
that:
@@ -525,8 +502,6 @@
delete_from_vm: true
register: result
-
-
- name: verify status of db clone delete
assert:
that:
@@ -538,7 +513,6 @@
########################################### authorize and deauthorize db server vms###########################################
-
- name: authorize db server vms
ntnx_ndb_authorize_db_server_vms:
time_machine:
@@ -547,8 +521,6 @@
- name: "{{vm1_name}}"
register: result
-
-
- name: verify status of authorization of db server vms
assert:
that:
@@ -567,8 +539,6 @@
- name: "{{vm1_name}}"
register: result
-
-
- name: verify status of deauthorization of db server vms
assert:
that:
@@ -578,7 +548,6 @@
fail_msg: "database deauthorization with time machine failed"
     success_msg: "database deauthorization with time machine completed successfully"
-
- name: authorize db server vms for hosting clone
ntnx_ndb_authorize_db_server_vms:
time_machine:
@@ -587,7 +556,6 @@
- name: "{{vm1_name}}"
register: result
-
- name: verify status of authorization of db server vms
assert:
that:
@@ -599,7 +567,6 @@
############################################ clone on authorized db server vm ###########################################
-
- set_fact:
timestamp: "2123-11-08 12:36:15"
- name: create clone using snapshot on authorized server
@@ -636,8 +603,6 @@
ansible-clones: ansible-test-db-clones
register: result
-
-
- name: Clone create status on authorized db server vm
assert:
that:
@@ -652,7 +617,7 @@
- result.response.databaseNodes[0].dbserverId == db_server_uuid
- result.response.parentTimeMachineId == time_machine_uuid
fail_msg: "Unable to create clone"
- success_msg: "Database clone created succefully"
+ success_msg: "Database clone created successfully"
- set_fact:
delete_clone_uuid: "{{result.uuid}}"
@@ -683,8 +648,6 @@
- name: "{{vm1_name}}"
register: result
-
-
- name: verify status of authorization of db server vms
assert:
that:
@@ -728,8 +691,6 @@
ansible-clones: ansible-test-db-clones
register: result
-
-
- name: Clone create status on authorized db server vm
assert:
that:
@@ -746,8 +707,6 @@
fail_msg: "Unable to create clone from latest snapshot"
success_msg: "Database clone created from latest snapshot successfully"
-
-
- set_fact:
delete_clone_uuid: "{{result.uuid}}"
@@ -800,7 +759,6 @@
     success_msg: "get era clones using its id successfully"
################################################################
-
- name: get era clones with incorrect name
ntnx_ndb_clones_info:
name: "abcd"
@@ -825,7 +783,6 @@
delete_from_vm: true
register: result
-
- name: verify status of db clone delete
assert:
that:
@@ -835,7 +792,6 @@
fail_msg: "database delete failed"
     success_msg: "database deleted successfully"
-
- name: delete db server vm
ntnx_ndb_db_server_vms:
state: "absent"
@@ -852,7 +808,6 @@
     fail_msg: "db server vm delete failed"
success_msg: "db server vm deleted successfully"
-
- name: delete database created earlier
ntnx_ndb_databases:
state: "absent"
diff --git a/tests/integration/targets/ntnx_ndb_databases_actions/tasks/all_actions.yml b/tests/integration/targets/ntnx_ndb_databases_actions/tasks/all_actions.yml
index 273b84718..eaeea2156 100644
--- a/tests/integration/targets/ntnx_ndb_databases_actions/tasks/all_actions.yml
+++ b/tests/integration/targets/ntnx_ndb_databases_actions/tasks/all_actions.yml
@@ -24,7 +24,6 @@
############################################ setup db ###########################################
-
- name: create single instance postgres database on new db server vm
ntnx_ndb_databases:
wait: true
@@ -91,7 +90,6 @@
- set_fact:
db_server_uuid: "{{result.response.databaseNodes[0].dbserverId}}"
-
############################################ snapshots test ###########################################
- name: create snapshot create spec
@@ -107,27 +105,23 @@
register: result
- set_fact:
- expected_response: {
+ expected_response:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "lcmConfig": {
- "snapshotLCMConfig": {
- "expiryDetails": {
- "expireInDays": 4,
- }
- }
- },
+ "response":
+ {
+ "lcmConfig":
+ {
+ "snapshotLCMConfig": { "expiryDetails": { "expireInDays": 4 } },
+ },
"name": "{{snapshot_name}}",
- "replicateToClusterIds": [
- "{{cluster.cluster1.uuid}}",
- "test_uuid2",
- "test_uuid3"
- ]
- },
- "snapshot_uuid": null
- }
+ "replicateToClusterIds":
+ ["{{cluster.cluster1.uuid}}", "test_uuid2", "test_uuid3"],
+ },
+ "snapshot_uuid": null,
+ }
- name: Check mode status
assert:
@@ -139,14 +133,12 @@
fail_msg: "Unable to create snapshot create spec"
success_msg: "Snapshot create spec generated successfully using check mode"
-
- name: create snapshot with minimal spec
ntnx_ndb_database_snapshots:
name: "{{snapshot_name}}1"
time_machine_uuid: "{{time_machine_uuid}}"
register: result
-
- name: snapshot create status
assert:
that:
@@ -165,7 +157,6 @@
expiry_days: 4
register: result
-
- set_fact:
snapshot_uuid: "{{result.snapshot_uuid}}"
@@ -181,8 +172,6 @@
fail_msg: "Unable to create snapshot with expiry config"
success_msg: "Snapshot with expiry config created successfully"
-
-
- name: rename snapshot
ntnx_ndb_database_snapshots:
snapshot_uuid: "{{snapshot_uuid}}"
@@ -200,8 +189,6 @@
fail_msg: "Unable to rename snapshot"
success_msg: "Snapshot renamed successfully"
-
-
- name: update expiry
ntnx_ndb_database_snapshots:
snapshot_uuid: "{{snapshot_uuid}}"
@@ -219,8 +206,6 @@
fail_msg: "Unable to update snapshot expiry"
success_msg: "snapshot expiry updated successfully"
-
-
- name: remove expiry schedule
ntnx_ndb_database_snapshots:
snapshot_uuid: "{{snapshot_uuid}}"
@@ -238,7 +223,6 @@
fail_msg: "Unable to remove snapshot expiry schedule"
success_msg: "snapshot expiry schedule removed successfully"
-
- name: Add expiry schedule and rename
ntnx_ndb_database_snapshots:
snapshot_uuid: "{{snapshot_uuid}}"
@@ -259,7 +243,6 @@
fail_msg: "Unable to add expiry schedule and rename it"
success_msg: "Snapshot updated successfully"
-
- name: Idempotency check
ntnx_ndb_database_snapshots:
snapshot_uuid: "{{snapshot_uuid}}"
@@ -275,7 +258,6 @@
fail_msg: "snapshot got updated"
success_msg: "snapshot update got skipped due to no state changes"
-
############################################ log catchup ######################################
- name: create spec for log catchup
@@ -285,35 +267,29 @@
register: result
- set_fact:
- expected_response: {
+ expected_response:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "preRestoreLogCatchup",
- "value": false
- },
- {
- "name": "switch_log",
- "value": true
- }
- ],
- "forRestore": false
- }
- }
-
-
+ "response":
+ {
+ "actionArguments":
+ [
+ { "name": "preRestoreLogCatchup", "value": false },
+ { "name": "switch_log", "value": true },
+ ],
+ "forRestore": false,
+ },
+ }
- name: Check mode status
assert:
that:
- result == expected_response
- fail_msg: "Unable to create log catcup spec"
+ fail_msg: "Unable to create log catchup spec"
success_msg: "log catchup spec created successfully"
-
- name: create spec for log catchup for restore
check_mode: yes
ntnx_ndb_database_log_catchup:
@@ -322,34 +298,29 @@
register: result
- set_fact:
- expected_response: {
+ expected_response:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "preRestoreLogCatchup",
- "value": True
- },
- {
- "name": "switch_log",
- "value": true
- }
- ],
- "forRestore": true
- }
- }
-
+ "response":
+ {
+ "actionArguments":
+ [
+ { "name": "preRestoreLogCatchup", "value": True },
+ { "name": "switch_log", "value": true },
+ ],
+ "forRestore": true,
+ },
+ }
- name: Check mode status
assert:
that:
- result == expected_response
- fail_msg: "Unable to create log catcup spec"
+ fail_msg: "Unable to create log catchup spec"
success_msg: "log catchup spec created successfully"
-
- name: perform log catchup
ntnx_ndb_database_log_catchup:
time_machine_uuid: "{{time_machine_uuid}}"
@@ -377,32 +348,28 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"db_uuid": null,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "sameLocation",
- "value": true
- }
- ],
+ "response":
+ {
+ "actionArguments": [{ "name": "sameLocation", "value": true }],
"latestSnapshot": null,
"snapshotId": null,
"timeZone": "UTC",
- "userPitrTimestamp": "2023-01-02 11:02:22"
- }
- }
+ "userPitrTimestamp": "2023-01-02 11:02:22",
+ },
+ }
- name: Check mode status
assert:
that:
- result == expected_result
fail_msg: "Unable to create restore using pitr timestamp spec"
- success_msg: "Spec for database restore using pitr timetsmap created successfully"
-
+ success_msg: "Spec for database restore using pitr timestamp created successfully"
- name: create restore database spec with latest snapshot
check_mode: yes
@@ -411,25 +378,21 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"db_uuid": null,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "sameLocation",
- "value": true
- }
- ],
+ "response":
+ {
+ "actionArguments": [{ "name": "sameLocation", "value": true }],
"latestSnapshot": true,
"snapshotId": null,
"timeZone": null,
- "userPitrTimestamp": null
- }
- }
-
+ "userPitrTimestamp": null,
+ },
+ }
- name: Check mode status
assert:
@@ -438,8 +401,6 @@
fail_msg: "Unable to create restore using latest snapshot spec"
success_msg: "Spec for database restore using latest snapshot created successfully"
-
-
- name: create restore database spec using snapshot uuid
check_mode: yes
ntnx_ndb_database_restore:
@@ -448,24 +409,21 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"db_uuid": null,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "sameLocation",
- "value": true
- }
- ],
+ "response":
+ {
+ "actionArguments": [{ "name": "sameLocation", "value": true }],
"latestSnapshot": null,
"snapshotId": "{{snapshot_uuid}}",
"timeZone": null,
- "userPitrTimestamp": null
- }
- }
+ "userPitrTimestamp": null,
+ },
+ }
- name: Check mode status
assert:
@@ -474,7 +432,6 @@
fail_msg: "Unable to create restore using snapshot uuid spec"
success_msg: "Spec for database restore using snapshot uuid created successfully"
-
- name: perform restore using latest snapshot
ntnx_ndb_database_restore:
db_uuid: "{{db_uuid}}"
@@ -490,7 +447,6 @@
fail_msg: "Unable to restore database using latest snapshot"
success_msg: "database restored successfully using latest snapshot"
-
- name: perform restore using snapshot uuid
ntnx_ndb_database_restore:
db_uuid: "{{db_uuid}}"
@@ -519,33 +475,24 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"db_uuid": null,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
- {
- "name": "working_dir",
- "value": "/tmp"
- },
- {
- "name": "data_storage_size",
- "value": 10
- },
- {
- "name": "pre_script_cmd",
- "value": "ls"
- },
- {
- "name": "post_script_cmd",
- "value": "ls -a"
- }
- ],
- "applicationType": "postgres_database"
- }
- }
+ "response":
+ {
+ "actionArguments":
+ [
+ { "name": "working_dir", "value": "/tmp" },
+ { "name": "data_storage_size", "value": 10 },
+ { "name": "pre_script_cmd", "value": "ls" },
+ { "name": "post_script_cmd", "value": "ls -a" },
+ ],
+ "applicationType": "postgres_database",
+ },
+ }
- name: Check mode status
assert:
@@ -554,7 +501,6 @@
fail_msg: "Unable to create database scaling spec"
success_msg: "Spec for database scaling with pre post commands created successfully"
-
- name: extend database storage for scaling database
ntnx_ndb_database_scale:
db_uuid: "{{db_uuid}}"
@@ -575,7 +521,6 @@
############################################ add / remove linked databases ###########################################
-
- name: create databases in database instance
check_mode: yes
ntnx_ndb_linked_databases:
@@ -587,25 +532,22 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"db_instance_uuid": "{{db_uuid}}",
"error": null,
"failed": false,
- "response": {
- "databases": [
- {
- "databaseName": "test1"
- },
- {
- "databaseName": "test2"
- },
- {
- "databaseName": "test3"
- }
- ]
- }
- }
+ "response":
+ {
+ "databases":
+ [
+ { "databaseName": "test1" },
+ { "databaseName": "test2" },
+ { "databaseName": "test3" },
+ ],
+ },
+ }
- name: Check mode status
assert:
@@ -614,7 +556,6 @@
fail_msg: "Unable to create spec for adding databases in database instance"
success_msg: "Spec for adding databases in database instance created successfully"
-
- name: add databases in database instance
ntnx_ndb_linked_databases:
db_instance_uuid: "{{db_uuid}}"
@@ -627,7 +568,7 @@
- name: create linked databases to its uuid map
set_fact:
- linked_databases: "{{ linked_databases | default({}) | combine ({ item['name'] : item['id'] }) }}"
+ linked_databases: "{{ linked_databases | default({}) | combine ({ item['name'] : item['id'] }) }}"
loop: "{{result.response}}"
no_log: true
@@ -643,7 +584,6 @@
fail_msg: "Unable to add database to database instance"
success_msg: "databases added to database instance successfully"
-
- name: remove databases in database instance
ntnx_ndb_linked_databases:
state: "absent"
@@ -655,7 +595,7 @@
- name: create linked database map
set_fact:
- linked_databases: "{{ linked_databases | default({}) | combine ({ item['name'] : item['id'] }) }}"
+ linked_databases: "{{ linked_databases | default({}) | combine ({ item['name'] : item['id'] }) }}"
loop: "{{result.response}}"
no_log: true
@@ -670,10 +610,8 @@
fail_msg: "Unable to remove database from database instance"
success_msg: "linked database from database instance removed successfully"
-
############################################ cleanup ###########################################
-
- name: delete database created earlier
ntnx_ndb_databases:
state: "absent"
diff --git a/tests/integration/targets/ntnx_ndb_databases_sanity/tasks/tests.yml b/tests/integration/targets/ntnx_ndb_databases_sanity/tasks/tests.yml
index 26cc67f06..64c82eade 100644
--- a/tests/integration/targets/ntnx_ndb_databases_sanity/tasks/tests.yml
+++ b/tests/integration/targets/ntnx_ndb_databases_sanity/tasks/tests.yml
@@ -2,7 +2,6 @@
# Summary:
# This playbook will test basic database flows
-
- debug:
msg: "start ndb databases crud tests"
@@ -17,7 +16,6 @@
################################### Single instance postgres database tests #############################
-
- name: create spec for single instance postgres database on new db server vm
check_mode: yes
ntnx_ndb_databases:
@@ -83,110 +81,71 @@
register: result
- set_fact:
- expected_action_arguments: [
- {
- "name": "dbserver_description",
- "value": "vm for db server"
- },
- {
- "name": "listener_port",
- "value": "9999"
- },
- {
- "name": "auto_tune_staging_drive",
- "value": false
- },
- {
- "name": "allocate_pg_hugepage",
- "value": True
- },
- {
- "name": "cluster_database",
- "value": false
- },
- {
- "name": "auth_method",
- "value": "md5"
- },
- {
- "name": "db_password",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- {
- "name": "pre_create_script",
- "value": "ls"
- },
- {
- "name": "post_create_script",
- "value": "ls -a"
- },
- {
- "name": "database_names",
- "value": "testAnsible"
- },
- {
- "name": "database_size",
- "value": "200"
- }
- ]
+ expected_action_arguments:
+ [
+ { "name": "dbserver_description", "value": "vm for db server" },
+ { "name": "listener_port", "value": "9999" },
+ { "name": "auto_tune_staging_drive", "value": false },
+ { "name": "allocate_pg_hugepage", "value": True },
+ { "name": "cluster_database", "value": false },
+ { "name": "auth_method", "value": "md5" },
+ {
+ "name": "db_password",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ { "name": "pre_create_script", "value": "ls" },
+ { "name": "post_create_script", "value": "ls -a" },
+ { "name": "database_names", "value": "testAnsible" },
+ { "name": "database_size", "value": "200" },
+ ]
- set_fact:
- expected_time_machine_info: {
- "autoTuneLogDrive": true,
- "description": "TM-desc",
- "name": "TM1",
- "schedule": {
- "continuousSchedule": {
- "enabled": true,
- "logBackupInterval": 30,
- "snapshotsPerDay": 2
- },
- "monthlySchedule": {
- "dayOfMonth": 4,
- "enabled": true
- },
- "quartelySchedule": {
- "dayOfMonth": 4,
- "enabled": true,
- "startMonth": "JANUARY"
- },
- "snapshotTimeOfDay": {
- "hours": 11,
- "minutes": 10,
- "seconds": 2
- },
- "weeklySchedule": {
- "dayOfWeek": "WEDNESDAY",
- "enabled": true
- }
- },
- "slaId": "{{sla.uuid}}"
- }
+ expected_time_machine_info:
+ {
+ "autoTuneLogDrive": true,
+ "description": "TM-desc",
+ "name": "TM1",
+ "schedule":
+ {
+ "continuousSchedule":
+ {
+ "enabled": true,
+ "logBackupInterval": 30,
+ "snapshotsPerDay": 2,
+ },
+ "monthlySchedule": { "dayOfMonth": 4, "enabled": true },
+ "quartelySchedule":
+ { "dayOfMonth": 4, "enabled": true, "startMonth": "JANUARY" },
+ "snapshotTimeOfDay": { "hours": 11, "minutes": 10, "seconds": 2 },
+ "weeklySchedule": { "dayOfWeek": "WEDNESDAY", "enabled": true },
+ },
+ "slaId": "{{sla.uuid}}",
+ }
- set_fact:
- mainetance_tasks: {
- "maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "{{maintenance.window_uuid}}",
+ "tasks":
+ [
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
+ },
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
+ },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- name: Check mode status
assert:
@@ -205,13 +164,11 @@
- result.response.nodes | length == 1
- result.response.nodeCount == 1
- result.response.nodes[0].nxClusterId == "{{cluster.cluster1.uuid}}"
- - result.response.maintenanceTasks == mainetance_tasks
+ - result.response.maintenanceTasks == maintenance_tasks
- result.response.createDbserver == True
fail_msg: "Unable to create single instance postgres database provision spec"
success_msg: "single instance postgres database provision spec created successfully"
-
-
- name: create single instance postgres database on new db server vm
ntnx_ndb_databases:
wait: true
@@ -283,7 +240,7 @@
#
- name: create properties map
set_fact:
- properties: "{{ properties | combine ({ item['name'] : item['value'] }) }}"
+ properties: "{{ properties | combine ({ item['name'] : item['value'] }) }}"
loop: "{{result.response.properties}}"
no_log: true
#
@@ -332,7 +289,6 @@
################################### update tests #############################
-
- name: update database with check mode
check_mode: yes
ntnx_ndb_databases:
@@ -380,11 +336,9 @@
- result.response.tags[0].tagName == "{{tags.databases.name}}"
- result.response.tags[0].value == "single-instance-dbs-updated"
-
fail_msg: "Unable to update single instance postgres database"
success_msg: "single instance postgres database updated successfully"
-
- name: idempotency checks
ntnx_ndb_databases:
wait: true
@@ -427,8 +381,6 @@
fail_msg: "creation of spec for delete db from vm failed"
success_msg: "spec for delete db from vm created successfully"
-
-
- name: create spec for soft remove
check_mode: yes
ntnx_ndb_databases:
@@ -451,7 +403,6 @@
fail_msg: "creation of spec for soft remove with time machine delete failed"
success_msg: "spec for soft remove with time machine delete created successfully"
-
#####################################INFO Module tests#######################################################
- debug:
@@ -525,7 +476,6 @@
fail_msg: "Unable to Get era databases using its id"
success_msg: "Get era databases using its id finished successfully"
-
################################################################
- name: get era database with incorrect name
@@ -546,7 +496,6 @@
############################################################################################
-
- name: unregister db along with delete time machine
ntnx_ndb_databases:
db_uuid: "{{db_uuid}}"
@@ -564,7 +513,6 @@
fail_msg: "database delete failed"
success_msg: "database deleted successfully"
-
- name: delete db server vm
ntnx_ndb_db_server_vms:
state: "absent"
diff --git a/tests/integration/targets/ntnx_ndb_databases_single_instance_1/tasks/tests.yml b/tests/integration/targets/ntnx_ndb_databases_single_instance_1/tasks/tests.yml
index 73de26640..ac0bcaa97 100644
--- a/tests/integration/targets/ntnx_ndb_databases_single_instance_1/tasks/tests.yml
+++ b/tests/integration/targets/ntnx_ndb_databases_single_instance_1/tasks/tests.yml
@@ -20,7 +20,6 @@
################################### Single instance postgres database tests #############################
-
- name: create spec for single instance postgres database on new db server vm
check_mode: yes
ntnx_ndb_databases:
@@ -86,110 +85,71 @@
register: result
- set_fact:
- expected_action_arguments: [
- {
- "name": "dbserver_description",
- "value": "vm for db server"
- },
- {
- "name": "listener_port",
- "value": "9999"
- },
- {
- "name": "auto_tune_staging_drive",
- "value": false
- },
- {
- "name": "allocate_pg_hugepage",
- "value": True
- },
- {
- "name": "cluster_database",
- "value": false
- },
- {
- "name": "auth_method",
- "value": "md5"
- },
- {
- "name": "db_password",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- {
- "name": "pre_create_script",
- "value": "ls"
- },
- {
- "name": "post_create_script",
- "value": "ls -a"
- },
- {
- "name": "database_names",
- "value": "testAnsible"
- },
- {
- "name": "database_size",
- "value": "200"
- }
- ]
+ expected_action_arguments:
+ [
+ { "name": "dbserver_description", "value": "vm for db server" },
+ { "name": "listener_port", "value": "9999" },
+ { "name": "auto_tune_staging_drive", "value": false },
+ { "name": "allocate_pg_hugepage", "value": true },
+ { "name": "cluster_database", "value": false },
+ { "name": "auth_method", "value": "md5" },
+ {
+ "name": "db_password",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ { "name": "pre_create_script", "value": "ls" },
+ { "name": "post_create_script", "value": "ls -a" },
+ { "name": "database_names", "value": "testAnsible" },
+ { "name": "database_size", "value": "200" },
+ ]
- set_fact:
- expected_time_machine_info: {
- "autoTuneLogDrive": true,
- "description": "TM-desc",
- "name": "TM1",
- "schedule": {
- "continuousSchedule": {
- "enabled": true,
- "logBackupInterval": 30,
- "snapshotsPerDay": 2
- },
- "monthlySchedule": {
- "dayOfMonth": 4,
- "enabled": true
- },
- "quartelySchedule": {
- "dayOfMonth": 4,
- "enabled": true,
- "startMonth": "JANUARY"
- },
- "snapshotTimeOfDay": {
- "hours": 11,
- "minutes": 10,
- "seconds": 2
- },
- "weeklySchedule": {
- "dayOfWeek": "WEDNESDAY",
- "enabled": true
- }
- },
- "slaId": "{{sla.uuid}}"
- }
+ expected_time_machine_info:
+ {
+ "autoTuneLogDrive": true,
+ "description": "TM-desc",
+ "name": "TM1",
+ "schedule":
+ {
+ "continuousSchedule":
+ {
+ "enabled": true,
+ "logBackupInterval": 30,
+ "snapshotsPerDay": 2,
+ },
+ "monthlySchedule": { "dayOfMonth": 4, "enabled": true },
+ "quartelySchedule":
+ { "dayOfMonth": 4, "enabled": true, "startMonth": "JANUARY" },
+ "snapshotTimeOfDay": { "hours": 11, "minutes": 10, "seconds": 2 },
+ "weeklySchedule": { "dayOfWeek": "WEDNESDAY", "enabled": true },
+ },
+ "slaId": "{{sla.uuid}}",
+ }
- set_fact:
- mainetance_tasks: {
- "maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "{{maintenance.window_uuid}}",
+ "tasks":
+ [
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
+ },
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
+ },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- name: Check mode status
assert:
@@ -208,13 +168,11 @@
- result.response.nodes | length == 1
- result.response.nodeCount == 1
- result.response.nodes[0].nxClusterId == "{{cluster.cluster1.uuid}}"
- - result.response.maintenanceTasks == mainetance_tasks
+ - result.response.maintenanceTasks == maintenance_tasks
- result.response.createDbserver == True
fail_msg: "Unable to create single instance postgres database provision spec"
success_msg: "single instance postgres database provision spec created successfully"
-
-
- name: create single instance postgres database on new db server vm
ntnx_ndb_databases:
wait: true
@@ -281,7 +239,7 @@
# {% raw %}
- name: create properties map
set_fact:
- properties: "{{ properties | default({}) | combine ({ item['name'] : item['value'] }) }}"
+ properties: "{{ properties | default({}) | combine ({ item['name'] : item['value'] }) }}"
loop: "{{result.response.properties}}"
no_log: true
# {% endraw %}
@@ -330,7 +288,6 @@
################################### update tests #############################
-
- name: update database with check mode
check_mode: yes
ntnx_ndb_databases:
@@ -378,11 +335,9 @@
- result.response.tags[0].tagName == "{{tags.databases.name}}"
- result.response.tags[0].value == "single-instance-dbs-updated"
-
fail_msg: "Unable to update single instance postgres database"
success_msg: "single instance postgres database updated successfully"
-
- name: idempotency checks
ntnx_ndb_databases:
wait: true
@@ -425,8 +380,6 @@
fail_msg: "creation of spec for delete db from vm failed"
success_msg: "spec for delete db from vm created successfully"
-
-
- name: create spec for soft remove
check_mode: yes
ntnx_ndb_databases:
@@ -449,7 +402,6 @@
fail_msg: "creation of spec for soft remove with time machine delete failed"
success_msg: "spec for soft remove with time machine delete created successfully"
-
- name: unregister db along with delete time machine
ntnx_ndb_databases:
state: "absent"
@@ -469,7 +421,6 @@
################################### single instance postgres database registration tests #############################
-
- name: create spec for registering previously unregistered database from previously created VM's ip
check_mode: yes
ntnx_ndb_register_database:
@@ -519,86 +470,68 @@
register: result
- set_fact:
- expected_action_arguments: [
- {
- "name": "listener_port",
- "value": "9999"
- },
- {
- "name": "db_name",
- "value": "testAnsible1"
- },
- {
- "name": "db_user",
- "value": "postgres"
- },
- {
- "name": "db_password",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- {
- "name": "postgres_software_home",
- "value": "{{postgres.software_home}}"
- }
- ]
+ expected_action_arguments:
+ [
+ { "name": "listener_port", "value": "9999" },
+ { "name": "db_name", "value": "testAnsible1" },
+ { "name": "db_user", "value": "postgres" },
+ {
+ "name": "db_password",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ {
+ "name": "postgres_software_home",
+ "value": "{{postgres.software_home}}",
+ },
+ ]
- set_fact:
- expected_time_machine_info: {
- "autoTuneLogDrive": true,
- "description": "TM-desc",
- "name": "TM1",
- "schedule": {
- "continuousSchedule": {
- "enabled": true,
- "logBackupInterval": 30,
- "snapshotsPerDay": 2
- },
- "monthlySchedule": {
- "dayOfMonth": 4,
- "enabled": true
- },
- "quartelySchedule": {
- "dayOfMonth": 4,
- "enabled": true,
- "startMonth": "JANUARY"
- },
- "snapshotTimeOfDay": {
- "hours": 11,
- "minutes": 10,
- "seconds": 2
- },
- "weeklySchedule": {
- "dayOfWeek": "WEDNESDAY",
- "enabled": true
- }
- },
- "slaId": "{{sla.uuid}}"
- }
+ expected_time_machine_info:
+ {
+ "autoTuneLogDrive": true,
+ "description": "TM-desc",
+ "name": "TM1",
+ "schedule":
+ {
+ "continuousSchedule":
+ {
+ "enabled": true,
+ "logBackupInterval": 30,
+ "snapshotsPerDay": 2,
+ },
+ "monthlySchedule": { "dayOfMonth": 4, "enabled": true },
+ "quartelySchedule":
+ { "dayOfMonth": 4, "enabled": true, "startMonth": "JANUARY" },
+ "snapshotTimeOfDay": { "hours": 11, "minutes": 10, "seconds": 2 },
+ "weeklySchedule": { "dayOfWeek": "WEDNESDAY", "enabled": true },
+ },
+ "slaId": "{{sla.uuid}}",
+ }
- set_fact:
- mainetance_tasks: {
- "maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "{{maintenance.window_uuid}}",
+ "tasks":
+ [
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
+ },
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
+ },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- name: Check mode status
assert:
@@ -612,12 +545,11 @@
- result.response.autoTuneStagingDrive == False
- result.response.timeMachineInfo == expected_time_machine_info
- result.response.vmIp == "10.10.10.10"
- - result.response.maintenanceTasks == mainetance_tasks
+ - result.response.maintenanceTasks == maintenance_tasks
- result.response.workingDirectory == "/check"
fail_msg: "Unable to create register database spec"
success_msg: "single instance postgres database register spec created successfully"
-
- name: register previously unregistered database from previously created VM
ntnx_ndb_register_database:
wait: true
@@ -694,7 +626,6 @@
fail_msg: "Unable to register single instance postgres database"
success_msg: "single instance postgres database registered successfully"
-
- set_fact:
db_uuid: "{{result.db_uuid}}"
#####################################INFO Module tests#######################################################
@@ -770,7 +701,6 @@
fail_msg: "Unable to Get era databases using its id"
success_msg: "Get era databases using its id finished successfully"
-
################################################################
- name: get era database with incorrect name
@@ -791,7 +721,6 @@
############################################################################################
-
- name: unregister db along with delete time machine
ntnx_ndb_databases:
db_uuid: "{{db_uuid}}"
@@ -809,7 +738,6 @@
fail_msg: "database delete failed"
success_msg: "database deleted successfully"
-
- name: delete db server vm
ntnx_ndb_db_server_vms:
state: "absent"
diff --git a/tests/integration/targets/ntnx_ndb_databases_single_instance_2/tasks/tests.yml b/tests/integration/targets/ntnx_ndb_databases_single_instance_2/tasks/tests.yml
index f213c1b8d..43ae28849 100644
--- a/tests/integration/targets/ntnx_ndb_databases_single_instance_2/tasks/tests.yml
+++ b/tests/integration/targets/ntnx_ndb_databases_single_instance_2/tasks/tests.yml
@@ -53,7 +53,6 @@
- set_fact:
_vm_ip: "{{ result.response.ipAddresses[0] }}"
-
- name: create new single instance postgres database on vm created earlier
ntnx_ndb_databases:
wait: true
@@ -96,7 +95,7 @@
# {% raw %}
- name: create properties map
set_fact:
- properties: "{{ properties | default({}) | combine ({ item['name'] : item['value'] }) }}"
+ properties: "{{ properties | default({}) | combine ({ item['name'] : item['value'] }) }}"
loop: "{{result.response.properties}}"
no_log: true
@@ -128,8 +127,7 @@
fail_msg: "Unable to create single instance postgres database"
success_msg: "single instance postgres database created successfully"
-
-- name: unregister db along with delete time machine and unregister db servr vm
+- name: unregister db along with delete time machine and unregister db server vm
ntnx_ndb_databases:
state: "absent"
db_uuid: "{{db_uuid}}"
@@ -148,7 +146,6 @@
fail_msg: "database unregistration failed"
success_msg: "database unregistered successfully"
-
- name: create spec for registering previously unregistered DB from previously unregistered DB server vm
check_mode: yes
ntnx_ndb_register_database:
@@ -193,68 +190,57 @@
register: result
+- set_fact:
+ expected_action_arguments:
+ [
+ { "name": "vmIp", "value": "{{_vm_ip}}" },
+ { "name": "listener_port", "value": "5432" },
+ { "name": "db_name", "value": "testAnsible1" },
+ { "name": "db_user", "value": "postgres" },
+ {
+ "name": "db_password",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ {
+ "name": "postgres_software_home",
+ "value": "{{postgres.software_home}}",
+ },
+ ]
- set_fact:
- expected_action_arguments: [
- {
- "name": "vmIp",
- "value": "{{_vm_ip}}"
- },
- {
- "name": "listener_port",
- "value": "5432"
- },
- {
- "name": "db_name",
- "value": "testAnsible1"
- },
+ expected_time_machine_info:
+ {
+ "autoTuneLogDrive": true,
+ "description": "TM-desc",
+ "name": "TM1",
+ "schedule": {},
+ "slaId": "{{sla.uuid}}",
+ }
+
+- set_fact:
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "{{maintenance.window_uuid}}",
+ "tasks":
+ [
+ {
+ "payload":
{
- "name": "db_user",
- "value": "postgres"
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
},
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
{
- "name": "db_password",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
},
- {
- "name": "postgres_software_home",
- "value": "{{postgres.software_home}}"
- }
- ]
-
-- set_fact:
- expected_time_machine_info: {
- "autoTuneLogDrive": true,
- "description": "TM-desc",
- "name": "TM1",
- "schedule": {},
- "slaId": "{{sla.uuid}}"
- }
-
-- set_fact:
- mainetance_tasks: {
- "maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- name: Check mode status
assert:
@@ -272,13 +258,11 @@
- result.response.databaseType == "postgres_database"
- result.response.timeMachineInfo == expected_time_machine_info
- result.response.nxClusterId == cluster.cluster1.uuid
- - result.response.maintenanceTasks == mainetance_tasks
+ - result.response.maintenanceTasks == maintenance_tasks
- result.response.workingDirectory == "/tmp"
fail_msg: "Unable to create register database spec"
success_msg: "single instance postgres database register spec created successfully"
-
-
- name: register previously unregistered DB from previously unregistered DB server vm
ntnx_ndb_register_database:
wait: true
diff --git a/tests/integration/targets/ntnx_ndb_db_server_vms/tasks/crud.yml b/tests/integration/targets/ntnx_ndb_db_server_vms/tasks/crud.yml
index 60c7fd4e8..afc6f1b53 100644
--- a/tests/integration/targets/ntnx_ndb_db_server_vms/tasks/crud.yml
+++ b/tests/integration/targets/ntnx_ndb_db_server_vms/tasks/crud.yml
@@ -1,5 +1,4 @@
---
-
- debug:
msg: "start ntnx_ndb_db_server_vms, ntnx_ndb_register_db_server_vm, ntnx_ndb_db_servers_info and ntnx_ndb_maintenance_tasks tests. Approx Time: < 30 mins"
@@ -53,73 +52,76 @@
# {% endraw %}
- set_fact:
- mainetance_tasks: {
- "maintenanceWindowId": "test_window_uuid",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "test_window_uuid",
+ "tasks":
+ [
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
+ },
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
+ },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
+ "response":
+ {
+ "actionArguments":
+ [
{
- "name": "vm_name",
- "value": "ansible-created-vm1-from-time-machine"
+ "name": "vm_name",
+ "value": "ansible-created-vm1-from-time-machine",
},
{
- "name": "client_public_key",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- }
- ],
+ "name": "client_public_key",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ ],
"computeProfileId": "test_compute_uuid",
"databaseType": "postgres_database",
"description": "ansible-created-vm1-from-time-machine-time-machine",
"latestSnapshot": false,
- "maintenanceTasks": {
+ "maintenanceTasks":
+ {
"maintenanceWindowId": "test_window_uuid",
- "tasks": [
+ "tasks":
+ [
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
},
- "taskType": "OS_PATCHING"
+ "taskType": "OS_PATCHING",
},
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
},
- "taskType": "DB_PATCHING"
- }
- ]
- },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ },
"networkProfileId": "test_network_uuid",
"nxClusterId": "test_cluster_uuid",
"snapshotId": "test_snapshot_uuid",
@@ -127,10 +129,10 @@
"softwareProfileVersionId": "",
"timeMachineId": "test_uuid",
"timeZone": "Asia/Calcutta",
- "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- "uuid": null
- }
+ "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ "uuid": null,
+ }
- name: Check mode Status
assert:
@@ -139,7 +141,6 @@
fail_msg: "Unable to generate create db server vm spec with time machine as source"
success_msg: "DB server VM spec created successfully"
-
- name: create spec for db server vm using software profile and names of profile
check_mode: yes
ntnx_ndb_db_server_vms:
@@ -171,57 +172,57 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "actionArguments": [
+ "response":
+ {
+ "actionArguments":
+ [
+ { "name": "vm_name", "value": "{{ vm1_name }}" },
{
- "name": "vm_name",
- "value": "{{ vm1_name }}"
+ "name": "client_public_key",
+ "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
},
- {
- "name": "client_public_key",
- "value": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- }
- ],
+ ],
"computeProfileId": "{{ compute_profile.uuid }}",
"databaseType": "postgres_database",
"description": "ansible-created-vm1-desc",
"latestSnapshot": false,
- "maintenanceTasks": {
+ "maintenanceTasks":
+ {
"maintenanceWindowId": "{{ maintenance.window_uuid }}",
- "tasks": [
+ "tasks":
+ [
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
},
- "taskType": "OS_PATCHING"
+ "taskType": "OS_PATCHING",
},
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
},
- "taskType": "DB_PATCHING"
- }
- ]
- },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ },
"networkProfileId": "{{ network_profile.uuid }}",
"nxClusterId": "{{ cluster.cluster1.uuid }}",
"softwareProfileId": "{{ software_profile.uuid }}",
"softwareProfileVersionId": "{{ software_profile.latest_version_id }}",
"timeZone": "UTC",
- "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
- },
- "uuid": null
- }
+ "vmPassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
+ },
+ "uuid": null,
+ }
- name: Check mode Status
assert:
@@ -230,7 +231,6 @@
fail_msg: "Unable to generate create db server vm spec with time machine as source and given names of profile"
success_msg: "DB server VM spec created successfully"
-
- name: create db server vm using software profile
ntnx_ndb_db_server_vms:
wait: True
@@ -292,7 +292,6 @@
- set_fact:
vm_ip: "{{ result.response.ipAddresses[0] }}"
-
################################### DB server VM update Tests #############################
- name: update db server vm name, desc, credentials, tags
@@ -330,12 +329,12 @@
- name: check idempotency
ntnx_ndb_db_server_vms:
- wait: True
- uuid: "{{db_server_uuid}}"
- name: "{{vm1_name_updated}}"
- desc: "ansible-created-vm1-updated-desc"
- tags:
- ansible-db-server-vms: "ansible-updated"
+ wait: True
+ uuid: "{{db_server_uuid}}"
+ name: "{{vm1_name_updated}}"
+ desc: "ansible-created-vm1-updated-desc"
+ tags:
+ ansible-db-server-vms: "ansible-updated"
register: result
- name: check idempotency status
@@ -347,7 +346,6 @@
fail_msg: "db server vm got updated"
success_msg: "db server vm update skipped successfully due to no changes in state"
-
- name: update db server vm name with check mode and check defaults
check_mode: yes
ntnx_ndb_db_server_vms:
@@ -379,7 +377,6 @@
ntnx_ndb_db_servers_info:
register: db_servers
-
- name: check listing status
assert:
that:
@@ -518,7 +515,6 @@
fail_msg: "module didn't errored out correctly when incorrect name is given"
success_msg: "module errored out correctly when incorrect name is given"
-
################################### maintenance tasks update tests #############################
- name: create spec for adding maintenance window tasks to db server vm
@@ -528,8 +524,8 @@
- name: "{{vm1_name_updated}}"
- uuid: "test_vm_1"
db_server_clusters:
- - uuid: "test_cluter_1"
- - uuid: "test_cluter_2"
+ - uuid: "test_cluster_1"
+ - uuid: "test_cluster_2"
maintenance_window:
name: "{{maintenance.window_name}}"
tasks:
@@ -542,45 +538,41 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "entities": {
- "ERA_DBSERVER": [
- "{{db_server_uuid}}",
- "test_vm_1"
- ],
- "ERA_DBSERVER_CLUSTER": [
- "test_cluter_1",
- "test_cluter_2"
- ]
- },
+ "response":
+ {
+ "entities":
+ {
+ "ERA_DBSERVER": ["{{db_server_uuid}}", "test_vm_1"],
+ "ERA_DBSERVER_CLUSTER": ["test_cluster_1", "test_cluster_2"],
+ },
"maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
+ "tasks":
+ [
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls",
- "preCommand": "ls -a"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls", "preCommand": "ls -a" },
},
- "taskType": "OS_PATCHING"
+ "taskType": "OS_PATCHING",
},
{
- "payload": {
- "prePostCommand": {
- "postCommand": "ls",
- "preCommand": "ls -a"
- }
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls", "preCommand": "ls -a" },
},
- "taskType": "DB_PATCHING"
- }
- ]
- },
- "uuid": "{{maintenance.window_uuid}}"
- }
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ },
+ "uuid": "{{maintenance.window_uuid}}",
+ }
- name: Check mode status
assert:
@@ -590,7 +582,6 @@
fail_msg: "Unable to create spec for adding maintenance tasks for db server vm"
success_msg: "spec for adding maintenance tasks for db server vm created successfully"
-
- name: create spec for removing maintenance window tasks from above created vm
check_mode: yes
ntnx_ndb_maintenance_tasks:
@@ -602,21 +593,19 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "entities": {
- "ERA_DBSERVER": [
- "{{db_server_uuid}}"
- ]
- },
+ "response":
+ {
+ "entities": { "ERA_DBSERVER": ["{{db_server_uuid}}"] },
"maintenanceWindowId": "{{maintenance.window_uuid}}",
"tasks": [],
},
- "uuid": "{{maintenance.window_uuid}}"
- }
+ "uuid": "{{maintenance.window_uuid}}",
+ }
- name: Check mode status
assert:
@@ -626,7 +615,6 @@
fail_msg: "Unable to create spec for removing maintenance tasks for db server vm"
success_msg: "spec for removing maintenance tasks for db server vm created successfully"
-
- name: db server vm already contains some tasks so remove maintenance window tasks from above created vm
ntnx_ndb_maintenance_tasks:
db_server_vms:
@@ -662,7 +650,6 @@
fail_msg: "Unable to remove maintenance tasks for given db server vm"
success_msg: "maintenance tasks for given db server vm removed successfully"
-
- name: Add maintenance window task for vm
ntnx_ndb_maintenance_tasks:
db_server_vms:
@@ -724,7 +711,6 @@
fail_msg: "Unable to remove maintenance tasks for given db server vm"
success_msg: "maintenance tasks for given db server vm removed successfully"
-
################################### DB server VM unregistration tests #############################
- name: generate check mode spec for unregister with default values
@@ -749,7 +735,6 @@
fail_msg: "Unable to generate check mode spec for unregister"
success_msg: "DB server VM unregister spec generated successfully"
-
- name: generate check mode spec for delete vm with vgs and snapshots
check_mode: yes
ntnx_ndb_db_server_vms:
@@ -774,7 +759,6 @@
fail_msg: "Unable to generate check mode spec for unregister"
success_msg: "DB server VM update spec generated successfully"
-
- name: unregister vm
ntnx_ndb_db_server_vms:
state: "absent"
@@ -797,7 +781,6 @@
################################### DB server VM Registration tests #############################
-
- name: generate spec for registration of the previous unregistered vm using check mode
check_mode: yes
ntnx_ndb_register_db_server_vm:
@@ -830,36 +813,36 @@
# {% raw %}
- name: create action_arguments map
set_fact:
- action_arguments: "{{ action_arguments | default({}) | combine ({ item['name'] : item['value'] }) }}"
+ action_arguments: "{{ action_arguments | default({}) | combine ({ item['name'] : item['value'] }) }}"
loop: "{{result.response.actionArguments}}"
no_log: true
# {% endraw %}
- set_fact:
- maintenance_tasks: {
- "maintenanceWindowId": "{{maintenance.window_uuid}}",
- "tasks": [
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -a",
- "preCommand": "ls"
- }
- },
- "taskType": "OS_PATCHING"
- },
- {
- "payload": {
- "prePostCommand": {
- "postCommand": "ls -F",
- "preCommand": "ls -l"
- }
- },
- "taskType": "DB_PATCHING"
- }
- ]
- }
+ maintenance_tasks:
+ {
+ "maintenanceWindowId": "{{maintenance.window_uuid}}",
+ "tasks":
+ [
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -a", "preCommand": "ls" },
+ },
+ "taskType": "OS_PATCHING",
+ },
+ {
+ "payload":
+ {
+ "prePostCommand":
+ { "postCommand": "ls -F", "preCommand": "ls -l" },
+ },
+ "taskType": "DB_PATCHING",
+ },
+ ],
+ }
- name: Check mode status
assert:
@@ -879,7 +862,6 @@
fail_msg: "Unable to create spec for db server vm registration"
success_msg: "DB server VM registration spec generated successfully"
-
- name: register the previous unregistered vm
ntnx_ndb_register_db_server_vm:
ip: "{{vm_ip}}"
@@ -908,7 +890,7 @@
# {% raw %}
- name: create properties map
set_fact:
- properties1: "{{ properties1 | default({}) | combine ({ item['name'] : item['value'] }) }}"
+ properties1: "{{ properties1 | default({}) | combine ({ item['name'] : item['value'] }) }}"
loop: "{{result.response.properties}}"
no_log: true
# {% endraw %}
@@ -934,13 +916,11 @@
fail_msg: "Unable to create db server vm using software profile"
success_msg: "DB server VM created successfully"
-
- set_fact:
db_server_uuid: "{{result.uuid}}"
################################### DB server VM Delete test #############################
-
- name: unregister db server vm
ntnx_ndb_db_server_vms:
state: "absent"
diff --git a/tests/integration/targets/ntnx_ndb_maintenance_windows/readme.md b/tests/integration/targets/ntnx_ndb_maintenance_windows/readme.md
index 8735ed118..a2e631b40 100644
--- a/tests/integration/targets/ntnx_ndb_maintenance_windows/readme.md
+++ b/tests/integration/targets/ntnx_ndb_maintenance_windows/readme.md
@@ -1,3 +1,4 @@
### Modules Tested:
-1. ntnx_ndb_maitenance_window
-2. ntnx_ndb_maitenance_windows_info
+
+1. ntnx_ndb_maintenance_window
+2. ntnx_ndb_maintenance_windows_info
diff --git a/tests/integration/targets/ntnx_ndb_maintenance_windows/tasks/crud.yml b/tests/integration/targets/ntnx_ndb_maintenance_windows/tasks/crud.yml
index a73653dd9..8e6a4b4bb 100644
--- a/tests/integration/targets/ntnx_ndb_maintenance_windows/tasks/crud.yml
+++ b/tests/integration/targets/ntnx_ndb_maintenance_windows/tasks/crud.yml
@@ -1,5 +1,4 @@
---
-
- debug:
msg: "start ndb database maintenance window tests"
@@ -16,7 +15,7 @@
check_mode: yes
ntnx_ndb_maintenance_window:
name: "{{window1_name}}"
- desc: "anisble-created-window"
+ desc: "ansible-created-window"
schedule:
recurrence: "weekly"
duration: 2
@@ -26,24 +25,27 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
- "response": {
- "description": "anisble-created-window",
+ "response":
+ {
+ "description": "ansible-created-window",
"name": "{{window1_name}}",
- "schedule": {
+ "schedule":
+ {
"dayOfWeek": "TUESDAY",
"duration": 2,
"recurrence": "WEEKLY",
"startTime": "11:00:00",
- "weekOfMonth": null
- },
- "timezone": "Asia/Calcutta"
- },
- "uuid": null
- }
+ "weekOfMonth": null,
+ },
+ "timezone": "Asia/Calcutta",
+ },
+ "uuid": null,
+ }
- name: Check mode status
assert:
@@ -52,11 +54,10 @@
fail_msg: "Unable to create spec for creating window"
success_msg: "spec for maintenance window generated successfully"
-
- name: create window with weekly schedule
ntnx_ndb_maintenance_window:
name: "{{window1_name}}"
- desc: "anisble-created-window"
+ desc: "ansible-created-window"
schedule:
recurrence: "weekly"
duration: 2
@@ -77,7 +78,7 @@
- result.uuid is defined
- result.response.status == "ACTIVE" or result.response.status == "SCHEDULED"
- result.response.name == window1_name
- - result.response.description == "anisble-created-window"
+ - result.response.description == "ansible-created-window"
- result.response.schedule.dayOfWeek == "TUESDAY"
- result.response.schedule.recurrence == "WEEKLY"
- result.response.schedule.startTime == "11:00:00"
@@ -88,11 +89,10 @@
fail_msg: "Unable to create maintenance window with weekly schedule"
success_msg: "maintenance window with weekly schedule created successfully"
-
- name: create window with monthly schedule
ntnx_ndb_maintenance_window:
name: "{{window2_name}}"
- desc: "anisble-created-window"
+ desc: "ansible-created-window"
schedule:
recurrence: "monthly"
duration: 2
@@ -115,7 +115,7 @@
- result.uuid is defined
- result.response.status == "ACTIVE" or result.response.status == "SCHEDULED"
- result.response.name == window2_name
- - result.response.description == "anisble-created-window"
+ - result.response.description == "ansible-created-window"
- result.response.schedule.dayOfWeek == "TUESDAY"
- result.response.schedule.recurrence == "MONTHLY"
- result.response.schedule.startTime == "11:00:00"
@@ -123,7 +123,6 @@
- result.response.schedule.weekOfMonth == 2
- result.response.schedule.duration == 2
-
fail_msg: "Unable to create maintenance window with monthly schedule"
success_msg: "maintenance window with monthly schedule created successfully"
@@ -163,12 +162,11 @@
############################################## update tests ####################################
-
- name: update window schedule
ntnx_ndb_maintenance_window:
uuid: "{{window2_uuid}}"
name: "{{window2_name}}-updated"
- desc: "anisble-created-window-updated"
+ desc: "ansible-created-window-updated"
schedule:
recurrence: "monthly"
duration: 3
@@ -187,7 +185,7 @@
- result.uuid is defined
- result.response.status == "ACTIVE" or result.response.status == "SCHEDULED"
- result.response.name == "{{window2_name}}-updated"
- - result.response.description == "anisble-created-window-updated"
+ - result.response.description == "ansible-created-window-updated"
- result.response.schedule.dayOfWeek == "WEDNESDAY"
- result.response.schedule.recurrence == "MONTHLY"
- result.response.schedule.startTime == "12:00:00"
@@ -195,7 +193,6 @@
- result.response.schedule.weekOfMonth == 3
- result.response.schedule.duration == 3
-
fail_msg: "Unable to update maintenance window"
success_msg: "maintenance window updated successfully"
@@ -220,7 +217,7 @@
- result.uuid is defined
- result.response.status == "ACTIVE" or result.response.status == "SCHEDULED"
- result.response.name == "{{window2_name}}-updated"
- - result.response.description == "anisble-created-window-updated"
+ - result.response.description == "ansible-created-window-updated"
- result.response.schedule.dayOfWeek == "WEDNESDAY"
- result.response.schedule.recurrence == "WEEKLY"
- result.response.schedule.startTime == "12:00:00"
@@ -228,7 +225,6 @@
- result.response.schedule.weekOfMonth == None
- result.response.schedule.duration == 3
-
fail_msg: "Unable to update maintenance window"
success_msg: "maintenance window updated successfully"
@@ -236,7 +232,7 @@
ntnx_ndb_maintenance_window:
uuid: "{{window2_uuid}}"
name: "{{window2_name}}-updated"
- desc: "anisble-created-window-updated"
+ desc: "ansible-created-window-updated"
schedule:
recurrence: "weekly"
duration: 3
@@ -263,7 +259,6 @@
register: result
-
- name: update status
assert:
that:
@@ -280,7 +275,6 @@
- result.response.schedule.weekOfMonth == None
- result.response.schedule.duration == 3
-
fail_msg: "Unable to update maintenance window"
success_msg: "maintenance window updated successfully"
@@ -312,7 +306,6 @@
fail_msg: "Unable to update maintenance window"
success_msg: "maintenance window updated successfully"
-
############################################## delete tests ####################################
- name: delete window 1
@@ -336,7 +329,6 @@
state: "absent"
register: result
-
- name: check delete status
assert:
that:
diff --git a/tests/integration/targets/ntnx_ndb_profiles/tasks/compute.yml b/tests/integration/targets/ntnx_ndb_profiles/tasks/compute.yml
index 081e0e8f9..8bbe06617 100644
--- a/tests/integration/targets/ntnx_ndb_profiles/tasks/compute.yml
+++ b/tests/integration/targets/ntnx_ndb_profiles/tasks/compute.yml
@@ -147,7 +147,7 @@
fail_msg: "Fail: unable to verify unpublish flow in compute profile "
success_msg: "Pass: verify unpublish flow in compute profile finished successfully"
################################################################
-- name: Delete all created cmpute profiles
+- name: Delete all created compute profiles
ntnx_ndb_profiles:
state: absent
profile_uuid: "{{ item }}"
diff --git a/tests/integration/targets/ntnx_ndb_profiles/tasks/db_params.yml b/tests/integration/targets/ntnx_ndb_profiles/tasks/db_params.yml
index 69c8634a8..8f4a0165b 100644
--- a/tests/integration/targets/ntnx_ndb_profiles/tasks/db_params.yml
+++ b/tests/integration/targets/ntnx_ndb_profiles/tasks/db_params.yml
@@ -23,7 +23,7 @@
autovacuum_vacuum_scale_factor: 0.3
autovacuum_work_mem: 1
autovacuum_max_workers: 2
- autovacuum_vacuum_cost_delay: 22
+ autovacuum_vacuum_cost_delay: 22
wal_buffers: 1
synchronous_commit: local
random_page_cost: 3
@@ -61,14 +61,13 @@
autovacuum_vacuum_scale_factor: "{{autovacuum_vacuum_scale_factor}}"
autovacuum_work_mem: "{{autovacuum_work_mem}}"
autovacuum_max_workers: "{{autovacuum_max_workers}}"
- autovacuum_vacuum_cost_delay: "{{autovacuum_vacuum_cost_delay}}"
+ autovacuum_vacuum_cost_delay: "{{autovacuum_vacuum_cost_delay}}"
wal_buffers: "{{wal_buffers}}"
synchronous_commit: "{{synchronous_commit}}"
random_page_cost: "{{random_page_cost}}"
register: result
ignore_errors: true
-
- name: check listing status
assert:
that:
@@ -151,7 +150,6 @@
register: result
ignore_errors: true
-
- name: check listing status
assert:
that:
@@ -162,7 +160,7 @@
fail_msg: "Fail: verify unpublish flow in database_parameter profile "
success_msg: "Pass: verify unpublish flow in database_parameter profile finished successfully "
################################################################
-- name: verify creatition of db params profile with defaults
+- name: verify creation of db params profile with defaults
ntnx_ndb_profiles:
name: "{{profile3_name}}"
desc: "testdesc"
@@ -181,8 +179,8 @@
- result.response.description == "testdesc"
- result.response.type == "Database_Parameter"
- result.response.versions is defined
- fail_msg: "Fail: Unable to verify creatition of db params profile with defaults "
- success_msg: "Pass: verify creatition of db params profile with defaults finished successfully "
+ fail_msg: "Fail: Unable to verify creation of db params profile with defaults "
+ success_msg: "Pass: verify creation of db params profile with defaults finished successfully "
- set_fact:
todelete: "{{ todelete + [ result.profile_uuid ] }}"
diff --git a/tests/integration/targets/ntnx_ndb_profiles/tasks/network_profile.yml b/tests/integration/targets/ntnx_ndb_profiles/tasks/network_profile.yml
index 27549a765..87d84ab9e 100644
--- a/tests/integration/targets/ntnx_ndb_profiles/tasks/network_profile.yml
+++ b/tests/integration/targets/ntnx_ndb_profiles/tasks/network_profile.yml
@@ -20,8 +20,7 @@
network:
topology: single
vlans:
- -
- cluster:
+ - cluster:
name: "{{network_profile.single.cluster.name}}"
vlan_name: "{{network_profile.single.vlan_name}}"
enable_ip_address_selection: true
diff --git a/tests/integration/targets/ntnx_ndb_software_profiles/tasks/crud.yml b/tests/integration/targets/ntnx_ndb_software_profiles/tasks/crud.yml
index aec103c53..2879a64f8 100644
--- a/tests/integration/targets/ntnx_ndb_software_profiles/tasks/crud.yml
+++ b/tests/integration/targets/ntnx_ndb_software_profiles/tasks/crud.yml
@@ -17,11 +17,9 @@
- set_fact:
profile1_name: "{{random_name[0]}}"
- profile1_name_updated: "{{random_name[0]}}-updated"
+ profile1_name_updated: "{{random_name[0]}}-updated"
profile2_name: "{{random_name[0]}}2"
-
-
- name: create software profile create spec
check_mode: yes
ntnx_ndb_profiles:
@@ -43,49 +41,39 @@
- uuid: "{{cluster.cluster2.uuid}}"
register: result
-
-
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
"profile_uuid": null,
- "response": {
- "availableClusterIds": [
- "{{cluster.cluster1.uuid}}",
- "{{cluster.cluster2.uuid}}"
- ],
+ "response":
+ {
+ "availableClusterIds":
+ ["{{cluster.cluster1.uuid}}", "{{cluster.cluster2.uuid}}"],
"description": "{{profile1_name}}-desc",
"engineType": "postgres_database",
"name": "{{profile1_name}}",
- "properties": [
- {
- "name": "BASE_PROFILE_VERSION_NAME",
- "value": "v1.0"
- },
+ "properties":
+ [
+ { "name": "BASE_PROFILE_VERSION_NAME", "value": "v1.0" },
{
- "name": "BASE_PROFILE_VERSION_DESCRIPTION",
- "value": "v1.0-desc"
+ "name": "BASE_PROFILE_VERSION_DESCRIPTION",
+ "value": "v1.0-desc",
},
+ { "name": "OS_NOTES", "value": "os_notes" },
+ { "name": "DB_SOFTWARE_NOTES", "value": "db_notes" },
{
- "name": "OS_NOTES",
- "value": "os_notes"
+ "name": "SOURCE_DBSERVER_ID",
+ "value": "{{db_server_vm.uuid}}",
},
- {
- "name": "DB_SOFTWARE_NOTES",
- "value": "db_notes"
- },
- {
- "name": "SOURCE_DBSERVER_ID",
- "value": "{{db_server_vm.uuid}}"
- }
- ],
+ ],
"systemProfile": false,
"topology": "cluster",
- "type": "Software"
- }
- }
+ "type": "Software",
+ },
+ }
- name: check spec for creating software profile
assert:
@@ -115,8 +103,6 @@
- uuid: "{{cluster.cluster2.uuid}}"
register: result
-
-
- set_fact:
clusters: ["{{cluster.cluster1.uuid}}", "{{cluster.cluster2.uuid}}"]
@@ -142,7 +128,6 @@
fail_msg: "Fail: Unable to create software profile with base version and cluster instance topology with replicating to multiple clusters."
success_msg: "Pass: Software profile with base version, cluster instance topology and replicated to multiple clusters created successfully"
-
- name: create software profile with base version and single instance topology
ntnx_ndb_profiles:
name: "{{profile2_name}}"
@@ -162,8 +147,6 @@
- name: "{{cluster.cluster1.name}}"
register: result
-
-
- name: check status of creation
assert:
that:
@@ -185,7 +168,6 @@
fail_msg: "Fail: Unable to create software profile with base version and single instance topology"
success_msg: "Pass: Software profile with base version and single instance topology created successfully"
-
- set_fact:
profile_uuid: "{{result.profile_uuid}}"
@@ -196,8 +178,6 @@
desc: "{{profile1_name}}-desc-updated"
register: result
-
-
- name: check status of creation
assert:
that:
@@ -212,7 +192,6 @@
fail_msg: "Fail: Unable to update software profile"
success_msg: "Pass: Software profile updated successfully"
-
- name: idempotency checks
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -220,8 +199,6 @@
desc: "{{profile1_name}}-desc-updated"
register: result
-
-
- name: check status of creation
assert:
that:
@@ -233,7 +210,7 @@
- result.response.profile.name == "{{profile1_name}}-updated1"
- result.response.profile.description == "{{profile1_name}}-desc-updated"
- fail_msg: "Fail: Update didn't get skipped due to no state changes"
+ fail_msg: "Fail: Update did not get skipped due to no state changes"
success_msg: "Pass: Update skipped successfully due to no state changes"
- name: create software profile version spec
@@ -253,42 +230,41 @@
register: result
- set_fact:
- expected_result: {
+ expected_result:
+ {
"changed": false,
"error": null,
"failed": false,
"profile_type": "software",
"profile_uuid": "{{profile_uuid}}",
- "response": {
- "profile": {
+ "response":
+ {
+ "profile":
+ {
"description": "{{profile1_name}}-desc-updated",
"engineType": "postgres_database",
- "name": "{{profile1_name}}-updated1"
- },
- "version": {
+ "name": "{{profile1_name}}-updated1",
+ },
+ "version":
+ {
"description": "v2.0-desc",
"engineType": "postgres_database",
"name": "v2.0",
- "properties": [
- {
- "name": "OS_NOTES",
- "value": "os_notes for v2"
- },
+ "properties":
+ [
+ { "name": "OS_NOTES", "value": "os_notes for v2" },
+ { "name": "DB_SOFTWARE_NOTES", "value": "db_notes for v2" },
{
- "name": "DB_SOFTWARE_NOTES",
- "value": "db_notes for v2"
+ "name": "SOURCE_DBSERVER_ID",
+ "value": "{{db_server_vm.uuid}}",
},
- {
- "name": "SOURCE_DBSERVER_ID",
- "value": "{{db_server_vm.uuid}}"
- }
- ],
+ ],
"systemProfile": false,
"topology": null,
- "type": "Software"
- }
- }
- }
+ "type": "Software",
+ },
+ },
+ }
- name: check spec for creating spec for software profile version
assert:
@@ -298,7 +274,6 @@
fail_msg: "Fail: Unable to create spec for software profile version create"
success_msg: "Pass: Spec for creating software profile version generated successfully"
-
- name: create software profile version
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -314,8 +289,6 @@
register: result
-
-
- name: check status of version create
assert:
that:
@@ -349,8 +322,6 @@
register: result
-
-
- name: check status of spec
assert:
that:
@@ -366,7 +337,6 @@
fail_msg: "Fail: Unable to create spec for updating software profile version"
success_msg: "Pass: Spec for updating software profile version created successfully"
-
- name: update software profile version
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -378,8 +348,6 @@
register: result
-
-
- name: check status of update
assert:
that:
@@ -401,7 +369,6 @@
fail_msg: "Fail: Unable to update software profile version"
success_msg: "Pass: Software profile version updated successfully"
-
- set_fact:
version_uuid: "{{result.version_uuid}}"
@@ -413,8 +380,6 @@
publish: True
register: result
-
-
- name: check status of update
assert:
that:
@@ -455,7 +420,6 @@
fail_msg: "Fail: Unable to unpublish software profile version"
success_msg: "Pass: Software version unpublished successfully"
-
- name: deprecate software profile version
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -464,8 +428,6 @@
deprecate: True
register: result
-
-
- name: check status of update
assert:
that:
@@ -482,8 +444,6 @@
fail_msg: "Fail: Unable to deprecate software profile version"
success_msg: "Pass: Software version deprecated successfully"
-
-
- name: delete software profile version
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -492,7 +452,6 @@
state: "absent"
register: result
-
- name: check status of update
assert:
that:
@@ -506,7 +465,6 @@
fail_msg: "Fail: Unable to delete software profile version"
success_msg: "Pass: Software version deleted successfully"
-
- name: replicate software profile
ntnx_ndb_profiles:
profile_uuid: "{{profile_uuid}}"
@@ -518,7 +476,6 @@
ansible.builtin.pause:
minutes: 3
-
- set_fact:
clusters: {}
@@ -551,7 +508,6 @@
state: "absent"
register: result
-
- name: check status of delete
assert:
that:
diff --git a/tests/integration/targets/ntnx_ndb_vlans/tasks/create_vlans.yml b/tests/integration/targets/ntnx_ndb_vlans/tasks/create_vlans.yml
index adcdcc300..9a4aaa3f1 100644
--- a/tests/integration/targets/ntnx_ndb_vlans/tasks/create_vlans.yml
+++ b/tests/integration/targets/ntnx_ndb_vlans/tasks/create_vlans.yml
@@ -413,7 +413,7 @@
################################################################
-- name: Delete all created vlan's
+- name: Delete all created vlans
ntnx_ndb_vlans:
state: absent
vlan_uuid: "{{ item }}"
@@ -429,6 +429,6 @@
- result.changed == true
- result.msg == "All items completed"
fail_msg: "unable to delete all created vlan's"
- success_msg: "All vlan'sdeleted successfully"
+ success_msg: "All vlans deleted successfully"
- set_fact:
todelete: []
diff --git a/tests/integration/targets/ntnx_ndb_vlans/tasks/negativ_scenarios.yml b/tests/integration/targets/ntnx_ndb_vlans/tasks/negativ_scenarios.yml
index ad41fd7eb..d580388d1 100644
--- a/tests/integration/targets/ntnx_ndb_vlans/tasks/negativ_scenarios.yml
+++ b/tests/integration/targets/ntnx_ndb_vlans/tasks/negativ_scenarios.yml
@@ -1,16 +1,15 @@
---
- debug:
- msg: Start negative secanrios ntnx_ndb_vlans
+ msg: Start negative scenarios ntnx_ndb_vlans
- name: create Dhcp ndb vlan with static Configuration
ntnx_ndb_vlans:
- name: "{{ndb_vlan.name}}"
+ name: "{{ndb_vlan.name}}"
vlan_type: DHCP
gateway: "{{ndb_vlan.gateway}}"
subnet_mask: "{{ndb_vlan.subnet_mask}}"
ip_pools:
- -
- start_ip: "{{ndb_vlan.ip_pools.0.start_ip}}"
+ - start_ip: "{{ndb_vlan.ip_pools.0.start_ip}}"
end_ip: "{{ndb_vlan.ip_pools.0.end_ip}}"
primary_dns: "{{ndb_vlan.primary_dns}}"
secondary_dns: "{{ndb_vlan.secondary_dns}}"
@@ -26,11 +25,11 @@
- result.failed == true
- result.msg == "Failed generating create vlan spec"
fail_msg: "fail: create Dhcp ndb vlan with static Configuration finished successfully"
- success_msg: "pass: Returnerd error as expected"
+ success_msg: "pass: Returned error as expected"
# ###############################
- name: create static ndb vlan with missing Configuration
ntnx_ndb_vlans:
- name: "{{ndb_vlan.name}}"
+ name: "{{ndb_vlan.name}}"
vlan_type: Static
gateway: "{{ndb_vlan.gateway}}"
register: result
@@ -44,12 +43,12 @@
- result.failed == true
- result.msg == "Failed generating create vlan spec"
fail_msg: "fail: create static ndb vlan with missing Configuration finished successfully"
- success_msg: "pass: Returnerd error as expected"
+ success_msg: "pass: Returned error as expected"
###########
- name: create Dhcp ndb vlan
ntnx_ndb_vlans:
- name: "{{ndb_vlan.name}}"
+ name: "{{ndb_vlan.name}}"
vlan_type: DHCP
cluster:
uuid: "{{cluster.cluster2.uuid}}"
@@ -80,11 +79,9 @@
gateway: "{{ndb_vlan.gateway}}"
subnet_mask: "{{ndb_vlan.subnet_mask}}"
ip_pools:
- -
- start_ip: "{{ndb_vlan.ip_pools.0.start_ip}}"
+ - start_ip: "{{ndb_vlan.ip_pools.0.start_ip}}"
end_ip: "{{ndb_vlan.ip_pools.0.end_ip}}"
- -
- start_ip: "{{ndb_vlan.ip_pools.1.start_ip}}"
+ - start_ip: "{{ndb_vlan.ip_pools.1.start_ip}}"
end_ip: "{{ndb_vlan.ip_pools.1.end_ip}}"
primary_dns: "{{ndb_vlan.primary_dns}}"
secondary_dns: "{{ndb_vlan.secondary_dns}}"
@@ -100,11 +97,11 @@
- result.failed == true
- result.msg == "Failed generating update vlan spec"
fail_msg: "fail: update dhcp ndb vlan with static Configuration finished successfully"
- success_msg: "pass: Returnerd error as expected"
+ success_msg: "pass: Returned error as expected"
##################################
-- name: Delete all created vlan's
+- name: Delete all created vlans
ntnx_ndb_vlans:
state: absent
vlan_uuid: "{{ item }}"
@@ -120,7 +117,7 @@
- result.changed == true
- result.msg == "All items completed"
fail_msg: "unable to delete all created vlan's"
- success_msg: "All vlan'sdeleted successfully"
+ success_msg: "All vlans deleted successfully"
- set_fact:
todelete: []
diff --git a/tests/integration/targets/ntnx_ova/tasks/create_ova.yml b/tests/integration/targets/ntnx_ova/tasks/create_ova.yml
index 8b66c26a8..f9dea0812 100644
--- a/tests/integration/targets/ntnx_ova/tasks/create_ova.yml
+++ b/tests/integration/targets/ntnx_ova/tasks/create_ova.yml
@@ -3,10 +3,10 @@
- name: VM with minimum requirements
ntnx_vms:
- state: present
- name: integration_test_ova_vm
- cluster:
- name: "{{ cluster.name }}"
+ state: present
+ name: integration_test_ova_vm
+ cluster:
+ name: "{{ cluster.name }}"
register: vm
ignore_errors: true
@@ -15,14 +15,14 @@
that:
- vm.response is defined
- vm.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to create VM with minimum requirements '
- success_msg: 'Success: VM with minimum requirements created successfully '
+ fail_msg: "Fail: Unable to create VM with minimum requirements "
+ success_msg: "Success: VM with minimum requirements created successfully "
#########################################
- name: create_ova_image with check mode
ntnx_vms_ova:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- name: integration_test_VMDK_ova
- file_format: VMDK
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ name: integration_test_VMDK_ova
+ file_format: VMDK
register: result
ignore_errors: true
check_mode: yes
@@ -34,14 +34,14 @@
- result.changed == false
- result.failed == false
- result.task_uuid != ""
- success_msg: ' Success: returned as expected '
- fail_msg: ' Fail: create_ova_image with check mode '
+ success_msg: " Success: returned as expected "
+ fail_msg: " Fail: create_ova_image with check mode "
#########################################
- name: create QCOW2 ova_image
ntnx_vms_ova:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- name: integration_test_QCOW2_ova
- file_format: QCOW2
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ name: integration_test_QCOW2_ova
+ file_format: QCOW2
register: result
ignore_errors: true
@@ -50,14 +50,14 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to create QCOW2 ova_image '
- success_msg: 'Success: create QCOW2 ova_image successfully '
+ fail_msg: "Fail: Unable to create QCOW2 ova_image "
+ success_msg: "Success: create QCOW2 ova_image successfully "
#########################################
- name: create VMDK ova_image
ntnx_vms_ova:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- name: integration_test_VMDK_ova
- file_format: VMDK
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ name: integration_test_VMDK_ova
+ file_format: VMDK
register: result
ignore_errors: true
@@ -66,8 +66,8 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to create VMDK ova_image '
- success_msg: 'Success: create VMDK ova_image successfully '
+ fail_msg: "Fail: Unable to create VMDK ova_image "
+ success_msg: "Success: create VMDK ova_image successfully "
#########################################
- name: Delete all Created VMs
ntnx_vms:
diff --git a/tests/integration/targets/ntnx_projects/tasks/projects_with_role_mappings.yml b/tests/integration/targets/ntnx_projects/tasks/projects_with_role_mappings.yml
index dca268c0d..ed85b7b80 100644
--- a/tests/integration/targets/ntnx_projects/tasks/projects_with_role_mappings.yml
+++ b/tests/integration/targets/ntnx_projects/tasks/projects_with_role_mappings.yml
@@ -54,7 +54,7 @@
################################################################
-- name: Creat project with all specs
+- name: Create project with all specs
ntnx_projects:
name: "{{project2_name}}"
desc: desc-123
@@ -96,7 +96,8 @@
ignore_errors: "{{ignore_errors}}"
- set_fact:
- expected_subnets: ["{{ network.dhcp.uuid }}", "{{ static.uuid }}", "{{ overlay.uuid }}"]
+ expected_subnets:
+ ["{{ network.dhcp.uuid }}", "{{ static.uuid }}", "{{ overlay.uuid }}"]
response_acp: "{{result.response.status.access_control_policy_list_status[0].access_control_policy_status.resources}}"
- name: Creation Status
@@ -124,14 +125,12 @@
fail_msg: "Unable to create project with all specifications"
success_msg: "Project with all specifications created successfully"
-
- set_fact:
todelete: "{{ todelete + [ result.project_uuid ] }}"
- set_fact:
user_group_to_delete: "{{result.response.status.project_status.resources.external_user_group_reference_list[0].uuid}}"
-
- name: Update Project role mappings and subnets and quotas
ntnx_projects:
project_uuid: "{{result.project_uuid}}"
@@ -171,38 +170,42 @@
- set_fact:
response_acp: "{{result.response.status.access_control_policy_list_status[0].access_control_policy_status.resources}}"
- set_fact:
- acp_users: ["{{response_acp.user_reference_list[0].uuid}}", "{{response_acp.user_reference_list[1].uuid}}"]
+ acp_users:
+ [
+ "{{response_acp.user_reference_list[0].uuid}}",
+ "{{response_acp.user_reference_list[1].uuid}}",
+ ]
- set_fact:
- sorted_acp_users: '{{ acp_users | sort() }}'
+ sorted_acp_users: "{{ acp_users | sort() }}"
- set_fact:
expected_users: ["{{users[0]}}", "{{users[1]}}"]
- set_fact:
- expected_users_sorted: '{{ expected_users | sort() }}'
+ expected_users_sorted: "{{ expected_users | sort() }}"
- set_fact:
- project_user_reference_list: ["{{result.response.status.project_status.resources.user_reference_list[0].uuid}}", "{{result.response.status.project_status.resources.user_reference_list[1].uuid}}"]
+ project_user_reference_list:
+ [
+ "{{result.response.status.project_status.resources.user_reference_list[0].uuid}}",
+ "{{result.response.status.project_status.resources.user_reference_list[1].uuid}}",
+ ]
- set_fact:
- project_user_references_sorted: '{{ project_user_reference_list|sort() }}'
+ project_user_references_sorted: "{{ project_user_reference_list|sort() }}"
- set_fact:
- expected_quotas: [
- {
- "limit": 5,
- "resource_type": "VCPUS",
- "units": "COUNT",
- "value": 0
- },
- {
- "limit": 2147483648,
- "resource_type": "STORAGE",
- "units": "BYTES",
- "value": 0
- },
- {
- "limit": 2147483648,
- "resource_type": "MEMORY",
- "units": "BYTES",
- "value": 0
- }
- ]
+ expected_quotas:
+ [
+ { "limit": 5, "resource_type": "VCPUS", "units": "COUNT", "value": 0 },
+ {
+ "limit": 2147483648,
+ "resource_type": "STORAGE",
+ "units": "BYTES",
+ "value": 0,
+ },
+ {
+ "limit": 2147483648,
+ "resource_type": "MEMORY",
+ "units": "BYTES",
+ "value": 0,
+ },
+ ]
- set_fact:
quotas: "{{result.response.status.project_status.resources.resource_domain.resources}}"
@@ -276,10 +279,9 @@
that:
- result.changed == false
- "'Nothing to update' in result.msg"
- fail_msg: "Project update didn't got skipped for update spec same as existing project"
+ fail_msg: "Project update did not get skipped for update spec same as existing project"
success_msg: "Project got skipped successfully for no change in spec"
-
- name: Create project with existing name
ntnx_projects:
name: "{{project3_name}}"
@@ -296,13 +298,12 @@
register: result
ignore_errors: true
-
- name: Creation Status
assert:
that:
- result.changed == false
- "'Project with given name already exists' in result.msg"
- fail_msg: "Project creation didn't failed for existing name"
+ fail_msg: "Project creation did not fail for existing name"
success_msg: "Project creation failed as expected"
#################################################################
diff --git a/tests/integration/targets/ntnx_projects/tasks/update_project.yml b/tests/integration/targets/ntnx_projects/tasks/update_project.yml
index 1919e2c8c..6557f5384 100644
--- a/tests/integration/targets/ntnx_projects/tasks/update_project.yml
+++ b/tests/integration/targets/ntnx_projects/tasks/update_project.yml
@@ -12,7 +12,6 @@
- set_fact:
project1_name: "{{random_name}}{{suffix_name}}1"
-
- name: Create Project
ntnx_projects:
name: "{{project1_name}}"
@@ -154,7 +153,7 @@
that:
- result.changed == false
- "'Nothing to update' in result.msg"
- fail_msg: "Project update didn't got skipped for update spec same as existing project"
+ fail_msg: "Project update did not get skipped for update spec same as existing project"
success_msg: "Project got skipped successfully for no change in spec"
#################################################################
diff --git a/tests/integration/targets/ntnx_protection_rules/tasks/protection_rules.yml b/tests/integration/targets/ntnx_protection_rules/tasks/protection_rules.yml
index 0c2d9e7ce..b03931262 100644
--- a/tests/integration/targets/ntnx_protection_rules/tasks/protection_rules.yml
+++ b/tests/integration/targets/ntnx_protection_rules/tasks/protection_rules.yml
@@ -107,7 +107,7 @@
success_msg: "Protection policy with with synchronous schedule created successfully"
-- name: Delete created protection policy inorder to avoid conflict in further tests
+- name: Delete created protection policy in order to avoid conflict in further tests
ntnx_protection_rules:
state: absent
wait: True
diff --git a/tests/integration/targets/ntnx_recovery_plans_and_jobs/tasks/crud.yml b/tests/integration/targets/ntnx_recovery_plans_and_jobs/tasks/crud.yml
index 7b8f22eb5..75504ef2b 100644
--- a/tests/integration/targets/ntnx_recovery_plans_and_jobs/tasks/crud.yml
+++ b/tests/integration/targets/ntnx_recovery_plans_and_jobs/tasks/crud.yml
@@ -5,166 +5,163 @@
############################################################### CREATE Recovery Plan ###########################################################################################
- set_fact:
- expected_availability_zone_list: [
+ expected_availability_zone_list:
+ [
+ { "availability_zone_url": "{{dr.primary_az_url}}" },
+ { "availability_zone_url": "{{dr.recovery_az_url}}" },
+ ]
+ expected_network_mapping_list_for_check_mode:
+ [
+ {
+ "are_networks_stretched": True,
+ "availability_zone_network_mapping_list":
+ [
{
- "availability_zone_url": "{{dr.primary_az_url}}"
+ "availability_zone_url": "{{dr.primary_az_url}}",
+ "recovery_network": { "name": "{{network.dhcp.name}}" },
+ "test_network": { "name": "{{network.dhcp.name}}" },
},
{
- "availability_zone_url": "{{dr.recovery_az_url}}"
- }
- ]
- expected_network_mapping_list_for_check_mode: [
- {
- "are_networks_stretched": True,
- "availability_zone_network_mapping_list": [
- {
- "availability_zone_url": "{{dr.primary_az_url}}",
- "recovery_network": {
- "name": "{{network.dhcp.name}}"
- },
- "test_network": {
- "name": "{{network.dhcp.name}}"
- }
- },
- {
- "availability_zone_url": "{{dr.recovery_az_url}}",
- "recovery_network": {
- "name": "{{dr.recovery_site_network}}"
- },
- "test_network": {
- "name": "{{dr.recovery_site_network}}"
- }
- }
- ]
- }
- ]
- expected_network_mapping_list: [
- {
- "are_networks_stretched": False,
- "availability_zone_network_mapping_list": [
- {
- "availability_zone_url": "{{dr.primary_az_url}}",
- "recovery_ip_assignment_list": [
- {
- "ip_config_list": [
- {
- "ip_address": "{{dr.recovery_ip2}}"
- }
- ],
- "vm_reference": {
- "kind": "vm",
- "name": "{{dr_vm_name}}",
- "uuid": "{{dr_vm.uuid}}"
- }
- }
- ],
- "recovery_network": {
- "name": "{{network.dhcp.name}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{dr.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- },
- "test_ip_assignment_list": [
- {
- "ip_config_list": [
- {
- "ip_address": "{{dr.recovery_ip1}}"
- }
- ],
- "vm_reference": {
- "kind": "vm",
- "name": "{{dr_vm_name}}",
- "uuid": "{{dr_vm.uuid}}"
- }
- }
- ],
- "test_network": {
- "name": "{{network.dhcp.name}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{dr.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- }
- },
- {
- "availability_zone_url": "{{dr.recovery_az_url}}",
- "recovery_ip_assignment_list": [
- {
- "ip_config_list": [
- {
- "ip_address": "{{dr.recovery_ip2}}"
- }
- ],
- "vm_reference": {
- "kind": "vm",
- "name": "{{dr_vm_name}}",
- "uuid": "{{dr_vm.uuid}}"
- }
- }
- ],
- "recovery_network": {
- "name": "{{dr.recovery_site_network}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{dr.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- },
- "test_ip_assignment_list": [
- {
- "ip_config_list": [
- {
- "ip_address": "{{dr.recovery_ip1}}"
- }
- ],
- "vm_reference": {
- "kind": "vm",
- "name": "{{dr_vm_name}}",
- "uuid": "{{dr_vm.uuid}}"
- }
- }
- ],
- "test_network": {
- "name": "{{dr.recovery_site_network}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{dr.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- }
- }
- ]
- }
- ]
- expected_stage_work_0: {
- "recover_entities": {
- "entity_info_list": [
- {
- "any_entity_reference": {
- "kind": "vm",
- "name": "{{dr_vm_name}}",
- "uuid": "{{dr_vm.uuid}}"
- },
- "script_list": [
- {
- "enable_script_exec": true
- }
- ]
- }
- ]
- }
- }
+ "availability_zone_url": "{{dr.recovery_az_url}}",
+ "recovery_network": { "name": "{{dr.recovery_site_network}}" },
+ "test_network": { "name": "{{dr.recovery_site_network}}" },
+ },
+ ],
+ },
+ ]
+ expected_network_mapping_list:
+ [
+ {
+          "are_networks_stretched": false,
+ "availability_zone_network_mapping_list":
+ [
+ {
+ "availability_zone_url": "{{dr.primary_az_url}}",
+ "recovery_ip_assignment_list":
+ [
+ {
+ "ip_config_list":
+ [{ "ip_address": "{{dr.recovery_ip2}}" }],
+ "vm_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm_name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ },
+ ],
+ "recovery_network":
+ {
+ "name": "{{network.dhcp.name}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{dr.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ "test_ip_assignment_list":
+ [
+ {
+ "ip_config_list":
+ [{ "ip_address": "{{dr.recovery_ip1}}" }],
+ "vm_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm_name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ },
+ ],
+ "test_network":
+ {
+ "name": "{{network.dhcp.name}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{dr.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ },
+ {
+ "availability_zone_url": "{{dr.recovery_az_url}}",
+ "recovery_ip_assignment_list":
+ [
+ {
+ "ip_config_list":
+ [{ "ip_address": "{{dr.recovery_ip2}}" }],
+ "vm_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm_name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ },
+ ],
+ "recovery_network":
+ {
+ "name": "{{dr.recovery_site_network}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{dr.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ "test_ip_assignment_list":
+ [
+ {
+ "ip_config_list":
+ [{ "ip_address": "{{dr.recovery_ip1}}" }],
+ "vm_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm_name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ },
+ ],
+ "test_network":
+ {
+ "name": "{{dr.recovery_site_network}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{dr.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ },
+ ],
+ },
+ ]
+ expected_stage_work_0:
+ {
+ "recover_entities":
+ {
+ "entity_info_list":
+ [
+ {
+ "any_entity_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm_name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ "script_list": [{ "enable_script_exec": true }],
+ },
+ ],
+ },
+ }
- name: Create checkmode spec for recovery plan with networks and 2 stage
check_mode: yes
@@ -195,7 +192,6 @@
name: "{{dr.recovery_site_network}}"
register: result
-
- name: Checkmode spec assert
assert:
that:
@@ -209,8 +205,8 @@
- result.response.spec.resources.stage_list[0]["stage_work"] == expected_stage_work_0
- result.response.spec.resources.parameters.availability_zone_list == expected_availability_zone_list
- result.response.spec.resources.parameters.network_mapping_list == expected_network_mapping_list_for_check_mode
- fail_msg: 'Unable to create recovery plan check mode spec'
- success_msg: 'Recovery plan check mode spec created successfully'
+ fail_msg: "Unable to create recovery plan check mode spec"
+ success_msg: "Recovery plan check mode spec created successfully"
- name: Create recovery plan with networks and 2 stage
ntnx_recovery_plans:
@@ -284,135 +280,124 @@
- result.response.status.resources.stage_list[0]["stage_work"] == expected_stage_work_0
- result.response.status.resources.parameters.availability_zone_list == expected_availability_zone_list
- result.response.status.resources.parameters.network_mapping_list == expected_network_mapping_list
- fail_msg: 'Unable to create recovery plans'
- success_msg: 'Recovery plan created successfully'
+ fail_msg: "Unable to create recovery plans"
+ success_msg: "Recovery plan created successfully"
############################################################### Update Recovery Plan ###########################################################################################
- set_fact:
- expected_availability_zone_list: [
+ expected_availability_zone_list:
+ [
+ { "availability_zone_url": "{{dr.primary_az_url}}" },
+ { "availability_zone_url": "{{dr.recovery_az_url}}" },
+ ]
+ expected_network_mapping_list_in_check_mode:
+ [
+ {
+ "are_networks_stretched": false,
+ "availability_zone_network_mapping_list":
+ [
+ {
+ "availability_zone_url": "{{dr.primary_az_url}}",
+ "recovery_network":
+ {
+ "name": "{{static.name}}",
+ "subnet_list":
+ [
+ {
+ "gateway_ip": "{{static.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ "test_network":
+ {
+ "name": "{{static.name}}",
+ "subnet_list":
+ [
+ {
+ "gateway_ip": "{{static.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ },
+ {
+ "availability_zone_url": "{{dr.recovery_az_url}}",
+ "recovery_network": { "name": "{{dr.recovery_site_network}}" },
+ "test_network": { "name": "{{dr.recovery_site_network}}" },
+ },
+ ],
+ },
+ ]
+ expected_network_mapping_list:
+ [
+ {
+ "are_networks_stretched": false,
+ "availability_zone_network_mapping_list":
+ [
{
- "availability_zone_url": "{{dr.primary_az_url}}"
+ "availability_zone_url": "{{dr.primary_az_url}}",
+ "recovery_network":
+ {
+ "name": "{{static.name}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{static.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
+ "test_network":
+ {
+ "name": "{{static.name}}",
+ "subnet_list":
+ [
+ {
+ "external_connectivity_state": "DISABLED",
+ "gateway_ip": "{{static.gateway_ip}}",
+ "prefix_length": 24,
+ },
+ ],
+ },
},
{
- "availability_zone_url": "{{dr.recovery_az_url}}"
- }
- ]
- expected_network_mapping_list_in_check_mode: [
- {
- "are_networks_stretched": false,
- "availability_zone_network_mapping_list": [
- {
- "availability_zone_url": "{{dr.primary_az_url}}",
- "recovery_network": {
- "name": "{{static.name}}",
- "subnet_list": [
- {
- "gateway_ip": "{{static.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- },
- "test_network": {
- "name": "{{static.name}}",
- "subnet_list": [
- {
- "gateway_ip": "{{static.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- }
- },
- {
- "availability_zone_url": "{{dr.recovery_az_url}}",
- "recovery_network": {
- "name": "{{dr.recovery_site_network}}"
- },
- "test_network": {
- "name": "{{dr.recovery_site_network}}"
- }
- }
- ]
- }
- ]
- expected_network_mapping_list: [
- {
- "are_networks_stretched": false,
- "availability_zone_network_mapping_list": [
- {
- "availability_zone_url": "{{dr.primary_az_url}}",
- "recovery_network": {
- "name": "{{static.name}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{static.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- },
- "test_network": {
- "name": "{{static.name}}",
- "subnet_list": [
- {
- "external_connectivity_state": "DISABLED",
- "gateway_ip": "{{static.gateway_ip}}",
- "prefix_length": 24
- }
- ]
- }
- },
- {
- "availability_zone_url": "{{dr.recovery_az_url}}",
- "recovery_network": {
- "name": "{{dr.recovery_site_network}}"
- },
- "test_network": {
- "name": "{{dr.recovery_site_network}}"
- }
- }
- ]
- }
- ]
- exepected_stage_work_0: {
- "recover_entities": {
- "entity_info_list": [
- {
- "any_entity_reference": {
- "kind": "vm",
- "name": "{{dr_vm.name}}",
- "uuid": "{{dr_vm.uuid}}"
- },
- "script_list": [
- {
- "enable_script_exec": true
- }
- ]
- },
- {
- "categories": {
- "Environment": "Staging"
- },
- "script_list": [
- {
- "enable_script_exec": true
- }
- ]
- }
- ]
- }
- }
- exepected_stage_work_1: {
- "recover_entities": {
- "entity_info_list": [
- {
- "categories": {
- "Environment": "Dev"
- }
- }
- ]
- }
- }
+ "availability_zone_url": "{{dr.recovery_az_url}}",
+ "recovery_network": { "name": "{{dr.recovery_site_network}}" },
+ "test_network": { "name": "{{dr.recovery_site_network}}" },
+ },
+ ],
+ },
+ ]
+ expected_stage_work_0:
+ {
+ "recover_entities":
+ {
+ "entity_info_list":
+ [
+ {
+ "any_entity_reference":
+ {
+ "kind": "vm",
+ "name": "{{dr_vm.name}}",
+ "uuid": "{{dr_vm.uuid}}",
+ },
+ "script_list": [{ "enable_script_exec": true }],
+ },
+ {
+ "categories": { "Environment": "Staging" },
+ "script_list": [{ "enable_script_exec": true }],
+ },
+ ],
+ },
+ }
+ expected_stage_work_1:
+ {
+ "recover_entities":
+ { "entity_info_list": [{ "categories": { "Environment": "Dev" } }] },
+ }
- name: Checkmode spec for Update recovery plan. Update networks and stages.
check_mode: yes
@@ -466,13 +451,12 @@
- result.response.spec.description == "test-integration-rp-desc-updated"
- result.response.spec.resources.parameters.availability_zone_list == expected_availability_zone_list
- result.response.spec.resources.parameters.network_mapping_list == expected_network_mapping_list_in_check_mode
- - result.response.spec.resources.stage_list[0]["stage_work"] == exepected_stage_work_0
- - result.response.spec.resources.stage_list[1]["stage_work"] == exepected_stage_work_1
+ - result.response.spec.resources.stage_list[0]["stage_work"] == expected_stage_work_0
+ - result.response.spec.resources.stage_list[1]["stage_work"] == expected_stage_work_1
- result.response.spec.resources.stage_list[0]["delay_time_secs"] == 2
- fail_msg: 'Unable to create update recovery plan checkmode spec'
- success_msg: 'Recovery plan update spec created successfully'
-
+ fail_msg: "Unable to create update recovery plan checkmode spec"
+ success_msg: "Recovery plan update spec created successfully"
- name: Update recovery plan. Add another stage, vm and update networks.
ntnx_recovery_plans:
@@ -526,13 +510,12 @@
- recovery_plan.response.status.description == "test-integration-rp-desc-updated"
- recovery_plan.response.status.resources.parameters.availability_zone_list == expected_availability_zone_list
- recovery_plan.response.status.resources.parameters.network_mapping_list == expected_network_mapping_list
- - recovery_plan.response.status.resources.stage_list[0]["stage_work"] == exepected_stage_work_0
- - recovery_plan.response.status.resources.stage_list[1]["stage_work"] == exepected_stage_work_1
+ - recovery_plan.response.status.resources.stage_list[0]["stage_work"] == expected_stage_work_0
+ - recovery_plan.response.status.resources.stage_list[1]["stage_work"] == expected_stage_work_1
- recovery_plan.response.status.resources.stage_list[0]["delay_time_secs"] == 2
- fail_msg: 'Unable to update recovery plans'
- success_msg: 'Recovery plan updated successfully'
-
+ fail_msg: "Unable to update recovery plans"
+ success_msg: "Recovery plan updated successfully"
- name: Idempotency Check
ntnx_recovery_plans:
@@ -587,7 +570,6 @@
############################################################### Run Recovery Plan Jobs###########################################################################################
-
- name: Run Test Failover with validation errors for checking negative scenario. It will fail in validation phase
ntnx_recovery_plan_jobs:
nutanix_host: "{{recovery_site_ip}}"
@@ -632,7 +614,6 @@
register: test_failover_job
-
- name: assert job status
assert:
that:
@@ -649,7 +630,6 @@
fail_msg: "Test failover job failed"
success_msg: "Test failover job run successfully"
-
- name: Run Cleanup
ntnx_recovery_plan_jobs:
job_uuid: "{{test_failover_job.job_uuid}}"
@@ -658,7 +638,6 @@
action: CLEANUP
register: result
-
- name: assert job status
assert:
that:
diff --git a/tests/integration/targets/ntnx_roles/tasks/create.yml b/tests/integration/targets/ntnx_roles/tasks/create.yml
index 541965519..60fab7013 100644
--- a/tests/integration/targets/ntnx_roles/tasks/create.yml
+++ b/tests/integration/targets/ntnx_roles/tasks/create.yml
@@ -44,7 +44,7 @@
- ("{{ p1 }}" == "{{ test_permission_1_uuid }}" and "{{ p2 }}" == "{{ test_permission_2_uuid }}") or ("{{ p2 }}" == "{{ test_permission_1_uuid }}" and "{{ p1 }}" == "{{ test_permission_2_uuid }}")
fail_msg: "Unable to create roles with certain permissions"
- success_msg: "Roles with given permissions created susccessfully"
+ success_msg: "Roles with given permissions created successfully"
- set_fact:
todelete: '{{ result["response"]["metadata"]["uuid"] }}'
@@ -99,7 +99,6 @@
###################################################################################################
-
- name: cleanup created entities
ntnx_roles:
state: absent
@@ -107,6 +106,5 @@
register: result
ignore_errors: True
-
- set_fact:
todelete: []
diff --git a/tests/integration/targets/ntnx_roles/tasks/delete.yml b/tests/integration/targets/ntnx_roles/tasks/delete.yml
index 3d4f00410..c13f6e01e 100644
--- a/tests/integration/targets/ntnx_roles/tasks/delete.yml
+++ b/tests/integration/targets/ntnx_roles/tasks/delete.yml
@@ -28,7 +28,7 @@
- test_role.response is defined
- test_role.changed == True
fail_msg: "Unable to create roles with certain permissions"
- success_msg: "Roles with given permissions created susccessfully"
+ success_msg: "Roles with given permissions created successfully"
###################################################################################################
diff --git a/tests/integration/targets/ntnx_roles/tasks/update.yml b/tests/integration/targets/ntnx_roles/tasks/update.yml
index 16644d37e..81b9c20ca 100644
--- a/tests/integration/targets/ntnx_roles/tasks/update.yml
+++ b/tests/integration/targets/ntnx_roles/tasks/update.yml
@@ -34,8 +34,7 @@
- test_role.response is defined
- test_role.changed == True
fail_msg: "Unable to create roles with certain permissions"
- success_msg: "Roles with given permissions created susccessfully"
-
+ success_msg: "Roles with given permissions created successfully"
###################################################################################################
@@ -63,7 +62,7 @@
- result.response.status.resources.permission_reference_list | length == 1
fail_msg: "Unable to update role"
- success_msg: "Roles with given permissions updated susccessfully"
+ success_msg: "Roles with given permissions updated successfully"
###################################################################################################
diff --git a/tests/integration/targets/ntnx_security_rules/tasks/app_rule.yml b/tests/integration/targets/ntnx_security_rules/tasks/app_rule.yml
index 0e9b038e3..b2ebfe70f 100644
--- a/tests/integration/targets/ntnx_security_rules/tasks/app_rule.yml
+++ b/tests/integration/targets/ntnx_security_rules/tasks/app_rule.yml
@@ -87,7 +87,7 @@
fail_msg: ' fail: unable to create app security rule with inbound and outbound list'
success_msg: 'pass: create app security rule with inbound and outbound list successfully'
-- name: update app security rule by adding to outbound list and remove tule from inbound list
+- name: update app security rule by adding to outbound list and remove rule from inbound list
ntnx_security_rules:
security_rule_uuid: '{{ result.response.metadata.uuid }}'
app_rule:
diff --git a/tests/integration/targets/ntnx_security_rules/tasks/isolation_rule.yml b/tests/integration/targets/ntnx_security_rules/tasks/isolation_rule.yml
index 5a7243409..682b58280 100644
--- a/tests/integration/targets/ntnx_security_rules/tasks/isolation_rule.yml
+++ b/tests/integration/targets/ntnx_security_rules/tasks/isolation_rule.yml
@@ -5,11 +5,11 @@
name: test_isolation_rule
isolation_rule:
isolate_category:
- Environment:
- - Dev
+ Environment:
+ - Dev
from_category:
- Environment:
- - Production
+ Environment:
+ - Production
subset_category:
Environment:
- Staging
@@ -26,7 +26,7 @@
- result.changed == false
- result.response.spec.name=="test_isolation_rule"
- result.security_rule_uuid is none
- fail_msg: ' fail: unable to create isolation security rule with first_entity_filter and second_entity_filter with check mode '
+ fail_msg: " fail: unable to create isolation security rule with first_entity_filter and second_entity_filter with check mode "
success_msg: >-
pass: create isolation security rule with first_entity_filter and
second_entity_filter successfully with check mode
@@ -37,11 +37,11 @@
name: test_isolation_rule
isolation_rule:
isolate_category:
- Environment:
- - Dev
+ Environment:
+ - Dev
from_category:
- Environment:
- - Production
+ Environment:
+ - Production
subset_category:
Environment:
- Staging
@@ -57,14 +57,14 @@
- result.failed == false
- result.response.spec.name=="test_isolation_rule"
- result.response.status.state == 'COMPLETE'
- fail_msg: ' fail: unable to create isolation security rule with first_entity_filter and second_entity_filter'
+ fail_msg: " fail: unable to create isolation security rule with first_entity_filter and second_entity_filter"
success_msg: >-
pass: create isolation security rule with first_entity_filter and
second_entity_filter successfully
-- name: update isoloation security rule action with check_mode
+- name: update isolation security rule action with check_mode
ntnx_security_rules:
- security_rule_uuid: '{{ result.response.metadata.uuid }}'
+ security_rule_uuid: "{{ result.response.metadata.uuid }}"
isolation_rule:
policy_mode: APPLY
register: output
@@ -79,13 +79,13 @@
- output.changed == false
- output.response.spec.name=="test_isolation_rule"
- output.security_rule_uuid is none
- fail_msg: ' fail: unable to update isoloation security rule action with check_mode'
+ fail_msg: " fail: unable to update isolation security rule action with check_mode"
success_msg: >-
- pass: update isoloation security rule action with check_mode successfully
+ pass: update isolation security rule action with check_mode successfully
-- name: update isoloation security rule action
+- name: update isolation security rule action
ntnx_security_rules:
- security_rule_uuid: '{{ result.security_rule_uuid}}'
+ security_rule_uuid: "{{ result.security_rule_uuid}}"
isolation_rule:
policy_mode: APPLY
register: result
@@ -99,11 +99,11 @@
- result.changed == true
- result.response.status.state == 'COMPLETE'
- result.response.spec.resources.isolation_rule.action == "APPLY"
- fail_msg: ' fail: unable to update isolation rule action '
- success_msg: 'pass : update isolation rule action successfully'
-- name: update isoloation security with same values
+ fail_msg: " fail: unable to update isolation rule action "
+    success_msg: "pass: update isolation rule action successfully"
+- name: update isolation security rule with same values
ntnx_security_rules:
- security_rule_uuid: '{{result.security_rule_uuid}}'
+ security_rule_uuid: "{{result.security_rule_uuid}}"
isolation_rule:
policy_mode: APPLY
register: output
@@ -114,12 +114,12 @@
- output.failed == false
- output.changed == false
- output.msg == "Nothing to change"
- fail_msg: ' fail: unable to update isolation rule action '
- success_msg: 'pass : update isolation rule action successfully'
+ fail_msg: " fail: unable to update isolation rule action "
+    success_msg: "pass: update isolation rule action successfully"
- name: delete isolation rule
ntnx_security_rules:
state: absent
- security_rule_uuid: '{{ result.security_rule_uuid }}'
+ security_rule_uuid: "{{ result.security_rule_uuid }}"
register: result
ignore_errors: true
@@ -129,5 +129,5 @@
- result.response is defined
- result.failed == false
- result.response.status == 'SUCCEEDED'
- fail_msg: ' fail: unable to delete isolation security rule '
- success_msg: 'pass : delete isolation security rule successfully'
+ fail_msg: " fail: unable to delete isolation security rule "
+ success_msg: "pass : delete isolation security rule successfully"
diff --git a/tests/integration/targets/ntnx_security_rules_info/tasks/get_security_rules.yml b/tests/integration/targets/ntnx_security_rules_info/tasks/get_security_rules.yml
index d8396b751..a3edcc138 100644
--- a/tests/integration/targets/ntnx_security_rules_info/tasks/get_security_rules.yml
+++ b/tests/integration/targets/ntnx_security_rules_info/tasks/get_security_rules.yml
@@ -22,8 +22,8 @@
- first_rule.failed == false
- first_rule.response.status.state == 'COMPLETE'
- first_rule.response.spec.name=="isolation_test_rule"
- fail_msg: ' fail: Unable to create isolation_rule for testing '
- success_msg: 'pass: isolation_rule for testing created successfully '
+ fail_msg: " fail: Unable to create isolation_rule for testing "
+ success_msg: "pass: isolation_rule for testing created successfully "
###################################
- name: getting all security rules
ntnx_security_rules_info:
@@ -38,12 +38,12 @@
- result.failed == false
- result.response.metadata.kind == "network_security_rule"
- result.response.metadata.total_matches > 0
- fail_msg: ' fail: unable to get security rules '
- success_msg: 'pass: get all security rules successfully '
+ fail_msg: " fail: unable to get security rules "
+ success_msg: "pass: get all security rules successfully "
###################################
- name: getting particular security rule using security_rule_uuid
ntnx_security_rules_info:
- security_rule_uuid: '{{ first_rule.response.metadata.uuid }}'
+ security_rule_uuid: "{{ first_rule.response.metadata.uuid }}"
register: result
ignore_errors: true
@@ -55,8 +55,8 @@
- result.failed == false
- result.response.status.state == 'COMPLETE'
- first_rule.response.metadata.uuid == result.response.metadata.uuid
- fail_msg: ' fail : unable to get particular security rule using security_rule_uuid'
- success_msg: 'pass: getting security rule using security_rule_uuid succesfuly'
+ fail_msg: " fail : unable to get particular security rule using security_rule_uuid"
+ success_msg: "pass: getting security rule using security_rule_uuid successfully"
###################################
- name: getting all security rules sorted
ntnx_security_rules_info:
@@ -74,13 +74,13 @@
- result.response.metadata.kind == "network_security_rule"
- result.response.metadata.sort_order == "ASCENDING"
- result.response.metadata.sort_attribute == "Name"
- fail_msg: ' fail: unable to get all security rules sorted'
- success_msg: 'pass: getting all security rules sorted successfully '
+ fail_msg: " fail: unable to get all security rules sorted"
+ success_msg: "pass: getting all security rules sorted successfully "
###################################
- name: delete security rule
ntnx_security_rules:
state: absent
- security_rule_uuid: '{{ first_rule.response.metadata.uuid }}'
+ security_rule_uuid: "{{ first_rule.response.metadata.uuid }}"
register: result
ignore_errors: true
@@ -90,6 +90,6 @@
- result.response is defined
- result.failed == false
- result.response.status == 'SUCCEEDED'
- fail_msg: ' fail: unable to delete secutiry rule '
- success_msg: 'pass: security rule deleted successfully '
+ fail_msg: " fail: unable to delete security rule "
+ success_msg: "pass: security rule deleted successfully "
###################################
diff --git a/tests/integration/targets/ntnx_service_groups/tasks/create.yml b/tests/integration/targets/ntnx_service_groups/tasks/create.yml
index 47b8759cc..1eb48c81d 100644
--- a/tests/integration/targets/ntnx_service_groups/tasks/create.yml
+++ b/tests/integration/targets/ntnx_service_groups/tasks/create.yml
@@ -4,7 +4,7 @@
- name: create tcp service group
ntnx_service_groups:
- name: tcp_srvive_group
+ name: tcp_service_group
desc: desc
service_details:
tcp:
@@ -15,9 +15,9 @@
register: result
ignore_errors: true
-- name: getting particular service_group using uuid
+- name: getting particular service_group using uuid
ntnx_service_groups_info:
- service_group_uuid: '{{ result.service_group_uuid }}'
+ service_group_uuid: "{{ result.service_group_uuid }}"
register: result
ignore_errors: true
@@ -43,7 +43,7 @@
################################################################
- name: create udp service group
ntnx_service_groups:
- name: udp_srvive_group
+ name: udp_service_group
desc: desc
service_details:
udp:
@@ -54,9 +54,9 @@
register: result
ignore_errors: true
-- name: getting particular service_group using uuid
+- name: getting particular service_group using uuid
ntnx_service_groups_info:
- service_group_uuid: '{{ result.service_group_uuid }}'
+ service_group_uuid: "{{ result.service_group_uuid }}"
register: result
ignore_errors: true
@@ -82,7 +82,7 @@
################################################################
- name: create icmp with service group
ntnx_service_groups:
- name: icmp_srvive_group
+ name: icmp_service_group
desc: desc
service_details:
icmp:
@@ -93,9 +93,9 @@
register: result
ignore_errors: true
-- name: getting particular service_group using uuid
+- name: getting particular service_group using uuid
ntnx_service_groups_info:
- service_group_uuid: '{{ result.service_group_uuid }}'
+ service_group_uuid: "{{ result.service_group_uuid }}"
register: result
ignore_errors: true
@@ -117,7 +117,7 @@
################################################################
- name: create service group with tcp and udp and icmp
ntnx_service_groups:
- name: app_srvive_group
+ name: app_service_group
desc: desc
service_details:
tcp:
@@ -130,9 +130,9 @@
register: result
ignore_errors: true
-- name: getting particular service_group using uuid
+- name: getting particular service_group using uuid
ntnx_service_groups_info:
- service_group_uuid: '{{ result.service_group_uuid }}'
+ service_group_uuid: "{{ result.service_group_uuid }}"
register: result
ignore_errors: true
diff --git a/tests/integration/targets/ntnx_service_groups/tasks/update.yml b/tests/integration/targets/ntnx_service_groups/tasks/update.yml
index 2845caa71..2b2039cab 100644
--- a/tests/integration/targets/ntnx_service_groups/tasks/update.yml
+++ b/tests/integration/targets/ntnx_service_groups/tasks/update.yml
@@ -1,8 +1,7 @@
---
-
- name: create tcp service group
ntnx_service_groups:
- name: tcp_srvive_group
+ name: tcp_service_group
desc: desc
service_details:
tcp:
@@ -42,9 +41,9 @@
register: result
ignore_errors: true
-- name: getting particular service_group using uuid
+- name: getting particular service_group using uuid
ntnx_service_groups_info:
- service_group_uuid: '{{ result.service_group_uuid }}'
+ service_group_uuid: "{{ result.service_group_uuid }}"
register: result
ignore_errors: true
diff --git a/tests/integration/targets/ntnx_static_routes/tasks/create.yml b/tests/integration/targets/ntnx_static_routes/tasks/create.yml
index a677ca55d..581f280b8 100644
--- a/tests/integration/targets/ntnx_static_routes/tasks/create.yml
+++ b/tests/integration/targets/ntnx_static_routes/tasks/create.yml
@@ -39,8 +39,8 @@
- result.response.status.resources.default_route["destination"] == "0.0.0.0/0"
- result.response.status.resources.default_route["nexthop"]["external_subnet_reference"]["name"] == "{{ external_nat_subnet.name }}"
- fail_msg: 'Fail: Unable to update static routes of vpc'
- success_msg: 'Succes: static routes updated successfully'
+ fail_msg: "Fail: Unable to update static routes of vpc"
+ success_msg: "Success: static routes updated successfully"
###########################################################################################################
@@ -97,7 +97,7 @@
###########################################################################################################
-- name: Netgative scenario of creating multiple default routes
+- name: Negative scenario of creating multiple default routes
ntnx_static_routes:
vpc_uuid: "{{ vpc.uuid }}"
static_routes:
diff --git a/tests/integration/targets/ntnx_static_routes_info/tasks/info.yml b/tests/integration/targets/ntnx_static_routes_info/tasks/info.yml
index 4b79f1a08..98e3e79ba 100644
--- a/tests/integration/targets/ntnx_static_routes_info/tasks/info.yml
+++ b/tests/integration/targets/ntnx_static_routes_info/tasks/info.yml
@@ -25,8 +25,8 @@
- result.response is defined
- result.response.status.state == 'COMPLETE'
- result.changed == true
- fail_msg: 'Fail: Unable to update static routes of vpc'
- success_msg: 'Succes: static routes updated successfully'
+ fail_msg: "Fail: Unable to update static routes of vpc"
+ success_msg: "Success: static routes updated successfully"
###########################################################################################################
@@ -35,7 +35,6 @@
vpc_uuid: "{{ vpc.uuid }}"
register: result
-
- set_fact:
d1: "{{ result.response.status.resources.static_routes_list[0].destination }}"
d2: "{{ result.response.status.resources.static_routes_list[1].destination }}"
@@ -54,8 +53,8 @@
- result.response.status.resources.default_route["destination"] == "0.0.0.0/0"
- result.response.status.resources.default_route["nexthop"]["external_subnet_reference"]["name"] == "{{ external_nat_subnet.name }}"
- fail_msg: 'Fail: Unable to get static routes for vpc'
- success_msg: 'Succes'
+ fail_msg: "Fail: Unable to get static routes for vpc"
+ success_msg: "Success"
###########################################################################################################
diff --git a/tests/integration/targets/ntnx_users/tasks/create.yml b/tests/integration/targets/ntnx_users/tasks/create.yml
index b6bdf0c4b..6c705ad19 100644
--- a/tests/integration/targets/ntnx_users/tasks/create.yml
+++ b/tests/integration/targets/ntnx_users/tasks/create.yml
@@ -20,7 +20,7 @@
- result.failed == false
- result.user_uuid == None
- result.response.spec.resources.directory_service_user.directory_service_reference.uuid == "{{directory_service_uuid}}"
- fail_msg: "fail: user created whil check mode on"
+      fail_msg: "fail: user created while check mode is on"
success_msg: "pass: returned as expected"
diff --git a/tests/integration/targets/ntnx_vms_clone/tasks/create.yml b/tests/integration/targets/ntnx_vms_clone/tasks/create.yml
index 9d08f1480..1bc50375d 100644
--- a/tests/integration/targets/ntnx_vms_clone/tasks/create.yml
+++ b/tests/integration/targets/ntnx_vms_clone/tasks/create.yml
@@ -5,25 +5,25 @@
copy:
dest: "init_cloud.yml"
content: |
- #cloud-config
- chpasswd:
- list: |
- root: "{{ password }}"
- expire: False
- fqdn: myNutanixVM
+ #cloud-config
+ chpasswd:
+ list: |
+ root: "{{ password }}"
+ expire: False
+ fqdn: myNutanixVM
- name: VM with minimum requirements to clone
ntnx_vms:
- state: present
- name: integration_test_clone_vm
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "DISK"
- clone_image:
- name: "{{ ubuntu }}"
- bus: "SCSI"
- size_gb: 20
+ state: present
+ name: integration_test_clone_vm
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "DISK"
+ clone_image:
+ name: "{{ ubuntu }}"
+ bus: "SCSI"
+ size_gb: 20
register: vm
ignore_errors: true
@@ -32,19 +32,19 @@
that:
- vm.response is defined
- vm.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to create VM with minimum requirements to clone '
- success_msg: 'Succes: VM with minimum requirements created successfully '
+ fail_msg: "Fail: Unable to create VM with minimum requirements to clone "
+ success_msg: "Success: VM with minimum requirements created successfully "
##############################
- name: clone vm and change vcpus,memory_gb,cores_per_vcpu,timezone,desc,name with force_power_off
ntnx_vms_clone:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- vcpus: 2
- cores_per_vcpu: 2
- memory_gb: 2
- name: cloned vm
- timezone: GMT
- force_power_off: true
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ vcpus: 2
+ cores_per_vcpu: 2
+ memory_gb: 2
+ name: cloned vm
+ timezone: GMT
+ force_power_off: true
register: result
ignore_errors: true
@@ -53,19 +53,19 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to clone vm and change vcpus,memory_gb,cores_per_vcpu,timezone,desc,name with force_power_off'
- success_msg: 'Succes: VM cloned successfully and change vcpus,memory_gb,cores_per_vcpu,timezone,desc,name with force_power_off '
+ fail_msg: "Fail: Unable to clone vm and change vcpus,memory_gb,cores_per_vcpu,timezone,desc,name with force_power_off"
+ success_msg: "Success: VM cloned successfully and change vcpus,memory_gb,cores_per_vcpu,timezone,desc,name with force_power_off "
- set_fact:
- todelete: '{{ todelete + [ result.vm_uuid ] }}'
+ todelete: "{{ todelete + [ result.vm_uuid ] }}"
##############################
- name: clone vm and add network
ntnx_vms_clone:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- networks:
- - is_connected: true
- subnet:
- uuid: "{{ static.uuid }}"
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ networks:
+ - is_connected: true
+ subnet:
+ uuid: "{{ static.uuid }}"
register: result
ignore_errors: true
@@ -74,19 +74,19 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to clone vm while it is off '
- success_msg: 'Succes: VM cloned successfully '
+ fail_msg: "Fail: Unable to clone vm while it is off "
+ success_msg: "Success: VM cloned successfully "
- set_fact:
- todelete: '{{ todelete + [ result.vm_uuid ] }}'
+ todelete: "{{ todelete + [ result.vm_uuid ] }}"
###########################################
- name: clone vm with check mode
ntnx_vms_clone:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- networks:
- - is_connected: false
- subnet:
- name: "{{ network.dhcp.name }}"
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ networks:
+ - is_connected: false
+ subnet:
+ name: "{{ network.dhcp.name }}"
register: result
ignore_errors: true
check_mode: yes
@@ -98,16 +98,16 @@
- result.changed == false
- result.failed == false
- result.task_uuid != ""
- success_msg: ' Success: returned response as expected '
- fail_msg: ' Fail: clone vm with check_mode '
+ success_msg: " Success: returned response as expected "
+ fail_msg: " Fail: clone vm with check_mode "
###########################################
- name: clone vm with script
ntnx_vms_clone:
- src_vm_uuid: "{{ vm.vm_uuid }}"
- guest_customization:
- type: "cloud_init"
- script_path: "./init_cloud.yml"
- is_overridable: True
+ src_vm_uuid: "{{ vm.vm_uuid }}"
+ guest_customization:
+ type: "cloud_init"
+ script_path: "./init_cloud.yml"
+ is_overridable: True
register: result
ignore_errors: true
@@ -116,19 +116,19 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: 'Fail: Unable to clone vm vm with script'
- success_msg: 'Succes: VM cloned with script successfully '
+ fail_msg: "Fail: Unable to clone vm vm with script"
+ success_msg: "Success: VM cloned with script successfully "
- set_fact:
- todelete: '{{ todelete + [ result.vm_uuid ] }}'
+ todelete: "{{ todelete + [ result.vm_uuid ] }}"
###########################################
- name: Delete all Created VMs
ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- loop: '{{ todelete }}'
+ state: absent
+ vm_uuid: "{{ item }}"
+ loop: "{{ todelete }}"
- name: Delete all Created VMs
ntnx_vms:
- state: absent
- vm_uuid: '{{ vm.vm_uuid }}'
+ state: absent
+ vm_uuid: "{{ vm.vm_uuid }}"
diff --git a/tests/integration/targets/nutanix_floating_ips_info/tasks/list_floating_ips.yml b/tests/integration/targets/nutanix_floating_ips_info/tasks/list_floating_ips.yml
index 43570995d..dfe3c20c0 100644
--- a/tests/integration/targets/nutanix_floating_ips_info/tasks/list_floating_ips.yml
+++ b/tests/integration/targets/nutanix_floating_ips_info/tasks/list_floating_ips.yml
@@ -11,7 +11,7 @@
that:
- result.response is defined
fail_msg: " Unable to list floating_ips "
- success_msg: " Floatong_ips listed successfully "
+ success_msg: " Floating_ips listed successfully "
##############################################################
- name: List floating_ips using length and offset
ntnx_floating_ips_info:
@@ -26,7 +26,7 @@
that:
- result.response is defined
fail_msg: " Unable to list floating_ips "
- success_msg: " Floatong_ips listed successfully "
+ success_msg: " Floating_ips listed successfully "
#############################################################
- name: List floating_ips using ascending ip sorting
ntnx_floating_ips_info:
@@ -40,5 +40,5 @@
that:
- result.response is defined
fail_msg: " Unable to list floating_ips "
- success_msg: " Floatong_ips listed successfully "
+ success_msg: " Floating_ips listed successfully "
#############################################################
diff --git a/tests/integration/targets/nutanix_subnets/tasks/negative_scenarios.yml b/tests/integration/targets/nutanix_subnets/tasks/negative_scenarios.yml
index c392c795d..8720e8fb5 100644
--- a/tests/integration/targets/nutanix_subnets/tasks/negative_scenarios.yml
+++ b/tests/integration/targets/nutanix_subnets/tasks/negative_scenarios.yml
@@ -1,84 +1,84 @@
- - debug:
- msg: "Started Negative Creation Cases"
+- debug:
+ msg: "Started Negative Creation Cases"
- - name: Unknown virtual switch name
- ntnx_subnets:
- state: present
- name: VLAN subnet without IPAM
- vlan_subnet:
- vlan_id: "{{ vlan_subnets_ids.0 }}"
- virtual_switch:
- name: "virtual_switch"
- cluster:
- uuid: "{{ cluster.uuid }}"
- register: result
- ignore_errors: True
+- name: Unknown virtual switch name
+ ntnx_subnets:
+ state: present
+ name: VLAN subnet without IPAM
+ vlan_subnet:
+ vlan_id: "{{ vlan_subnets_ids.0 }}"
+ virtual_switch:
+ name: "virtual_switch"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.failed==True
- - result.msg=="Failed generating subnet spec"
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed==True
+ - result.msg=="Failed generating subnet spec"
+ success_msg: " Success: returned error as expected "
###############################################################
- - name: Unknown virtual switch uuid
- ntnx_subnets:
- state: present
- name: VLAN subnet with IPAM
- vlan_subnet:
- vlan_id: "{{ vlan_subnets_ids.1 }}"
- virtual_switch:
- uuid: 91639374-c0b9-48c3-bfc1-f9c89343b3e
- cluster:
- name: "{{ cluster.name }}"
- ipam:
- network_ip: "{{ ip_address_management.network_ip }}"
- network_prefix: "{{ ip_address_management.network_prefix }}"
- gateway_ip: "{{ ip_address_management.gateway_ip_address }}"
- register: result
- ignore_errors: true
+- name: Unknown virtual switch uuid
+ ntnx_subnets:
+ state: present
+ name: VLAN subnet with IPAM
+ vlan_subnet:
+ vlan_id: "{{ vlan_subnets_ids.1 }}"
+ virtual_switch:
+ uuid: 91639374-c0b9-48c3-bfc1-f9c89343b3e
+ cluster:
+ name: "{{ cluster.name }}"
+ ipam:
+ network_ip: "{{ ip_address_management.network_ip }}"
+ network_prefix: "{{ ip_address_management.network_prefix }}"
+ gateway_ip: "{{ ip_address_management.gateway_ip_address }}"
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.failed==True
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed==True
+ success_msg: " Success: returned error as expected "
###############################################################
- - name: Unknown Cluster
- ntnx_subnets:
- state: present
- name: VLAN subnet with IPAM and IP pools
- vlan_subnet:
- vlan_id: "{{vlan_subnets_ids.2}}"
- virtual_switch:
- name: "{{ virtual_switch.name }}"
- cluster:
- name: auto_cluster_prod_1a642ea0a5c
- ipam:
- network_ip: "{{ ip_address_management.network_ip }}"
- network_prefix: "{{ ip_address_management.network_prefix }}"
- gateway_ip: "{{ ip_address_management.gateway_ip_address }}"
- ip_pools:
- - start_ip: "{{ ip_address_pools.start_address }}"
- end_ip: "{{ ip_address_pools.end_address }}"
- register: result
- ignore_errors: true
+- name: Unknown Cluster
+ ntnx_subnets:
+ state: present
+ name: VLAN subnet with IPAM and IP pools
+ vlan_subnet:
+ vlan_id: "{{vlan_subnets_ids.2}}"
+ virtual_switch:
+ name: "{{ virtual_switch.name }}"
+ cluster:
+ name: auto_cluster_prod_1a642ea0a5c
+ ipam:
+ network_ip: "{{ ip_address_management.network_ip }}"
+ network_prefix: "{{ ip_address_management.network_prefix }}"
+ gateway_ip: "{{ ip_address_management.gateway_ip_address }}"
+ ip_pools:
+ - start_ip: "{{ ip_address_pools.start_address }}"
+ end_ip: "{{ ip_address_pools.end_address }}"
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.failed==True
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed==True
+ success_msg: " Success: returned error as expected "
###############################################################
- - name: Delete subnet with unknown uuid
- ntnx_subnets:
- state: absent
- subnet_uuid: 5
- register: resultt
- ignore_errors: true
+- name: Delete subnet with unknown uuid
+ ntnx_subnets:
+ state: absent
+ subnet_uuid: 5
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.failed==True
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed==True
+ success_msg: " Success: returned error as expected "
diff --git a/tests/integration/targets/nutanix_vms/tasks/create.yml b/tests/integration/targets/nutanix_vms/tasks/create.yml
index d2f99f460..b928c3dd6 100644
--- a/tests/integration/targets/nutanix_vms/tasks/create.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/create.yml
@@ -1,601 +1,601 @@
- - name: Create Cloud-init Script file
- copy:
- dest: "cloud_init.yml"
- content: |
- #cloud-config
- chpasswd:
- list: |
- root: "{{ password }}"
- expire: False
- fqdn: myNutanixVM
+- name: Create Cloud-init Script file
+ copy:
+ dest: "cloud_init.yml"
+ content: |
+ #cloud-config
+ chpasswd:
+ list: |
+ root: "{{ password }}"
+ expire: False
+ fqdn: myNutanixVM
##########################################################################
- - name: VM with none values
- ntnx_vms:
- state: present
- name: none
- timezone: GMT
- project:
- uuid: "{{ project.uuid }}"
- cluster:
- name: "{{ cluster.name }}"
- categories:
- AppType:
- - Apache_Spark
- disks:
- - type: DISK
- size_gb: 5
- bus: SCSI
- vcpus:
- cores_per_vcpu:
- memory_gb:
- register: result
- ignore_errors: true
+- name: VM with none values
+ ntnx_vms:
+ state: present
+ name: none
+ timezone: GMT
+ project:
+ uuid: "{{ project.uuid }}"
+ cluster:
+ name: "{{ cluster.name }}"
+ categories:
+ AppType:
+ - Apache_Spark
+ disks:
+ - type: DISK
+ size_gb: 5
+ bus: SCSI
+ vcpus:
+ cores_per_vcpu:
+ memory_gb:
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to Create VM with none values '
- success_msg: 'VM with none values created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to Create VM with none values "
+ success_msg: "VM with none values created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
# ##################################################################################
- - name: VM with owner name
- ntnx_vms:
- state: present
- name: none
- timezone: GMT
- project:
- uuid: "{{ project.uuid }}"
- cluster:
- name: "{{ cluster.name }}"
- categories:
- AppType:
- - Apache_Spark
- owner:
- name: "{{ vm_owner.name }}"
- disks:
- - type: DISK
- size_gb: 5
- bus: SCSI
- register: result
- ignore_errors: true
+- name: VM with owner name
+ ntnx_vms:
+ state: present
+ name: none
+ timezone: GMT
+ project:
+ uuid: "{{ project.uuid }}"
+ cluster:
+ name: "{{ cluster.name }}"
+ categories:
+ AppType:
+ - Apache_Spark
+ owner:
+ name: "{{ vm_owner.name }}"
+ disks:
+ - type: DISK
+ size_gb: 5
+ bus: SCSI
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- - result.response.metadata.owner_reference.name == "{{ vm_owner.name }}"
- - result.response.metadata.owner_reference.uuid == "{{ vm_owner.uuid }}"
- - result.response.metadata.owner_reference.kind == "user"
- fail_msg: 'Unable to Create VM with owner'
- success_msg: 'VM with owner created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ - result.response.metadata.owner_reference.name == "{{ vm_owner.name }}"
+ - result.response.metadata.owner_reference.uuid == "{{ vm_owner.uuid }}"
+ - result.response.metadata.owner_reference.kind == "user"
+ fail_msg: "Unable to Create VM with owner"
+ success_msg: "VM with owner created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
##################################################################################
- - name: VM with ubuntu image and different specifications
- ntnx_vms:
- state: present
- project:
- name: "{{ project.name }}"
- name: "VM with Ubuntu image"
- desc: "VM with cluster, network, category, disk with Ubuntu image, guest customization "
- categories:
- AppType:
- - Default
- Environment:
- - Dev
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: True
- subnet:
- name: "{{ network.dhcp.name }}"
- disks:
- - type: "DISK"
- size_gb: 30
- bus: "SATA"
- clone_image:
- name: "{{ ubuntu }}"
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- guest_customization:
- type: "cloud_init"
- script_path: "./cloud_init.yml"
- is_overridable: True
- register: result
+- name: VM with ubuntu image and different specifications
+ ntnx_vms:
+ state: present
+ project:
+ name: "{{ project.name }}"
+ name: "VM with Ubuntu image"
+ desc: "VM with cluster, network, category, disk with Ubuntu image, guest customization "
+ categories:
+ AppType:
+ - Default
+ Environment:
+ - Dev
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: True
+ subnet:
+ name: "{{ network.dhcp.name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 30
+ bus: "SATA"
+ clone_image:
+ name: "{{ ubuntu }}"
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ guest_customization:
+ type: "cloud_init"
+ script_path: "./cloud_init.yml"
+ is_overridable: True
+ register: result
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- - result.response.metadata.categories_mapping["AppType"] == ["Default"]
- - result.response.metadata.categories_mapping["Environment"] == ["Dev"]
- fail_msg: 'Unable to Create VM with Ubuntu image and different specifications '
- success_msg: 'VM with Ubuntu image and different specifications created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ - result.response.metadata.categories_mapping["AppType"] == ["Default"]
+ - result.response.metadata.categories_mapping["Environment"] == ["Dev"]
+ fail_msg: "Unable to Create VM with Ubuntu image and different specifications "
+ success_msg: "VM with Ubuntu image and different specifications created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
#########################################################################################
- - name: VM with CentOS-7-cloud-init image with disk image size
- ntnx_vms:
- state: present
- name: VM with CentOS-7-cloud-init image
- memory_gb: 1
- timezone: "UTC"
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: "DISK"
- size_gb: 10
- clone_image:
- name: "{{ centos }}"
- bus: "SCSI"
- guest_customization:
- type: "cloud_init"
- script_path: "./cloud_init.yml"
- is_overridable: True
- register: result
- ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to create VM with CentOS-7-cloud-init image'
- success_msg: 'VM with CentOS-7-cloud-init image created successfully '
+- name: VM with CentOS-7-cloud-init image with disk image size
+ ntnx_vms:
+ state: present
+ name: VM with CentOS-7-cloud-init image
+ memory_gb: 1
+ timezone: "UTC"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ clone_image:
+ name: "{{ centos }}"
+ bus: "SCSI"
+ guest_customization:
+ type: "cloud_init"
+ script_path: "./cloud_init.yml"
+ is_overridable: True
+ register: result
+ ignore_errors: True
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to create VM with CentOS-7-cloud-init image"
+ success_msg: "VM with CentOS-7-cloud-init image created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
#################################################################################
- - name: VM with CentOS-7-cloud-init image without disk image size
- ntnx_vms:
- state: present
- memory_gb: 1
- name: VM with CentOS-7-cloud-init image without image size
- timezone: "UTC"
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: "DISK"
- clone_image:
- name: "{{ centos }}"
- bus: "SCSI"
- guest_customization:
- type: "cloud_init"
- script_path: "./cloud_init.yml"
- is_overridable: True
- register: result
- ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to create VM with CentOS-7-cloud-init image'
- success_msg: 'VM with CentOS-7-cloud-init image created successfully '
+- name: VM with CentOS-7-cloud-init image without disk image size
+ ntnx_vms:
+ state: present
+ memory_gb: 1
+ name: VM with CentOS-7-cloud-init image without image size
+ timezone: "UTC"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: "DISK"
+ clone_image:
+ name: "{{ centos }}"
+ bus: "SCSI"
+ guest_customization:
+ type: "cloud_init"
+ script_path: "./cloud_init.yml"
+ is_overridable: True
+ register: result
+ ignore_errors: True
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to create VM with CentOS-7-cloud-init image"
+ success_msg: "VM with CentOS-7-cloud-init image created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
- - name: Delete all Created VMs
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
- - set_fact:
- todelete: []
+- name: Delete all Created VMs
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
+- set_fact:
+ todelete: []
#################################################################################
- - name: VM with Cluster, Network, Universal time zone, one Disk
- ntnx_vms:
- state: present
- name: "VM with Cluster Network and Disk"
- memory_gb: 1
- timezone: "Universal"
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: False
- subnet:
- uuid: "{{ network.dhcp.uuid }}"
- disks:
- - type: "DISK"
- size_gb: 10
- bus: "PCI"
- register: result
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to create VM with Cluster , Network, Universal time zone, one Disk'
- success_msg: 'VM with Cluster , Network, Universal time zone, one Disk created successfully '
+- name: VM with Cluster, Network, Universal time zone, one Disk
+ ntnx_vms:
+ state: present
+ name: "VM with Cluster Network and Disk"
+ memory_gb: 1
+ timezone: "Universal"
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: False
+ subnet:
+ uuid: "{{ network.dhcp.uuid }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ bus: "PCI"
+ register: result
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to create VM with Cluster , Network, Universal time zone, one Disk"
+ success_msg: "VM with Cluster , Network, Universal time zone, one Disk created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
########################################################################################
- - name: VM with Cluster, different Disks, Memory size
- ntnx_vms:
- state: present
- name: "VM with different disks"
- timezone: "UTC"
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: "DISK"
- size_gb: 10
- bus: "SATA"
- - type: "DISK"
- size_gb: 30
- bus: "SCSI"
- memory_gb: 2
- register: result
- ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to create VM with Cluster, different Disks, Memory size'
- success_msg: 'VM with Cluster, different Disks, Memory size created successfully '
+- name: VM with Cluster, different Disks, Memory size
+ ntnx_vms:
+ state: present
+ name: "VM with different disks"
+ timezone: "UTC"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ bus: "SATA"
+ - type: "DISK"
+ size_gb: 30
+ bus: "SCSI"
+ memory_gb: 2
+ register: result
+ ignore_errors: True
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to create VM with Cluster, different Disks, Memory size"
+ success_msg: "VM with Cluster, different Disks, Memory size created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
#####################################################################################
- - name: VM with Cluster, different CDROMs
- ntnx_vms:
- state: present
- memory_gb: 1
- wait: true
- name: "VM with multiple CDROMs"
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "CDROM"
- bus: "SATA"
- empty_cdrom: True
- - type: "CDROM"
- bus: "IDE"
- empty_cdrom: True
- cores_per_vcpu: 1
- register: result
- ignore_errors: True
+- name: VM with Cluster, different CDROMs
+ ntnx_vms:
+ state: present
+ memory_gb: 1
+ wait: true
+ name: "VM with multiple CDROMs"
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "CDROM"
+ bus: "SATA"
+ empty_cdrom: True
+ - type: "CDROM"
+ bus: "IDE"
+ empty_cdrom: True
+ cores_per_vcpu: 1
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: 'Unable to Create VM with Cluster, different CDROMs '
- success_msg: 'VM with Cluster, different CDROMs created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: "Unable to Create VM with Cluster, different CDROMs "
+ success_msg: "VM with Cluster, different CDROMs created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- - name: Delete all Created VMs
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
- - set_fact:
- todelete: []
+- name: Delete all Created VMs
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
+- set_fact:
+ todelete: []
####################################################################################
- - name: VM with all specification
- ntnx_vms:
- state: present
- wait: True
- name: "All specification"
- timezone: "GMT"
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: "DISK"
- size_gb: 2
- bus: "SCSI"
- - type: "DISK"
- size_gb: 10
- bus: "PCI"
- - type: "DISK"
- size_gb: 2
- bus: "SATA"
- - type: "DISK"
- size_gb: 10
- bus: "SCSI"
- - type: "CDROM"
- bus: "IDE"
- empty_cdrom: True
- boot_config:
- boot_type: "UEFI"
- boot_order:
- - "DISK"
- - "CDROM"
- - "NETWORK"
- vcpus: 1
- cores_per_vcpu: 2
- memory_gb: 1
- register: result
- ignore_errors: True
+- name: VM with all specification
+ ntnx_vms:
+ state: present
+ wait: True
+ name: "All specification"
+ timezone: "GMT"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: "DISK"
+ size_gb: 2
+ bus: "SCSI"
+ - type: "DISK"
+ size_gb: 10
+ bus: "PCI"
+ - type: "DISK"
+ size_gb: 2
+ bus: "SATA"
+ - type: "DISK"
+ size_gb: 10
+ bus: "SCSI"
+ - type: "CDROM"
+ bus: "IDE"
+ empty_cdrom: True
+ boot_config:
+ boot_type: "UEFI"
+ boot_order:
+ - "DISK"
+ - "CDROM"
+ - "NETWORK"
+ vcpus: 1
+ cores_per_vcpu: 2
+ memory_gb: 1
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with all specification '
- success_msg: ' VM with all specification created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with all specification "
+ success_msg: " VM with all specification created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
##################################################################################################
- - name: VM with managed subnet
- ntnx_vms:
- state: present
- name: VM with managed subnet
- memory_gb: 1
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: true
- subnet:
- uuid: "{{ network.dhcp.uuid }}"
- register: result
- ignore_errors: true
+- name: VM with managed subnet
+ ntnx_vms:
+ state: present
+ name: VM with managed subnet
+ memory_gb: 1
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: true
+ subnet:
+ uuid: "{{ network.dhcp.uuid }}"
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with managed subnet '
- success_msg: ' VM with with managed subnet created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with managed subnet "
+ success_msg: " VM with with managed subnet created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
###################################################################################################
- - name: VM with minimum requirements
- ntnx_vms:
- state: present
- name: MinReqVM
- cluster:
- name: "{{ cluster.name }}"
- register: result
- ignore_errors: true
+- name: VM with minimum requirements
+ ntnx_vms:
+ state: present
+ name: MinReqVM
+ cluster:
+ name: "{{ cluster.name }}"
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with minimum requirements '
- success_msg: ' VM with minimum requirements created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with minimum requirements "
+ success_msg: " VM with minimum requirements created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
- - name: Delete all Created VMs
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
- - set_fact:
- todelete: []
+- name: Delete all Created VMs
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
+- set_fact:
+ todelete: []
##################################################################################################
- - name: VM with unmanaged vlan
- ntnx_vms:
- desc: "VM with unmanaged vlan"
- state: present
- name: VM with unmanaged vlan
- timezone: UTC
- cluster:
- uuid: "{{ cluster.uuid }}"
- networks:
- - is_connected: false
- subnet:
- uuid: "{{ static.uuid }}"
- private_ip: "{{ network.static.ip }}"
- boot_config:
- boot_type: LEGACY
- boot_order:
- - DISK
- - CDROM
- - NETWORK
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- register: result
- ignore_errors: true
+- name: VM with unmanaged vlan
+ ntnx_vms:
+ desc: "VM with unmanaged vlan"
+ state: present
+ name: VM with unmanaged vlan
+ timezone: UTC
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ networks:
+ - is_connected: false
+ subnet:
+ uuid: "{{ static.uuid }}"
+ private_ip: "{{ network.static.ip }}"
+ boot_config:
+ boot_type: LEGACY
+ boot_order:
+ - DISK
+ - CDROM
+ - NETWORK
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with unmanaged vlan '
- success_msg: ' VM with unmanaged vlan created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with unmanaged vlan "
+ success_msg: " VM with unmanaged vlan created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
- - name: Delete all Created VM
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
- - set_fact:
- todelete: []
+- name: Delete all Created VM
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
+- set_fact:
+ todelete: []
######################################################################################
- - name: VM with managed and unmanaged network
- ntnx_vms:
- state: present
- name: VM_NIC
- timezone: UTC
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: true
- subnet:
- name: "{{ network.dhcp.name }}"
- cluster:
- name: "{{ cluster.name }}"
- - is_connected: true
- subnet:
- uuid: "{{ static.uuid }}"
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: DISK
- size_gb: 1
- bus: SCSI
- - type: DISK
- size_gb: 3
- bus: PCI
- - type: CDROM
- bus: SATA
- empty_cdrom: True
- - type: CDROM
- bus: IDE
- empty_cdrom: True
- boot_config:
- boot_type: UEFI
- boot_order:
- - DISK
- - CDROM
- - NETWORK
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- register: result
- ignore_errors: true
+- name: VM with managed and unmanaged network
+ ntnx_vms:
+ state: present
+ name: VM_NIC
+ timezone: UTC
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: true
+ subnet:
+ name: "{{ network.dhcp.name }}"
+ cluster:
+ name: "{{ cluster.name }}"
+ - is_connected: true
+ subnet:
+ uuid: "{{ static.uuid }}"
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: DISK
+ size_gb: 1
+ bus: SCSI
+ - type: DISK
+ size_gb: 3
+ bus: PCI
+ - type: CDROM
+ bus: SATA
+ empty_cdrom: True
+ - type: CDROM
+ bus: IDE
+ empty_cdrom: True
+ boot_config:
+ boot_type: UEFI
+ boot_order:
+ - DISK
+ - CDROM
+ - NETWORK
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ register: result
+ ignore_errors: true
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with managed and unmanaged network '
- success_msg: ' VM with managed and unmanaged network created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with managed and unmanaged network "
+ success_msg: " VM with managed and unmanaged network created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
#########################################################################################
- - name: VM with different disk types and different sizes with UEFI boot type
- ntnx_vms:
- state: present
- name: VM with UEFI boot type
- timezone: GMT
- cluster:
- name: "{{ cluster.name }}"
- categories:
- AppType:
- - Apache_Spark
- disks:
- - type: "DISK"
- clone_image:
- name: "{{ ubuntu }}"
- bus: "SCSI"
- size_gb: 20
- - type: DISK
- size_gb: 1
- bus: SCSI
- storage_container:
- name: "{{ storage_container.name }}"
- - type: DISK
- size_gb: 2
- bus: PCI
- storage_container:
- name: "{{ storage_container.name }}"
- - type: DISK
- size_gb: 3
- bus: SATA
- boot_config:
- boot_type: UEFI
- boot_order:
- - DISK
- - CDROM
- - NETWORK
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- register: result
+- name: VM with different disk types and different sizes with UEFI boot type
+ ntnx_vms:
+ state: present
+ name: VM with UEFI boot type
+ timezone: GMT
+ cluster:
+ name: "{{ cluster.name }}"
+ categories:
+ AppType:
+ - Apache_Spark
+ disks:
+ - type: "DISK"
+ clone_image:
+ name: "{{ ubuntu }}"
+ bus: "SCSI"
+ size_gb: 20
+ - type: DISK
+ size_gb: 1
+ bus: SCSI
+ storage_container:
+ name: "{{ storage_container.name }}"
+ - type: DISK
+ size_gb: 2
+ bus: PCI
+ storage_container:
+ name: "{{ storage_container.name }}"
+ - type: DISK
+ size_gb: 3
+ bus: SATA
+ boot_config:
+ boot_type: UEFI
+ boot_order:
+ - DISK
+ - CDROM
+ - NETWORK
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ register: result
################################################################################
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with different disk types and different sizes with UEFI boot type '
- success_msg: ' VM with different disk types and different sizes with UEFI boot type created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with different disk types and different sizes with UEFI boot type "
+ success_msg: " VM with different disk types and different sizes with UEFI boot type created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
- - name: Delete all Created VM
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
+- name: Delete all Created VM
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
- - set_fact:
- todelete: []
+- set_fact:
+ todelete: []
####################################################################################
- - name: VM with storage container
- ntnx_vms:
- state: present
- name: VM with UEFI boot type
- timezone: GMT
- cluster:
- name: "{{ cluster.name }}"
- categories:
- AppType:
- - Apache_Spark
- disks:
- - type: DISK
- size_gb: 1
- bus: SCSI
- storage_container:
- uuid: "{{ storage_container.uuid }}"
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- register: result
+- name: VM with storage container
+ ntnx_vms:
+ state: present
+ name: VM with storage container
+ timezone: GMT
+ cluster:
+ name: "{{ cluster.name }}"
+ categories:
+ AppType:
+ - Apache_Spark
+ disks:
+ - type: DISK
+ size_gb: 1
+ bus: SCSI
+ storage_container:
+ uuid: "{{ storage_container.uuid }}"
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ register: result
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM withstorage container '
- success_msg: ' VM with storage container created successfully '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with storage container "
+ success_msg: " VM with storage container created successfully "
- - set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- when: result.response.status.state == 'COMPLETE'
+- set_fact:
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ when: result.response.status.state == 'COMPLETE'
####################################################################################
- - name: Delete all Created VMs
- ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- register: result
- loop: '{{ todelete }}'
+- name: Delete all Created VMs
+ ntnx_vms:
+ state: absent
+ vm_uuid: "{{ item }}"
+ register: result
+ loop: "{{ todelete }}"
diff --git a/tests/integration/targets/nutanix_vms/tasks/delete.yml b/tests/integration/targets/nutanix_vms/tasks/delete.yml
index c3faaf636..e78ab6416 100644
--- a/tests/integration/targets/nutanix_vms/tasks/delete.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/delete.yml
@@ -1,20 +1,20 @@
---
- name: VM with minimum requirements
ntnx_vms:
- state: present
- name: MinReqVM
- cluster:
- name: "{{ cluster.name }}"
+ state: present
+ name: MinReqVM
+ cluster:
+ name: "{{ cluster.name }}"
register: result
ignore_errors: true
- name: Creation Status
assert:
- that:
- - result.response is defined
- - result.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with minimum requirements '
- success_msg: ' VM with minimum requirements created successfully '
+ that:
+ - result.response is defined
+ - result.response.status.state == 'COMPLETE'
+ fail_msg: " Unable to create VM with minimum requirements "
+ success_msg: " VM with minimum requirements created successfully "
- name: Delete VM
ntnx_vms:
diff --git a/tests/integration/targets/nutanix_vms/tasks/main.yml b/tests/integration/targets/nutanix_vms/tasks/main.yml
index 1a9593038..34b67b801 100644
--- a/tests/integration/targets/nutanix_vms/tasks/main.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/main.yml
@@ -1,14 +1,14 @@
---
- module_defaults:
- group/nutanix.ncp.ntnx:
- nutanix_host: "{{ ip }}"
- nutanix_username: "{{ username }}"
- nutanix_password: "{{ password }}"
- validate_certs: "{{ validate_certs }}"
+ group/nutanix.ncp.ntnx:
+ nutanix_host: "{{ ip }}"
+ nutanix_username: "{{ username }}"
+ nutanix_password: "{{ password }}"
+ validate_certs: "{{ validate_certs }}"
block:
- - import_tasks: "create.yml"
- - import_tasks: "negtaive_scenarios.yml"
- - import_tasks: "delete.yml"
- - import_tasks: "vm_operations.yml"
- - import_tasks: "vm_update.yml"
- - import_tasks: "negtaive_vm_update.yml"
+ - import_tasks: "create.yml"
+ - import_tasks: "negative_scenarios.yml"
+ - import_tasks: "delete.yml"
+ - import_tasks: "vm_operations.yml"
+ - import_tasks: "vm_update.yml"
+ - import_tasks: "negative_vm_update.yml"
diff --git a/tests/integration/targets/nutanix_vms/tasks/negtaive_scenarios.yml b/tests/integration/targets/nutanix_vms/tasks/negative_scenarios.yml
rename from tests/integration/targets/nutanix_vms/tasks/negtaive_scenarios.yml
rename to tests/integration/targets/nutanix_vms/tasks/negative_scenarios.yml
index 0488155ff..66b6a9c8f 100644
--- a/tests/integration/targets/nutanix_vms/tasks/negtaive_scenarios.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/negative_scenarios.yml
@@ -1,309 +1,307 @@
- - debug:
- msg: "Started Negative Creation Cases"
+- debug:
+ msg: "Started Negative Creation Cases"
- - name: Unknown project name
- ntnx_vms:
- state: present
- name: Unknown project name
- timezone: "UTC"
- project:
- name: project
- cluster:
- uuid: "{{ cluster.uuid }}"
- disks:
- - type: "DISK"
- size_gb: 10
- clone_image:
- name: "{{ centos }}"
- bus: "SCSI"
- register: result
- ignore_errors: True
+- name: Unknown project name
+ ntnx_vms:
+ state: present
+ name: Unknown project name
+ timezone: "UTC"
+ project:
+ name: project
+ cluster:
+ uuid: "{{ cluster.uuid }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ clone_image:
+ name: "{{ centos }}"
+ bus: "SCSI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.msg == "Failed generating VM Spec"
- - result.failed == True
- - result.failed is defined
- - result.error == "Project project not found."
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.msg == "Failed generating VM Spec"
+ - result.failed == True
+ - result.failed is defined
+ - result.error == "Project project not found."
+ success_msg: " Success: returned error as expected "
#############################################################
- - name: Check if error is produced when disk size is not given for storage container
- check_mode: yes
- ntnx_vms:
- state: present
- name: VM with storage container
- timezone: GMT
- cluster:
- name: "{{ cluster.name }}"
- categories:
- AppType:
- - Apache_Spark
- disks:
- - type: DISK
- bus: SCSI
- storage_container:
- name: "{{ storage_container.name }}"
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- register: result
- ignore_errors: True
+- name: Check if error is produced when disk size is not given for storage container
+ check_mode: yes
+ ntnx_vms:
+ state: present
+ name: VM with storage container
+ timezone: GMT
+ cluster:
+ name: "{{ cluster.name }}"
+ categories:
+ AppType:
+ - Apache_Spark
+ disks:
+ - type: DISK
+ bus: SCSI
+ storage_container:
+ name: "{{ storage_container.name }}"
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.msg == "Unsupported operation: Unable to create disk, 'size_gb' is required for using storage container."
- - result.failed == True
- - result.failed is defined
- success_msg: ' Success: returned error as expected '
+- name: Creation Status
+ assert:
+ that:
+ - result.msg == "Unsupported operation: Unable to create disk, 'size_gb' is required for using storage container."
+ - result.failed == True
+ - result.failed is defined
+ success_msg: " Success: returned error as expected "
##################################################################################
- - name: Unknown Cluster
- ntnx_vms:
- state: present
- name: Unknown Cluster
- timezone: "UTC"
- cluster:
- uuid: "auto_cluster_1aa888141361"
- disks:
- - type: "DISK"
- size_gb: 10
- clone_image:
- name: "{{ centos }}"
- bus: "SCSI"
- register: result
- ignore_errors: True
+- name: Unknown Cluster
+ ntnx_vms:
+ state: present
+ name: Unknown Cluster
+ timezone: "UTC"
+ cluster:
+ uuid: "auto_cluster_1aa888141361"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ clone_image:
+ name: "{{ centos }}"
+ bus: "SCSI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- - result.response.state == 'ERROR'
- - result.status_code == 422
- - result.error == "HTTP Error 422: UNPROCESSABLE ENTITY"
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail Vm created successfully with unknown cluster '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ - result.response.state == 'ERROR'
+ - result.status_code == 422
+ - result.error == "HTTP Error 422: UNPROCESSABLE ENTITY"
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with unknown cluster "
################################################################################
- - name: Unknown Cluster name
- ntnx_vms:
- state: present
- name: Unknown Cluster
- timezone: "UTC"
- cluster:
- name: "auto_cluster"
- disks:
- - type: "DISK"
- size_gb: 10
- clone_image:
- name: "{{ centos }}"
- bus: "SCSI"
- register: result
- ignore_errors: True
+- name: Unknown Cluster name
+ ntnx_vms:
+ state: present
+ name: Unknown Cluster
+ timezone: "UTC"
+ cluster:
+ name: "auto_cluster"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ clone_image:
+ name: "{{ centos }}"
+ bus: "SCSI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.msg == "Failed generating VM Spec"
- - result.failed == True
- - result.response is defined
- - result.error == "Cluster auto_cluster not found."
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail Vm created successfully with unknown cluster '
+- name: Creation Status
+ assert:
+ that:
+ - result.msg == "Failed generating VM Spec"
+ - result.failed == True
+ - result.response is defined
+ - result.error == "Cluster auto_cluster not found."
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with unknown cluster name "
###################################################################################
- - name: Unknown Network name
- ntnx_vms:
- state: present
- name: Unknown Network
- desc: "Unknown network"
- categories:
- AppType:
- - "Apache_Spark"
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: True
- subnet:
- name: "vlan.8000"
- register: result
- ignore_errors: True
+- name: Unknown Network name
+ ntnx_vms:
+ state: present
+ name: Unknown Network
+ desc: "Unknown network"
+ categories:
+ AppType:
+ - "Apache_Spark"
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: True
+ subnet:
+ name: "vlan.8000"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- - result.msg == "Failed generating VM Spec"
- - result.error == "Subnet vlan.8000 not found."
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail VM created successfully with unknown network name '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ - result.msg == "Failed generating VM Spec"
+ - result.error == "Subnet vlan.8000 not found."
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with unknown network name "
###################################################################################
- - name: Unknown Network uuid
- ntnx_vms:
- state: present
- name: Unknown Network
- desc: "Unknown network"
- categories:
- AppType:
- - "Apache_Spark"
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: True
- subnet:
- uuid: "8000"
- register: result
- ignore_errors: True
+- name: Unknown Network uuid
+ ntnx_vms:
+ state: present
+ name: Unknown Network
+ desc: "Unknown network"
+ categories:
+ AppType:
+ - "Apache_Spark"
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: True
+ subnet:
+ uuid: "8000"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- - result.error == "HTTP Error 422: UNPROCESSABLE ENTITY"
- - result.response.state == 'ERROR'
- - result.status_code == 422
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail VM created successfully with unknown network name '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ - result.error == "HTTP Error 422: UNPROCESSABLE ENTITY"
+ - result.response.state == 'ERROR'
+ - result.status_code == 422
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with unknown network uuid "
###################################################################################
- - name: Unknown Image name
- ntnx_vms:
- state: present
- name: unknown image_vm
- timezone: "UTC"
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "DISK"
- size_gb: 10
- clone_image:
- name: "centos-7-cloudinit"
- bus: "SCSI"
- register: result
- ignore_errors: True
+- name: Unknown Image name
+ ntnx_vms:
+ state: present
+ name: unknown image_vm
+ timezone: "UTC"
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ clone_image:
+ name: "centos-7-cloudinit"
+ bus: "SCSI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- - result.response.state == 'ERROR'
- - result.status_code == 422
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail VM created successfully with not existed image '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ - result.response.state == 'ERROR'
+ - result.status_code == 422
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with nonexistent image "
########################################################################################
- - name: Wrong disk size value
- ntnx_vms:
- state: present
- name: "Wrong disk size value"
- timezone: "UTC"
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: True
- subnet:
- name: "{{ network.dhcp.name }}"
- disks:
- - type: "DISK"
- size_gb: 10g
- bus: "PCI"
- register: result
- ignore_errors: True
+- name: Wrong disk size value
+ ntnx_vms:
+ state: present
+ name: "Wrong disk size value"
+ timezone: "UTC"
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: True
+ subnet:
+ name: "{{ network.dhcp.name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10g
+ bus: "PCI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail VM created successfully with invalid argument for size_gb '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed == True
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with invalid argument for size_gb "
#############################################################################################
- - name: Image size less than actual
- ntnx_vms:
- state: present
- name: "image size less than actual"
- categories:
- AppType:
- - "Apache_Spark"
- cluster:
- name: "{{ cluster.name }}"
- networks:
- - is_connected: True
- subnet:
- name: "{{ network.dhcp.name }}"
- disks:
- - type: "DISK"
- size_gb: 2 #must be 20
- bus: "SATA"
- clone_image:
- name: "{{ centos }}"
- vcpus: 1
- cores_per_vcpu: 1
- memory_gb: 1
- guest_customization:
- type: "cloud_init"
- script_path: "cloud_init.yml"
- is_overridable: True
- register: result
- ignore_errors: True
+- name: Image size less than actual
+ ntnx_vms:
+ state: present
+ name: "image size less than actual"
+ categories:
+ AppType:
+ - "Apache_Spark"
+ cluster:
+ name: "{{ cluster.name }}"
+ networks:
+ - is_connected: True
+ subnet:
+ name: "{{ network.dhcp.name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 2 # must be 20
+ bus: "SATA"
+ clone_image:
+ name: "{{ centos }}"
+ vcpus: 1
+ cores_per_vcpu: 1
+ memory_gb: 1
+ guest_customization:
+ type: "cloud_init"
+ script_path: "cloud_init.yml"
+ is_overridable: True
+ register: result
+ ignore_errors: True
-
-
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail: VM created successfully with image size is less than actual '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with image size less than actual "
#################################################################################
- - name: Unknown storage container name
- ntnx_vms:
- state: present
- name: unknown storage container
- timezone: "UTC"
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "DISK"
- size_gb: 10
- storage_container:
- name: "storage"
- bus: "SCSI"
- register: result
- ignore_errors: True
+- name: Unknown storage container name
+ ntnx_vms:
+ state: present
+ name: unknown storage container
+ timezone: "UTC"
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "DISK"
+ size_gb: 10
+ storage_container:
+ name: "storage"
+ bus: "SCSI"
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail VM created successfully with unknown storage container name '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM created successfully with unknown storage container name "
#################################################################################
- - name: Delete vm with unknown uuid
- ntnx_vms:
- state: absent
- vm_uuid: 5
- register: result
- ignore_errors: True
+- name: Delete vm with unknown uuid
+ ntnx_vms:
+ state: absent
+ vm_uuid: 5
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.response is defined
- - result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail deleting VM with unknown uuid '
+- name: Creation Status
+ assert:
+ that:
+ - result.response is defined
+ - result.failed == True
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM deleted successfully with unknown uuid "
#################################################################################
- - name: Delete vm with missing uuid
- ntnx_vms:
- state: absent
- register: result
- ignore_errors: True
+- name: Delete vm with missing uuid
+ ntnx_vms:
+ state: absent
+ register: result
+ ignore_errors: True
- - name: Creation Status
- assert:
- that:
- - result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail deleting VM with missing uuid '
+- name: Creation Status
+ assert:
+ that:
+ - result.failed == True
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: VM deleted successfully with missing uuid "
diff --git a/tests/integration/targets/nutanix_vms/tasks/negtaive_vm_update.yml b/tests/integration/targets/nutanix_vms/tasks/negative_vm_update.yml
rename from tests/integration/targets/nutanix_vms/tasks/negtaive_vm_update.yml
rename to tests/integration/targets/nutanix_vms/tasks/negative_vm_update.yml
index 49adf7614..18afc583c 100644
--- a/tests/integration/targets/nutanix_vms/tasks/negtaive_vm_update.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/negative_vm_update.yml
@@ -43,8 +43,8 @@
- vm.response.status.state == 'COMPLETE'
- vm.vm_uuid
- vm.task_uuid
- fail_msg: ' Unable to create VM with minimum requirements '
- success_msg: ' VM with minimum requirements created successfully '
+ fail_msg: " Unable to create VM with minimum requirements "
+ success_msg: " VM with minimum requirements created successfully "
- name: update vm without change any value
ntnx_vms:
@@ -55,15 +55,14 @@
register: result
ignore_errors: true
-
- name: Update Status
assert:
that:
- - result.failed == false
- - result.changed == false
- - result.msg == 'Nothing to change'
- fail_msg: 'Fail : VM updated successfully with same current values '
- success_msg: ' Success: returned error as expected '
+ - result.failed == false
+ - result.changed == false
+ - result.msg == 'Nothing to change'
+ fail_msg: "Fail: VM updated successfully with same current values "
+ success_msg: " Success: returned error as expected "
###############################################################
- debug:
msg: Start negative update scenarios tests for memory vcpus cores_per_vcpu
@@ -78,11 +77,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : decrease the value for vcpus while while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: decreased the value for vcpus while VM is on "
+ success_msg: " Success: returned error as expected "
- name: decrease values for memory_gb without force_power_off and vm is on
ntnx_vms:
@@ -94,11 +93,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : decrease the value for memory_gb while while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: decreased the value for memory_gb while VM is on "
+ success_msg: " Success: returned error as expected "
- name: decrease values for cores_per_vcpu without force_power_off and vm is on
ntnx_vms:
@@ -110,11 +109,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : decrease the value for cores_per_vcpu while while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: decreased the value for cores_per_vcpu while VM is on "
+ success_msg: " Success: returned error as expected "
###############################################################
- debug:
msg: Start negative update scenarios tests for disks
@@ -134,9 +133,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the disk that contains the image with SCSI bus type '
- success_msg: ' Success: returned error as expected '
-
+ fail_msg: " Fail: decreasing the size of the disk that contains the image with SCSI bus type "
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the SCSI disk with storage container
ntnx_vms:
@@ -152,8 +150,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the SCSI disk with storage container '
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: decreasing the size of the SCSI disk with storage container "
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the empty ide cdrom #error
ntnx_vms:
@@ -171,8 +169,8 @@
- result.msg == 'Unsupported operation: Cannot resize empty cdrom.'
- result.changed == false
- result.failed == true
- fail_msg: ' Fail: change the size of the empty CDROM'
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: change the size of the empty CDROM"
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the pci disk
ntnx_vms:
@@ -188,8 +186,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the pci disk'
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: decreasing the size of the pci disk"
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the sata disk
ntnx_vms:
@@ -205,8 +203,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the sata disk'
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: decreasing the size of the sata disk"
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the SCSI disk
ntnx_vms:
@@ -222,8 +220,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the SCSI disk'
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: decreasing the size of the SCSI disk"
+ success_msg: " Success: returned error as expected "
- name: Update VM by decreasing the size of the IDE disk
ntnx_vms:
@@ -239,8 +237,8 @@
assert:
that:
- result.msg == ' Unsupported operation: Unable to decrease disk size.'
- fail_msg: ' Fail: decreasing the size of the IDE disk'
- success_msg: ' Success: returned error as expected '
+ fail_msg: " Fail: decreasing the size of the IDE disk"
+ success_msg: " Success: returned error as expected "
################
- name: Update VM by change the bus type of ide disk
ntnx_vms:
@@ -257,8 +255,8 @@
that:
- result.msg == ' parameters are mutually exclusive: uuid|bus found in disks '
- result.failed == True
- success_msg: ' Success: returned error as expected '
- fail_msg: ' Fail: Update VM by change the bus type of ide disk successfully '
+ success_msg: " Success: returned error as expected "
+ fail_msg: " Fail: Update VM by changing the bus type of IDE disk successfully "
############
- name: Update VM by adding IDE disk while vm is on
ntnx_vms:
@@ -273,11 +271,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by add ide disk while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: updated VM by adding IDE disk while VM is on "
+ success_msg: " Success: returned error as expected "
- name: Update VM by adding SATA disk while vm is on
ntnx_vms:
@@ -292,11 +290,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by add SATA disk while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: updated VM by adding SATA disk while VM is on "
+ success_msg: " Success: returned error as expected "
#############
- name: Update VM by removing IDE disks while vm is on
ntnx_vms:
@@ -309,11 +307,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by by removing IDE disks while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail: updated VM by removing IDE disks while VM is on "
+ success_msg: " Success: returned error as expected "
- name: Update VM by removing IDE disks while vm is on
ntnx_vms:
@@ -327,11 +325,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by by removing IDE disks while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail : update vm by removing IDE disks while vm is on "
+ success_msg: " Success: returned error as expected "
- name: Update VM by removing PCI disks while vm is on
ntnx_vms:
@@ -345,11 +343,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by by removing PCI disks while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail : update vm by removing PCI disks while vm is on "
+ success_msg: " Success: returned error as expected "
- name: Update VM by removing SATA disks while vm is on
ntnx_vms:
@@ -363,11 +361,11 @@
- name: Update Status
assert:
that:
- - result.failed == True
- - result.changed == false
- - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
- fail_msg: 'Fail : update vm by by removing SATA disks while vm is on '
- success_msg: ' Success: returned error as expected '
+ - result.failed == True
+ - result.changed == false
+ - result.msg == "To make these changes, the VM should be restarted, but 'force_power_off' is False"
+ fail_msg: "Fail : update vm by removing SATA disks while vm is on "
+ success_msg: " Success: returned error as expected "
###########################################################
- name: Delete created vm's
ntnx_vms:
@@ -382,5 +380,5 @@
- result.response.status == 'SUCCEEDED'
- result.vm_uuid
- result.task_uuid
- fail_msg: 'Fail: Unable to delete created vm '
- success_msg: 'Success: Vm deleted sucessfully'
+ fail_msg: "Fail: Unable to delete created vm "
+ success_msg: "Success: Vm deleted successfully"
diff --git a/tests/integration/targets/nutanix_vms/tasks/vm_operations.yml b/tests/integration/targets/nutanix_vms/tasks/vm_operations.yml
index de2a1304a..38fb9edc6 100644
--- a/tests/integration/targets/nutanix_vms/tasks/vm_operations.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/vm_operations.yml
@@ -2,19 +2,19 @@
msg: Start testing VM with different operations
- set_fact:
- todelete: []
+ todelete: []
- name: VM with minimum requirements
ntnx_vms:
- state: present
- name: integration_test_opperations_vm
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "DISK"
- clone_image:
- name: "{{ ubuntu }}"
- bus: "SCSI"
- size_gb: 20
+ state: present
+ name: integration_test_operations_vm
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "DISK"
+ clone_image:
+ name: "{{ ubuntu }}"
+ bus: "SCSI"
+ size_gb: 20
register: vm
ignore_errors: true
@@ -23,22 +23,22 @@
that:
- vm.response is defined
- vm.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with minimum requirements '
- success_msg: ' VM with minimum requirements created successfully '
+ fail_msg: " Unable to create VM with minimum requirements "
+ success_msg: " VM with minimum requirements created successfully "
############################################
- name: VM with minimum requirements with check mode
ntnx_vms:
- state: present
- name: integration_test_opperations_vm
- cluster:
- name: "{{ cluster.name }}"
- disks:
- - type: "DISK"
- clone_image:
- name: "{{ ubuntu }}"
- bus: "SCSI"
- size_gb: 20
+ state: present
+ name: integration_test_operations_vm
+ cluster:
+ name: "{{ cluster.name }}"
+ disks:
+ - type: "DISK"
+ clone_image:
+ name: "{{ ubuntu }}"
+ bus: "SCSI"
+ size_gb: 20
register: result
ignore_errors: true
check_mode: yes
@@ -50,13 +50,13 @@
- result.changed == false
- result.failed == false
- result.task_uuid != ""
- success_msg: ' Success: returned as expected '
- fail_msg: ' Fail '
+ success_msg: " Success: returned as expected "
+ fail_msg: " Fail "
###########################################
- name: hard power off the vm
ntnx_vms:
- vm_uuid: "{{ vm.vm_uuid }}"
- state: hard_poweroff
+ vm_uuid: "{{ vm.vm_uuid }}"
+ state: hard_poweroff
register: result
ignore_errors: true
@@ -66,13 +66,13 @@
- result.response is defined
- result.response.status.state == 'COMPLETE'
- result.response.status.resources.power_state == 'OFF'
- fail_msg: ' Unable to hard power off the vm '
- success_msg: ' VM powerd off successfully '
+ fail_msg: " Unable to hard power off the vm "
+ success_msg: " VM was powered off successfully "
# ###########################################
- name: power on the vm
ntnx_vms:
- state: power_on
- vm_uuid: "{{ vm.vm_uuid }}"
+ state: power_on
+ vm_uuid: "{{ vm.vm_uuid }}"
register: result
ignore_errors: true
@@ -82,13 +82,13 @@
- result.response is defined
- result.response.status.state == 'COMPLETE'
- result.response.status.resources.power_state == 'ON'
- fail_msg: ' Unable to power on vm '
- success_msg: ' VM powerd on successfully '
+ fail_msg: " Unable to power on vm "
+ success_msg: " VM was powered on successfully "
##########################################
- name: power on the vm while it's on
ntnx_vms:
- state: power_on
- vm_uuid: "{{ vm.vm_uuid }}"
+ state: power_on
+ vm_uuid: "{{ vm.vm_uuid }}"
register: result
ignore_errors: true
@@ -96,8 +96,8 @@
assert:
that:
- result.msg == "Nothing to change"
- success_msg: ' Success: returned msg as expected '
- fail_msg: ' Fail '
+ success_msg: " Success: returned msg as expected "
+ fail_msg: " Fail "
##########################################
# - name: soft shut down the vm
# ntnx_vms:
@@ -119,7 +119,7 @@
# - name: VM with minimum requirements and soft_shutdown
# ntnx_vms:
# state: present
-# name: integration_test_opperations_vm
+# name: integration_test_operations_vm
# operation: soft_shutdown
# cluster:
# name: "{{ cluster.name }}"
@@ -147,10 +147,10 @@
- name: Create VM with minimum requirements with hard_poweroff operation
ntnx_vms:
- state: hard_poweroff
- name: integration_test_opperations_vm
- cluster:
- name: "{{ cluster.name }}"
+ state: hard_poweroff
+ name: integration_test_operations_vm
+ cluster:
+ name: "{{ cluster.name }}"
register: result
ignore_errors: true
@@ -161,19 +161,19 @@
- result.response.status.state == 'COMPLETE'
- result.response.status.resources.power_state == 'OFF'
- result.response.status.resources.power_state_mechanism.mechanism == 'HARD'
- fail_msg: ' Unable to create VM with minimum requirements with hard_poweroff operation '
- success_msg: ' VM with minimum requirements and hard_poweroff state created successfully '
+ fail_msg: " Unable to create VM with minimum requirements with hard_poweroff operation "
+ success_msg: " VM with minimum requirements and hard_poweroff state created successfully "
- set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
- name: Create VM with minimum requirements with hard_poweroff operation without wait
ntnx_vms:
- state: hard_poweroff
- name: integration_test_opperations_vm_111
- cluster:
- name: "{{ cluster.name }}"
- wait: false
+ state: hard_poweroff
+ name: integration_test_operations_vm_111
+ cluster:
+ name: "{{ cluster.name }}"
+ wait: false
register: result
ignore_errors: true
@@ -184,23 +184,23 @@
- result.response.status.state == 'COMPLETE' or result.response.status.state == 'PENDING'
- result.vm_uuid
- result.task_uuid
- fail_msg: ' Unable to create VM with minimum requirements with hard_poweroff operation '
- success_msg: ' VM with minimum requirements and hard_poweroff state created successfully '
+ fail_msg: " Unable to create VM with minimum requirements with hard_poweroff operation "
+ success_msg: " VM with minimum requirements and hard_poweroff state created successfully "
- set_fact:
- todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
+ todelete: '{{ todelete + [ result["response"]["metadata"]["uuid"] ] }}'
when: result.response.status.state == 'COMPLETE'
- name: Delete all Created VMs
ntnx_vms:
- state: absent
- vm_uuid: '{{ item }}'
- loop: '{{ todelete }}'
+ state: absent
+ vm_uuid: "{{ item }}"
+ loop: "{{ todelete }}"
- name: Delete all Created VMs
ntnx_vms:
- state: absent
- vm_uuid: '{{ vm.vm_uuid }}'
+ state: absent
+ vm_uuid: "{{ vm.vm_uuid }}"
- set_fact:
- todelete: []
+ todelete: []
diff --git a/tests/integration/targets/nutanix_vms/tasks/vm_update.yml b/tests/integration/targets/nutanix_vms/tasks/vm_update.yml
index 86fed5585..4e7c0b9c7 100644
--- a/tests/integration/targets/nutanix_vms/tasks/vm_update.yml
+++ b/tests/integration/targets/nutanix_vms/tasks/vm_update.yml
@@ -608,4 +608,4 @@
- result.vm_uuid
- result.task_uuid
fail_msg: "Fail: Unable to delete created vm "
- success_msg: "Success: Vm deleted sucessfully"
+ success_msg: "Success: Vm deleted successfully"
diff --git a/tests/integration/targets/nutanix_vms_info/tasks/list_vms.yml b/tests/integration/targets/nutanix_vms_info/tasks/list_vms.yml
index 05b029742..fcf5874c4 100644
--- a/tests/integration/targets/nutanix_vms_info/tasks/list_vms.yml
+++ b/tests/integration/targets/nutanix_vms_info/tasks/list_vms.yml
@@ -1,7 +1,7 @@
- set_fact:
todelete: []
-- name: Creat another VM with same name
+- name: Create another VM with same name
ntnx_vms:
name: "{{ vm.name }}"
cluster:
@@ -15,8 +15,8 @@
that:
- output.response is defined
- output.response.status.state == 'COMPLETE'
- fail_msg: ' Unable to create VM with minimum requirements '
- success_msg: ' VM with minimum requirements created successfully '
+ fail_msg: " Unable to create VM with minimum requirements "
+ success_msg: " VM with minimum requirements created successfully "
- set_fact:
todelete: '{{ todelete + [ output["response"]["metadata"]["uuid"] ] }}'
@@ -46,7 +46,6 @@
register: result
ignore_errors: True
-
- name: Listing Status
assert:
that:
@@ -87,8 +86,8 @@
- name: Delete all Created VMs
ntnx_vms:
state: absent
- vm_uuid: '{{ item }}'
+ vm_uuid: "{{ item }}"
register: result
- loop: '{{ todelete }}'
+ loop: "{{ todelete }}"
- set_fact:
todelete: []
diff --git a/tests/integration/targets/nutanix_vpcs/tasks/create_vpcs.yml b/tests/integration/targets/nutanix_vpcs/tasks/create_vpcs.yml
index 3cc3113d4..1061ab9bf 100644
--- a/tests/integration/targets/nutanix_vpcs/tasks/create_vpcs.yml
+++ b/tests/integration/targets/nutanix_vpcs/tasks/create_vpcs.yml
@@ -48,7 +48,7 @@
name: vpc_with_routable_ips
routable_ips:
- network_ip: "{{ routable_ips.network_ip }}"
- network_prefix: "{{ routable_ips.network_prefix }}"
+ network_prefix: "{{ routable_ips.network_prefix }}"
register: result
ignore_errors: True
@@ -71,7 +71,7 @@
- subnet_name: "{{ external_nat_subnet.name }}"
routable_ips:
- network_ip: "{{ routable_ips.network_ip_2 }}"
- network_prefix: "{{ routable_ips.network_prefix_2 }}"
+ network_prefix: "{{ routable_ips.network_prefix_2 }}"
register: result
ignore_errors: True
@@ -95,7 +95,6 @@
register: result
ignore_errors: True
-
- set_fact:
todelete: "{{ todelete + [ result.vpc_uuid ] }}"
##########################################################
@@ -110,16 +109,16 @@
- set_fact:
todelete: []
##########################################################
-- name: Create VPC with all specfactions
+- name: Create VPC with all specifications
ntnx_vpcs:
state: present
- name: vpc_with_add_specfactions
+ name: vpc_with_add_specifications
external_subnets:
- subnet_name: "{{ external_nat_subnet.name }}"
dns_servers: "{{ dns_servers }}"
routable_ips:
- network_ip: "{{ routable_ips.network_ip }}"
- network_prefix: "{{ routable_ips.network_prefix }}"
+ network_prefix: "{{ routable_ips.network_prefix }}"
register: result
ignore_errors: True
@@ -128,8 +127,8 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: " Unable to create vpc all specfactions "
- success_msg: " VPC with all specfactions created successfully "
+ fail_msg: " Unable to create vpc all specifications "
+ success_msg: " VPC with all specifications created successfully "
- set_fact:
todelete: "{{ todelete + [ result.vpc_uuid ] }}"
diff --git a/tests/integration/targets/nutanix_vpcs/tasks/delete_vpc.yml b/tests/integration/targets/nutanix_vpcs/tasks/delete_vpc.yml
index 7d0339fa6..977cce555 100644
--- a/tests/integration/targets/nutanix_vpcs/tasks/delete_vpc.yml
+++ b/tests/integration/targets/nutanix_vpcs/tasks/delete_vpc.yml
@@ -1,14 +1,14 @@
---
-- name: Create VPC with all specfactions
+- name: Create VPC with all specifications
ntnx_vpcs:
state: present
- name: vpc_with_add_specfactions
+ name: vpc_with_add_specifications
external_subnets:
- subnet_name: "{{ external_nat_subnet.name }}"
dns_servers: "{{ dns_servers }}"
routable_ips:
- network_ip: "{{ routable_ips.network_ip }}"
- network_prefix: "{{ routable_ips.network_prefix }}"
+ network_prefix: "{{ routable_ips.network_prefix }}"
register: result
ignore_errors: True
@@ -17,9 +17,8 @@
that:
- result.response is defined
- result.response.status.state == 'COMPLETE'
- fail_msg: " Unable to create vpc all specfactions "
- success_msg: " VPC with all specfactions created successfully "
-
+ fail_msg: " Unable to create vpc all specifications "
+ success_msg: " VPC with all specifications created successfully "
- name: Delete vpc
ntnx_vpcs:
diff --git a/tests/unit/plugins/module_utils/test_entity.py b/tests/unit/plugins/module_utils/test_entity.py
index 419263a2b..3529442b3 100644
--- a/tests/unit/plugins/module_utils/test_entity.py
+++ b/tests/unit/plugins/module_utils/test_entity.py
@@ -132,7 +132,7 @@ def test_negative_list_action(self):
self.assertEqual(result["request"], req)
self.assertEqual(entity.headers.get("Authorization"), None)
- def test_raed_action(self):
+ def test_read_action(self):
uuid = "test_uuid"
req = {
"method": "GET",