Merge pull request #17 from data-platform-hq/fix_refactor

feat: module refactor

owlleg6 authored Apr 9, 2023
2 parents 0a3a93d + 63970cb commit 3b354d9
Showing 5 changed files with 4 additions and 71 deletions.
7 changes: 1 addition & 6 deletions README.md
@@ -117,8 +117,7 @@ No modules.
 | [databricks_token.pat](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/token) | resource |
 | [databricks_user.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/user) | resource |
 | [azurerm_role_assignment.this](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
-| [databricks_cluster_policy.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster_policy) | resource |
-| [databricks_cluster.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster) | resource |
+| [databricks_cluster.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster) | resource |
 | [databricks_mount.adls](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/mount) | resource |
 | [databricks_secret_scope.main](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret_scope) | resource |
 | [databricks_secret.main](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret) | resource |
@@ -134,11 +133,9 @@ No modules.
 | <a name="input_sp_key_secret_name"></a> [sp\_key\_secret\_name](#input\_sp\_key\_secret\_name) | The name of Azure Key Vault secret that contains client secret of Service Principal to access in Azure Key Vault | `string` | n/a | yes |
 | <a name="input_tenant_id_secret_name"></a> [tenant\_id\_secret\_name](#input\_tenant\_id\_secret\_name) | The name of Azure Key Vault secret that contains tenant ID secret of Service Principal to access in Azure Key Vault | `string` | n/a | yes |
 | <a name="input_key_vault_id"></a> [key\_vault\_id](#input\_key\_vault\_id) | ID of the Key Vault instance where the Secret resides | `string` | n/a | yes |
-| <a name="input_sku"></a> [sku](#input\_sku) | The sku to use for the Databricks Workspace: [standard \| premium \| trial] | `string` | "standard" | no |
 | <a name="input_pat_token_lifetime_seconds"></a> [pat\_token\_lifetime\_seconds](#input\_pat\_token\_lifetime\_seconds) | The lifetime of the token, in seconds. If no lifetime is specified, the token remains valid indefinitely | `number` | 315569520 | no |
 | <a name="input_users"></a> [users](#input\_users) | List of users to access Databricks | `list(string)` | [] | no |
 | <a name="input_permissions"></a> [permissions](#input\_permissions) | Databricks Workspace permission maps | `list(map(string))` | <pre> [{ <br> object_id = null <br> role = null <br> }] </pre> | no |
-| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN_USE permissions on it to certain custom groups | <pre>list(object({<br> name = string<br> can_use = list(string)<br> definition = any<br> assigned = bool<br>}))</pre> | <pre>[{<br> name = null<br> can_use = null<br> definition = null<br> assigned = false<br>}]</pre> | no |
 | <a name="input_cluster_nodes_availability"></a> [cluster\_nodes\_availability](#input\_cluster\_nodes\_availability) | Availability type used for all subsequent nodes past the first_on_demand ones: [SPOT_AZURE \| SPOT_WITH_FALLBACK_AZURE \| ON_DEMAND_AZURE] | `string` | null | no |
 | <a name="input_first_on_demand"></a> [first\_on\_demand](#input\_first\_on\_demand) | The first first_on_demand nodes of the cluster will be placed on on-demand instances: [[ \:number ]] | `number` | 0 | no |
 | <a name="input_spot_bid_max_price"></a> [spot\_bid\_max\_price](#input\_spot\_bid\_max\_price) | The max price for Azure spot instances. Use -1 to specify lowest price | `number` | -1 | no |
@@ -163,8 +160,6 @@ No modules.
 | ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- |
 | <a name="output_token"></a> [token](#output\_token) | Databricks Personal Authorization Token |
 | <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | Databricks Cluster Id |
-| <a name="output_cluster_policies_object"></a> [cluster\_policies\_object](#output\_cluster\_policies\_object) | Databricks Cluster Policies object map |
-| <a name="output_secret_scope_object"></a> [secret\_scope\_object](#output\_secret\_scope\_object) | Databricks-managed Secret Scope object map to create ACLs |
<!-- END_TF_DOCS -->

## License
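For orientation, the refactored module (post-merge) can be consumed roughly like this; a minimal sketch, assuming a registry-style source address and illustrative variable values that are not part of this diff:

```hcl
# Hypothetical caller of the refactored module; source address and all
# values are illustrative, not taken from this commit.
module "databricks" {
  source = "data-platform-hq/databricks/azurerm" # assumed address, verify in the registry

  sp_key_secret_name    = "sp-client-secret" # assumed Key Vault secret names
  tenant_id_secret_name = "sp-tenant-id"
  key_vault_id          = azurerm_key_vault.example.id

  users = ["first.user@example.com"]

  permissions = [
    { object_id = "00000000-0000-0000-0000-000000000000", role = "Contributor" }
  ]
}
```

Note that the `sku` and `custom_cluster_policies` inputs no longer exist after this commit, so callers passing them must drop those arguments.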
14 changes: 1 addition & 13 deletions main.tf
@@ -19,7 +19,7 @@ resource "databricks_token" "pat" {
 }
 
 resource "databricks_user" "this" {
-  for_each  = var.sku == "premium" ? [] : toset(var.users)
+  for_each  = toset(var.users)
   user_name = each.value
   lifecycle { ignore_changes = [external_id] }
 }
@@ -34,24 +34,12 @@ resource "azurerm_role_assignment" "this" {
   principal_id = each.value.object_id
 }
 
-resource "databricks_cluster_policy" "this" {
-  for_each = var.sku == "premium" ? {
-    for param in var.custom_cluster_policies : (param.name) => param.definition
-    if param.definition != null
-  } : {}
-
-  name       = each.key
-  definition = jsonencode(each.value)
-}
-
 resource "databricks_cluster" "this" {
   cluster_name   = var.custom_default_cluster_name == null ? "shared autoscaling" : var.custom_default_cluster_name
   spark_version  = var.spark_version
   spark_conf     = var.spark_conf
   spark_env_vars = var.spark_env_vars
 
-  policy_id = var.sku == "premium" ? one([for policy in var.custom_cluster_policies : databricks_cluster_policy.this[policy.name].id if policy.assigned]) : null
-
   data_security_mode      = var.data_security_mode
   node_type_id            = var.node_type
   autotermination_minutes = var.autotermination_minutes
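An aside on the surviving `cluster_name` expression: the null-check ternary could equivalently be written with Terraform's `coalesce` function. This is purely an equivalent form, not part of this commit, and the two differ in one edge case: `coalesce` also skips empty strings, not just `null`.

```hcl
# Equivalent to:
#   var.custom_default_cluster_name == null ? "shared autoscaling" : var.custom_default_cluster_name
# except that coalesce additionally treats "" as unset.
cluster_name = coalesce(var.custom_default_cluster_name, "shared autoscaling")
```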
17 changes: 0 additions & 17 deletions outputs.tf
@@ -7,20 +7,3 @@ output "cluster_id" {
   value       = databricks_cluster.this.id
   description = "Databricks Cluster Id"
 }
-
-output "cluster_policies_object" {
-  value = [for policy in var.custom_cluster_policies : {
-    id      = databricks_cluster_policy.this[policy.name].id
-    name    = databricks_cluster_policy.this[policy.name].name
-    can_use = policy.can_use
-  } if policy.definition != null && var.sku == "premium"]
-  description = "Databricks Cluster Policies object map"
-}
-
-output "secret_scope_object" {
-  value = [for param in var.secret_scope : {
-    scope_name = databricks_secret_scope.this[param.scope_name].name
-    acl        = param.acl
-  } if param.acl != null]
-  description = "Databricks-managed Secret Scope object map to create ACLs"
-}
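With the `secret_scope_object` output gone, consumers that previously built ACLs from it can declare them directly against the scope name using the provider's `databricks_secret_acl` resource. A minimal sketch; the group name is illustrative, not from this diff:

```hcl
# Hypothetical replacement for ACLs previously driven by the removed output.
resource "databricks_secret_acl" "example" {
  scope      = "main"           # scope name created by this module
  principal  = "data-engineers" # assumed group name, not from this diff
  permission = "READ"
}
```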
4 changes: 2 additions & 2 deletions secrets.tf
@@ -14,7 +14,7 @@ locals {
 # Secret Scope with SP secrets for mounting Azure Data Lake Storage
 resource "databricks_secret_scope" "main" {
   name                     = "main"
-  initial_manage_principal = var.sku == "premium" ? null : "users"
+  initial_manage_principal = "users"
 }
 
 resource "databricks_secret" "main" {
@@ -33,7 +33,7 @@ resource "databricks_secret_scope" "this" {
   }
 
   name                     = each.key
-  initial_manage_principal = var.sku == "premium" ? null : "users"
+  initial_manage_principal = "users"
 }
 
 resource "databricks_secret" "this" {
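After this change both secret scopes are created the same way regardless of workspace tier, since the premium-only `null` branch (provider-default, ACL-managed) disappears along with `var.sku`. The resulting resource reduces to:

```hcl
resource "databricks_secret_scope" "main" {
  name                     = "main"
  # Every workspace user can now manage this scope; the former
  # premium-tier null branch is gone together with var.sku.
  initial_manage_principal = "users"
}
```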
33 changes: 0 additions & 33 deletions variables.tf
@@ -23,12 +23,6 @@ variable "key_vault_id" {
   description = "ID of the Key Vault instance where the Secret resides"
 }
 
-variable "sku" {
-  type        = string
-  description = "The sku to use for the Databricks Workspace: [standard|premium|trial]"
-  default     = "standard"
-}
-
 variable "pat_token_lifetime_seconds" {
   type        = number
   description = "The lifetime of the token, in seconds. If no lifetime is specified, the token remains valid indefinitely"
@@ -52,33 +46,6 @@ variable "permissions" {
   ]
 }
-
-# Cluster policy variables
-variable "custom_cluster_policies" {
-  type = list(object({
-    name       = string
-    can_use    = list(string)
-    definition = any
-    assigned   = bool
-  }))
-  description = <<-EOT
-  Provides an ability to create custom cluster policy, assign it to cluster and grant CAN_USE permissions on it to certain custom groups
-  name - name of custom cluster policy to create
-  can_use - list of string, where values are custom group names, there groups have to be created with Terraform;
-  definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value;
-  assigned - boolean flag which assigns policy to default 'shared autoscaling' cluster, only single custom policy could be assigned;
-  EOT
-  default = [{
-    name       = null
-    can_use    = null
-    definition = null
-    assigned   = false
-  }]
-  validation {
-    condition     = length([for policy in var.custom_cluster_policies : policy.assigned if policy.assigned]) <= 1
-    error_message = "Only single cluster policy assignment allowed. Please set 'assigned' parameter to 'true' for exact one or none policy"
-  }
-}
 
 # Shared autoscaling cluster config variables
 variable "cluster_nodes_availability" {
   type = string
