Merge pull request #49 from data-platform-hq/feat/policy_permissions_resource

feat: resource to grant CAN_USE permissions to groups
owlleg6 authored Sep 25, 2024
2 parents b3afd8c + 680b48d commit 32941cc
Showing 2 changed files with 29 additions and 11 deletions.
23 changes: 12 additions & 11 deletions README.md
@@ -193,6 +193,7 @@ No modules.
| [databricks_mount.adls](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/mount) | resource |
| [databricks_permissions.clusters](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.sql_endpoint](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
+| [databricks_permissions.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_secret.main](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret) | resource |
| [databricks_secret.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret) | resource |
| [databricks_secret_acl.external](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret_acl) | resource |
@@ -213,30 +214,30 @@ No modules.

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| <a name="input_clusters"></a> [clusters](#input\_clusters) | Set of objects with parameters to configure Databricks clusters and assign permissions to it for certain custom groups | <pre>set(object({<br> cluster_name = string<br> spark_version = optional(string, "13.3.x-scala2.12")<br> spark_conf = optional(map(any), {})<br> cluster_conf_passthrought = optional(bool, false)<br> spark_env_vars = optional(map(any), {})<br> data_security_mode = optional(string, "USER_ISOLATION")<br> node_type_id = optional(string, "Standard_D3_v2")<br> autotermination_minutes = optional(number, 30)<br> min_workers = optional(number, 1)<br> max_workers = optional(number, 2)<br> availability = optional(string, "ON_DEMAND_AZURE")<br> first_on_demand = optional(number, 0)<br> spot_bid_max_price = optional(number, 1)<br> cluster_log_conf_destination = optional(string, null)<br> init_scripts_workspace = optional(set(string), [])<br> init_scripts_volumes = optional(set(string), [])<br> init_scripts_dbfs = optional(set(string), [])<br> init_scripts_abfss = optional(set(string), [])<br> single_user_name = optional(string, null)<br> single_node_enable = optional(bool, false)<br> custom_tags = optional(map(string), {})<br> permissions = optional(set(object({<br> group_name = string<br> permission_level = string<br> })), [])<br> pypi_library_repository = optional(set(string), [])<br> maven_library_repository = optional(set(object({<br> coordinates = string<br> exclusions = set(string)<br> })), [])<br> }))</pre> | `[]` | no |
+| <a name="input_clusters"></a> [clusters](#input\_clusters) | Set of objects with parameters to configure Databricks clusters and assign permissions to it for certain custom groups | <pre>set(object({<br/> cluster_name = string<br/> spark_version = optional(string, "13.3.x-scala2.12")<br/> spark_conf = optional(map(any), {})<br/> cluster_conf_passthrought = optional(bool, false)<br/> spark_env_vars = optional(map(any), {})<br/> data_security_mode = optional(string, "USER_ISOLATION")<br/> node_type_id = optional(string, "Standard_D3_v2")<br/> autotermination_minutes = optional(number, 30)<br/> min_workers = optional(number, 1)<br/> max_workers = optional(number, 2)<br/> availability = optional(string, "ON_DEMAND_AZURE")<br/> first_on_demand = optional(number, 0)<br/> spot_bid_max_price = optional(number, 1)<br/> cluster_log_conf_destination = optional(string, null)<br/> init_scripts_workspace = optional(set(string), [])<br/> init_scripts_volumes = optional(set(string), [])<br/> init_scripts_dbfs = optional(set(string), [])<br/> init_scripts_abfss = optional(set(string), [])<br/> single_user_name = optional(string, null)<br/> single_node_enable = optional(bool, false)<br/> custom_tags = optional(map(string), {})<br/> permissions = optional(set(object({<br/> group_name = string<br/> permission_level = string<br/> })), [])<br/> pypi_library_repository = optional(set(string), [])<br/> maven_library_repository = optional(set(object({<br/> coordinates = string<br/> exclusions = set(string)<br/> })), [])<br/> }))</pre> | `[]` | no |
| <a name="input_create_databricks_access_policy_to_key_vault"></a> [create\_databricks\_access\_policy\_to\_key\_vault](#input\_create\_databricks\_access\_policy\_to\_key\_vault) | Boolean flag to enable creation of Key Vault Access Policy for Databricks Global Service Principal. | `bool` | `true` | no |
-| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN\_USE permissions on it to certain custom groups<br>name - name of custom cluster policy to create<br>can\_use - list of string, where values are custom group names, there groups have to be created with Terraform;<br>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br> name = string<br> can_use = list(string)<br> definition = any<br> }))</pre> | <pre>[<br> {<br> "can_use": null,<br> "definition": null,<br> "name": null<br> }<br>]</pre> | no |
-| <a name="input_default_cluster_policies_override"></a> [default\_cluster\_policies\_override](#input\_default\_cluster\_policies\_override) | Provides an ability to override default cluster policy<br>name - name of cluster policy to override<br>family\_id - family id of corresponding policy<br>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br> name = string<br> family_id = string<br> definition = any<br> }))</pre> | <pre>[<br> {<br> "definition": null,<br> "family_id": null,<br> "name": null<br> }<br>]</pre> | no |
+| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN\_USE permissions on it to certain custom groups<br/>name - name of custom cluster policy to create<br/>can\_use - list of string, where values are custom group names, there groups have to be created with Terraform;<br/>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br/> name = string<br/> can_use = list(string)<br/> definition = any<br/> }))</pre> | <pre>[<br/> {<br/> "can_use": null,<br/> "definition": null,<br/> "name": null<br/> }<br/>]</pre> | no |
+| <a name="input_default_cluster_policies_override"></a> [default\_cluster\_policies\_override](#input\_default\_cluster\_policies\_override) | Provides an ability to override default cluster policy<br/>name - name of cluster policy to override<br/>family\_id - family id of corresponding policy<br/>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br/> name = string<br/> family_id = string<br/> definition = any<br/> }))</pre> | <pre>[<br/> {<br/> "definition": null,<br/> "family_id": null,<br/> "name": null<br/> }<br/>]</pre> | no |
| <a name="input_global_databricks_sp_object_id"></a> [global\_databricks\_sp\_object\_id](#input\_global\_databricks\_sp\_object\_id) | Global 'AzureDatabricks' SP object id. Used to create Key Vault Access Policy for Secret Scope | `string` | `"9b38785a-6e08-4087-a0c4-20634343f21f"` | no |
-| <a name="input_iam_account_groups"></a> [iam\_account\_groups](#input\_iam\_account\_groups) | List of objects with group name and entitlements for this group | <pre>list(object({<br> group_name = optional(string)<br> entitlements = optional(list(string))<br> }))</pre> | `[]` | no |
-| <a name="input_iam_workspace_groups"></a> [iam\_workspace\_groups](#input\_iam\_workspace\_groups) | Used to create workspace group. Map of group name and its parameters, such as users and service principals added to the group. Also possible to configure group entitlements. | <pre>map(object({<br> user = optional(list(string))<br> service_principal = optional(list(string))<br> entitlements = optional(list(string))<br> }))</pre> | `{}` | no |
+| <a name="input_iam_account_groups"></a> [iam\_account\_groups](#input\_iam\_account\_groups) | List of objects with group name and entitlements for this group | <pre>list(object({<br/> group_name = optional(string)<br/> entitlements = optional(list(string))<br/> }))</pre> | `[]` | no |
+| <a name="input_iam_workspace_groups"></a> [iam\_workspace\_groups](#input\_iam\_workspace\_groups) | Used to create workspace group. Map of group name and its parameters, such as users and service principals added to the group. Also possible to configure group entitlements. | <pre>map(object({<br/> user = optional(list(string))<br/> service_principal = optional(list(string))<br/> entitlements = optional(list(string))<br/> }))</pre> | `{}` | no |
| <a name="input_ip_rules"></a> [ip\_rules](#input\_ip\_rules) | Map of IP addresses permitted for access to DB | `map(string)` | `{}` | no |
-| <a name="input_key_vault_secret_scope"></a> [key\_vault\_secret\_scope](#input\_key\_vault\_secret\_scope) | Object with Azure Key Vault parameters required for creation of Azure-backed Databricks Secret scope | <pre>list(object({<br> name = string<br> key_vault_id = string<br> dns_name = string<br> tenant_id = string<br> }))</pre> | `[]` | no |
+| <a name="input_key_vault_secret_scope"></a> [key\_vault\_secret\_scope](#input\_key\_vault\_secret\_scope) | Object with Azure Key Vault parameters required for creation of Azure-backed Databricks Secret scope | <pre>list(object({<br/> name = string<br/> key_vault_id = string<br/> dns_name = string<br/> tenant_id = string<br/> }))</pre> | `[]` | no |
| <a name="input_mount_adls_passthrough"></a> [mount\_adls\_passthrough](#input\_mount\_adls\_passthrough) | Boolean flag to use mount options for credentials passthrough. Should be used with mount\_cluster\_name, specified cluster should have option cluster\_conf\_passthrought == true | `bool` | `false` | no |
| <a name="input_mount_cluster_name"></a> [mount\_cluster\_name](#input\_mount\_cluster\_name) | Name of the cluster that will be used during storage mounting. If mount\_adls\_passthrough == true, cluster should also have option cluster\_conf\_passthrought == true | `string` | `null` | no |
| <a name="input_mount_enabled"></a> [mount\_enabled](#input\_mount\_enabled) | Boolean flag that determines whether mount point for storage account filesystem is created | `bool` | `false` | no |
| <a name="input_mount_service_principal_client_id"></a> [mount\_service\_principal\_client\_id](#input\_mount\_service\_principal\_client\_id) | Application(client) Id of Service Principal used to perform storage account mounting | `string` | `null` | no |
| <a name="input_mount_service_principal_secret"></a> [mount\_service\_principal\_secret](#input\_mount\_service\_principal\_secret) | Service Principal Secret used to perform storage account mounting | `string` | `null` | no |
| <a name="input_mount_service_principal_tenant_id"></a> [mount\_service\_principal\_tenant\_id](#input\_mount\_service\_principal\_tenant\_id) | Service Principal tenant id used to perform storage account mounting | `string` | `null` | no |
-| <a name="input_mountpoints"></a> [mountpoints](#input\_mountpoints) | Mountpoints for databricks | <pre>map(object({<br> storage_account_name = string<br> container_name = string<br> }))</pre> | `{}` | no |
+| <a name="input_mountpoints"></a> [mountpoints](#input\_mountpoints) | Mountpoints for databricks | <pre>map(object({<br/> storage_account_name = string<br/> container_name = string<br/> }))</pre> | `{}` | no |
| <a name="input_pat_token_lifetime_seconds"></a> [pat\_token\_lifetime\_seconds](#input\_pat\_token\_lifetime\_seconds) | The lifetime of the token, in seconds. If no lifetime is specified, the token remains valid indefinitely | `number` | `315569520` | no |
-| <a name="input_secret_scope"></a> [secret\_scope](#input\_secret\_scope) | Provides an ability to create custom Secret Scope, store secrets in it and assigning ACL for access management<br>scope\_name - name of Secret Scope to create;<br>acl - list of objects, where 'principal' custom group name, this group is created in 'Premium' module; 'permission' is one of "READ", "WRITE", "MANAGE";<br>secrets - list of objects, where object's 'key' param is created key name and 'string\_value' is a value for it; | <pre>list(object({<br> scope_name = string<br> acl = optional(list(object({<br> principal = string<br> permission = string<br> })))<br> secrets = optional(list(object({<br> key = string<br> string_value = string<br> })))<br> }))</pre> | <pre>[<br> {<br> "acl": null,<br> "scope_name": null,<br> "secrets": null<br> }<br>]</pre> | no |
-| <a name="input_sql_endpoint"></a> [sql\_endpoint](#input\_sql\_endpoint) | Set of objects with parameters to configure SQL Endpoint and assign permissions to it for certain custom groups | <pre>set(object({<br> name = string<br> cluster_size = optional(string, "2X-Small")<br> min_num_clusters = optional(number, 0)<br> max_num_clusters = optional(number, 1)<br> auto_stop_mins = optional(string, "30")<br> enable_photon = optional(bool, false)<br> enable_serverless_compute = optional(bool, false)<br> spot_instance_policy = optional(string, "COST_OPTIMIZED")<br> warehouse_type = optional(string, "PRO")<br> permissions = optional(set(object({<br> group_name = string<br> permission_level = string<br> })), [])<br> }))</pre> | `[]` | no |
+| <a name="input_secret_scope"></a> [secret\_scope](#input\_secret\_scope) | Provides an ability to create custom Secret Scope, store secrets in it and assigning ACL for access management<br/>scope\_name - name of Secret Scope to create;<br/>acl - list of objects, where 'principal' custom group name, this group is created in 'Premium' module; 'permission' is one of "READ", "WRITE", "MANAGE";<br/>secrets - list of objects, where object's 'key' param is created key name and 'string\_value' is a value for it; | <pre>list(object({<br/> scope_name = string<br/> acl = optional(list(object({<br/> principal = string<br/> permission = string<br/> })))<br/> secrets = optional(list(object({<br/> key = string<br/> string_value = string<br/> })))<br/> }))</pre> | <pre>[<br/> {<br/> "acl": null,<br/> "scope_name": null,<br/> "secrets": null<br/> }<br/>]</pre> | no |
+| <a name="input_sql_endpoint"></a> [sql\_endpoint](#input\_sql\_endpoint) | Set of objects with parameters to configure SQL Endpoint and assign permissions to it for certain custom groups | <pre>set(object({<br/> name = string<br/> cluster_size = optional(string, "2X-Small")<br/> min_num_clusters = optional(number, 0)<br/> max_num_clusters = optional(number, 1)<br/> auto_stop_mins = optional(string, "30")<br/> enable_photon = optional(bool, false)<br/> enable_serverless_compute = optional(bool, false)<br/> spot_instance_policy = optional(string, "COST_OPTIMIZED")<br/> warehouse_type = optional(string, "PRO")<br/> permissions = optional(set(object({<br/> group_name = string<br/> permission_level = string<br/> })), [])<br/> }))</pre> | `[]` | no |
| <a name="input_suffix"></a> [suffix](#input\_suffix) | Optional suffix that would be added to the end of resources names. | `string` | `""` | no |
-| <a name="input_system_schemas"></a> [system\_schemas](#input\_system\_schemas) | Set of strings with all possible System Schema names | `set(string)` | <pre>[<br> "access",<br> "billing",<br> "compute",<br> "marketplace",<br> "storage"<br>]</pre> | no |
+| <a name="input_system_schemas"></a> [system\_schemas](#input\_system\_schemas) | Set of strings with all possible System Schema names | `set(string)` | <pre>[<br/> "access",<br/> "billing",<br/> "compute",<br/> "marketplace",<br/> "storage"<br/>]</pre> | no |
| <a name="input_system_schemas_enabled"></a> [system\_schemas\_enabled](#input\_system\_schemas\_enabled) | System Schemas only works with assigned Unity Catalog Metastore. Boolean flag to enabled this feature | `bool` | `false` | no |
| <a name="input_user_object_ids"></a> [user\_object\_ids](#input\_user\_object\_ids) | Map of AD usernames and corresponding object IDs | `map(string)` | `{}` | no |
-| <a name="input_workspace_admins"></a> [workspace\_admins](#input\_workspace\_admins) | Provide users or service principals to grant them Admin permissions in Workspace. | <pre>object({<br> user = list(string)<br> service_principal = list(string)<br> })</pre> | <pre>{<br> "service_principal": null,<br> "user": null<br>}</pre> | no |
+| <a name="input_workspace_admins"></a> [workspace\_admins](#input\_workspace\_admins) | Provide users or service principals to grant them Admin permissions in Workspace. | <pre>object({<br/> user = list(string)<br/> service_principal = list(string)<br/> })</pre> | <pre>{<br/> "service_principal": null,<br/> "user": null<br/>}</pre> | no |

## Outputs

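For illustration, a minimal sketch of how the custom_cluster_policies input described in the table above might be populated so that the new permissions resource has something to grant. The policy name, group names, and policy definition are hypothetical; the groups are assumed to be managed by Terraform elsewhere in the configuration.

custom_cluster_policies = [
  {
    name    = "custom-policy"                      # hypothetical policy name
    can_use = ["data-engineers", "data-analysts"]  # hypothetical Terraform-managed group names

    # Databricks Policy Definition Language as a plain HCL object;
    # the module applies jsonencode() itself.
    definition = {
      "autotermination_minutes" = {
        "type"   = "fixed"
        "value"  = 30
        "hidden" = true
      }
    }
  }
]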
17 changes: 17 additions & 0 deletions cluster.tf
@@ -116,6 +116,23 @@ resource "databricks_cluster_policy" "this" {
  definition = jsonencode(each.value)
}

+resource "databricks_permissions" "this" {
+  for_each = {
+    for param in var.custom_cluster_policies : (param.name) => param.can_use
+    if param.can_use != null
+  }
+
+  cluster_policy_id = databricks_cluster_policy.this[each.key].id
+
+  dynamic "access_control" {
+    for_each = each.value
+    content {
+      group_name       = access_control.value
+      permission_level = "CAN_USE"
+    }
+  }
+}

resource "databricks_cluster_policy" "overrides" {
  for_each = {
    for param in var.default_cluster_policies_override : (param.name) => param
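To read the new resource above: the for_each expression builds a map keyed by policy name whose values are that policy's can_use group list, skipping policies where can_use is null. With the hypothetical input sketched earlier it would evaluate to { "custom-policy" = ["data-engineers", "data-analysts"] }, and the dynamic block would expand to the equivalent of the following static configuration (a sketch for illustration only; the resource label and group names are hypothetical):

resource "databricks_permissions" "custom_policy_can_use" {
  # Grants CAN_USE on the policy created as databricks_cluster_policy.this["custom-policy"]
  cluster_policy_id = databricks_cluster_policy.this["custom-policy"].id

  access_control {
    group_name       = "data-engineers"
    permission_level = "CAN_USE"
  }

  access_control {
    group_name       = "data-analysts"
    permission_level = "CAN_USE"
  }
}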
