[FEATURE] databricks_mount to unify mounts across clouds #497
This was referenced Feb 10, 2021
nfx pushed a commit that referenced this issue on Nov 2, 2021:
The resource implements a generic, cloud-independent resource for mounts with two approaches.

First, you specify two main options: `uri` - the URL to mount (service dependent), and `extra_configs` - a map of Spark configurations necessary to mount the resource. This is the most flexible way to perform mounts - see the example for Azure passthrough mounts in the documentation. For example:

```
resource "databricks_mount" "this" {
  name = "tf-test"
  uri  = "abfss://${local.container}@${local.storage_acc}.dfs.core.windows.net"
  extra_configs = {
    "fs.azure.account.auth.type" : "OAuth",
    "fs.azure.account.oauth.provider.type" : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id" : local.client_id,
    "fs.azure.account.oauth2.client.secret" : "{{secrets/${local.secret_scope}/${local.secret_key}}}",
    "fs.azure.account.oauth2.client.endpoint" : "https://login.microsoftonline.com/${local.tenant_id}/oauth2/token",
    "fs.azure.createRemoteFileSystemDuringInitialization" : "false",
  }
}
```

Second, you specify a block specific to the cloud storage, with almost the same content as the existing mount resources. Currently we support `abfss`, `adl`, `wasbs`, and `s3` blocks. These blocks make it easier to migrate existing resources and also simplify configuration, as users won't need to care about configuration options with fixed values, etc. For example (same effect as the previous example):

```
resource "databricks_mount" "this2" {
  name = "tf-test"
  abfss {
    container_name         = local.container
    storage_account_name   = local.storage_acc
    tenant_id              = local.tenant_id
    client_id              = local.client_id
    client_secret_scope    = local.secret_scope
    client_secret_key      = local.secret_key
    initialize_file_system = false
  }
}
```

This should fix #497
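To illustrate the block-based approach on another cloud, here is a minimal sketch of an `s3` block. The bucket name and the `databricks_instance_profile` reference are assumptions for illustration; the commit message above only names the block, not its arguments, so check the provider documentation for the exact schema in your version:

```
# Hypothetical sketch: mounting an S3 bucket via the `s3` block,
# authenticating with an instance profile attached to the mounting cluster.
resource "databricks_mount" "s3_example" {
  name = "tf-test-s3"
  s3 {
    bucket_name      = "my-example-bucket"                      # assumed bucket name
    instance_profile = databricks_instance_profile.this.id      # assumed pre-existing resource
  }
}
```

As with the `abfss` example, the block spares the user from spelling out the underlying Spark configuration keys by hand.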
Merged
michael-berk pushed a commit to michael-berk/terraform-provider-databricks that referenced this issue on Feb 15, 2023:
All of the mounting should go through a unified resource. In the scope of this issue, we'll come up with a design, an approximate resource config layout, and a way to support all of the existing mounts until 0.4.x.
`dbutils.fs.updateMount` to re-mount Azure Storage after rotating SPN secret #513
`extra_configs` parameter