
[ISSUE] Issue with databricks_library resource when cluster is automatically deleted #3679

Closed
andreamarin opened this issue Jun 17, 2024 · 2 comments · Fixed by #3909

Configuration

resource "databricks_cluster" "this" {
  provider      = databricks.workspace
  cluster_name  = var.cluster_name
  # ...
}

resource "databricks_permissions" "this" {
  provider   = databricks.workspace

  cluster_id = databricks_cluster.this.cluster_id
  # ...
}

resource "databricks_library" "cluster-library" {
  provider  = databricks.workspace
  for_each = var.cluster_libraries

  cluster_id = databricks_cluster.this.cluster_id

  # ...
}

Expected Behavior

When a cluster is automatically deleted after being terminated for more than 30 days (the Databricks default behaviour), the next `terraform plan` should detect the deletion and propose recreating the cluster and all of the libraries attached to it.

Actual Behavior

The plan does detect that the cluster and the permissions were deleted and need to be recreated, but it fails on the databricks_library resources because the old cluster no longer exists:

Error: cannot read library: Cluster xxxx does not exist
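
For context, providers usually handle this situation by treating a "parent object gone" error during Read as a deletion: the library is dropped from state instead of failing the refresh, so the next plan proposes recreating it together with the cluster. The following is a minimal sketch of that pattern using terraform-plugin-sdk v2 conventions; it is not the Databricks provider's actual code, and `libraryClient`, `LibraryStatus` and `isClusterMissing` are hypothetical names used only for illustration.

package library

import (
	"context"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// libraryClient stands in for the provider's API client (hypothetical).
type libraryClient struct{}

// LibraryStatus would call the Databricks Libraries API; stubbed out here.
func (c *libraryClient) LibraryStatus(ctx context.Context, clusterID string) (string, error) {
	return "INSTALLED", nil
}

// isClusterMissing decides whether an error means the parent cluster is gone.
// A real provider would inspect the API error code (e.g. a 404 or
// RESOURCE_DOES_NOT_EXIST response) rather than matching the message text.
func isClusterMissing(err error) bool {
	return strings.Contains(err.Error(), "does not exist")
}

// readLibrary shows the general pattern: if the cluster backing the library
// no longer exists, remove the library from state instead of returning an
// error, so the following plan recreates it together with the cluster.
func readLibrary(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	client := meta.(*libraryClient)

	status, err := client.LibraryStatus(ctx, d.Get("cluster_id").(string))
	if err != nil {
		if isClusterMissing(err) {
			d.SetId("") // mark the resource as deleted; no error returned
			return nil
		}
		return diag.FromErr(err)
	}

	if err := d.Set("status", status); err != nil {
		return diag.FromErr(err)
	}
	return nil
}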

Steps to Reproduce

  1. Create a cluster with the databricks_cluster resource and add libraries to it using the databricks_library resource
  2. Permanently delete the cluster
  3. Run `terraform plan` again

Terraform and provider versions

Terraform v1.7.4
+ provider registry.terraform.io/databricks/databricks v1.47.0
+ provider registry.terraform.io/hashicorp/aws v4.33.0

Is it a regression?

Yes. Up to and including v1.39 this worked correctly, i.e. all of the resources (cluster, permissions and libraries) were recreated after the cluster was permanently deleted

Debug Output

NA

Important Factoids

NA

Would you like to implement a fix?

No


janher commented Jul 2, 2024

I can confirm this occurs with provider version 1.48.2 after I manually delete a cluster (one that was also created manually but had libraries added by Terraform).
When I downgrade to Databricks provider 1.38.0, the same configuration works.
The Terraform version is 1.4.2

So to reproduce this:

  • create a cluster manually from the Databricks UI
  • install a library (e.g. with the snippet below)
  • delete the cluster again (manually from the Databricks UI)
data "databricks_clusters" "all" {
}

resource "databricks_library" "azure-identity" {
  for_each   = data.databricks_clusters.all.ids
  cluster_id = each.key
  pypi {
    package = "azure-identity==1.16.0"
  }
}

@alexeiser

It seems the bug reported in #1737 and fixed in #1745 has been re-introduced.

alexott added a commit that referenced this issue Aug 15, 2024
We no longer return an error when reading a library that belongs to a cluster
deleted outside of Terraform.

Resolves #3679
github-merge-queue bot pushed a commit that referenced this issue Sep 3, 2024
## Changes

We no longer return an error when reading a library that belongs to a
cluster deleted outside of Terraform.

Resolves #3679

## Tests

- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK