
[ISSUE] Recreating VPC for workspace fails on apply #732

Closed
steve148 opened this issue Jul 20, 2021 · 3 comments · Fixed by #734
Labels
wontfix This will not be worked on
Milestone
v0.3.7

Comments

@steve148
Contributor

Terraform Version

Terraform version 0.14.0
Provider version 0.34.0

Affected Resource(s)


  • databricks_mws_workspaces
  • databricks_mws_networks

Environment variable names

n/a

Terraform Configuration Files

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.70.0"

  name = local.prefix
  cidr = var.cidr_block
  azs  = data.aws_availability_zones.available.names
  tags = var.tags

  enable_dns_hostnames = true
  enable_nat_gateway   = true
  create_igw           = true

  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets

  default_security_group_egress = [{
    cidr_blocks = "0.0.0.0/0"
  }]

  default_security_group_ingress = [{
    description = "Allow all internal TCP and UDP"
    self        = true
  }]
}

# Register the VPC and its components with Databricks.
resource "databricks_mws_networks" "this" {
  provider           = databricks.mws
  account_id         = var.databricks_account_id
  network_name       = "${local.prefix}-network"
  security_group_ids = [module.vpc.default_security_group_id]
  subnet_ids         = module.vpc.private_subnets
  vpc_id             = module.vpc.vpc_id
}

resource "databricks_mws_workspaces" "this" {
  provider   = databricks.mws
  account_id = var.databricks_account_id
  aws_region = var.region

  # Name the workspace and its deploy
  workspace_name  = local.prefix
  deployment_name = local.prefix

  credentials_id                           = databricks_mws_credentials.this.credentials_id
  storage_configuration_id                 = databricks_mws_storage_configurations.this.storage_configuration_id
  network_id                               = databricks_mws_networks.this.network_id
  managed_services_customer_managed_key_id = databricks_mws_customer_managed_keys.this.customer_managed_key_id
}

Debug Output

n/a

Panic Output

n/a

Expected Behavior

The end goal was to change the CIDR block for the VPC. The plan showed the following for the Databricks-related resources (with the specific IDs redacted).

  # databricks_mws_networks.this must be replaced
-/+ resource "databricks_mws_networks" "this" {
      ~ creation_time      = 1616179359287 -> (known after apply)
      ~ id                 = "4/c" -> (known after apply)
      ~ network_id         = "C" -> (known after apply)
      ~ security_group_ids = [
          - "sg-0",
        ] -> (known after apply) # forces replacement
      ~ subnet_ids         = [
          - "subnet-a",
          - "subnet-b",
          - "subnet-c
          - "subnet-d
          - "subnet-e
          - "subnet-F",
        ] -> (known after apply) # forces replacement
      ~ vpc_id             = "vpc-0" -> (known after apply) # forces replacement
      ~ vpc_status         = "VALID" -> (known after apply)
      ~ workspace_id       = 3 -> (known after apply)
        # (2 unchanged attributes hidden)

      + error_messages {
          + error_message = (known after apply)
          + error_type    = (known after apply)
        }

      + vpc_endpoints {
          + dataplane_relay = (known after apply)
          + rest_api        = (known after apply)
        }
    }

  # databricks_mws_workspaces.this will be updated in-place
  ~ resource "databricks_mws_workspaces" "this" {
        id                                       = "4/3"
      ~ network_id                               = "c" -> (known after apply)
        # (14 unchanged attributes hidden)
    }

Ideally, the new VPC and its sub-components would have been created first, registered with the Databricks workspace as the new network configuration, and then the old VPC and its sub-components would have been cleaned up.
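One way to ask Terraform for that ordering is a create_before_destroy lifecycle block on the network resource. This is only a hedged sketch of the idea, mirroring the configuration above; given the platform limitation described below, it does not by itself make the old network deletable while the workspace is still attached to it.

# Sketch only: request that the replacement network be created before the
# old one is destroyed. Whether the subsequent delete succeeds still depends
# on the platform behaviour discussed in this issue.
resource "databricks_mws_networks" "this" {
  provider           = databricks.mws
  account_id         = var.databricks_account_id
  network_name       = "${local.prefix}-network"
  security_group_ids = [module.vpc.default_security_group_id]
  subnet_ids         = module.vpc.private_subnets
  vpc_id             = module.vpc.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}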

Actual Behavior

When applying the plan, it failed with the following error.

Error: INVALID_STATE: Unable to delete, Network is being used by active workspace 3612852022183645

I later found out this is because a workspace's network configuration can be updated but not deleted while the workspace is active.

Steps to Reproduce


  1. terraform apply

Important Factoids

n/a

@nfx
Contributor

nfx commented Jul 20, 2021

@steve148 Please do the workspace update in multiple steps. E.g. create a second network, commit, apply. Then in the next commit/apply switch the workspace to the new network. Then in a third commit/apply remove the older network.

This is currently a limitation of the platform. Would you also be able to open a PR with a documentation note about it? :)
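A hedged sketch of what step one of that workaround could look like, assuming a second VPC module and a resource named databricks_mws_networks.new (both illustrative, not from this issue):

# Step 1 (illustrative): register the new network alongside the existing one,
# then commit and apply this change on its own.
resource "databricks_mws_networks" "new" {
  provider           = databricks.mws
  account_id         = var.databricks_account_id
  network_name       = "${local.prefix}-network-v2"
  security_group_ids = [module.new_vpc.default_security_group_id]
  subnet_ids         = module.new_vpc.private_subnets
  vpc_id             = module.new_vpc.vpc_id
}

# Step 2, in a separate commit/apply: point the workspace at the new network:
#   network_id = databricks_mws_networks.new.network_id
# Step 3, in a final commit/apply: remove the old databricks_mws_networks resource.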

@nfx nfx added the wontfix label Jul 20, 2021
@nfx
Contributor

nfx commented Jul 20, 2021

There's also a somewhat related bug, #649.

@steve148 steve148 changed the title [ISSUE] Provider bug [ISSUE] Recreating VPC for workspace fails on apply Jul 20, 2021
@nfx
Contributor

nfx commented Jul 21, 2021

@steve148 I've raised this issue internally. Meanwhile, if you want to do an update, please test out the update changes from #734, as the current update is broken.

@nfx nfx added this to the v0.3.7 milestone Jul 21, 2021
nfx added a commit that referenced this issue Jul 27, 2021
@nfx nfx linked a pull request Jul 27, 2021 that will close this issue
@nfx nfx closed this as completed in #734 Jul 27, 2021
nfx added a commit that referenced this issue Jul 27, 2021
@nfx nfx mentioned this issue Jul 30, 2021
michael-berk pushed a commit to michael-berk/terraform-provider-databricks that referenced this issue Feb 15, 2023