
Terraform crashes on plan after proxmox provider upgrade


I upgraded the bpg/proxmox Terraform provider from 0.66 to 0.69 last week and immediately hit a crash during terraform plan. No useful error — just a panic traceback and a non-zero exit. The state was fine, Proxmox was fine, but Terraform refused to render the diff.

This is a known rough edge in Terraform’s provider upgrade path. When a provider renames, removes, or restructures attributes between versions, Terraform’s plan renderer can crash trying to display “relevant attributes” for a resource that has stale state shape. The fix in Terraform 1.14.8 addresses the crash itself, but understanding why it happens makes you more careful about how you upgrade providers.


Provider upgrades silently invalidate your state schema

The terraform.tfstate file stores attribute values keyed by the schema that was active when terraform apply last ran. When you bump the provider version, the schema can change — attributes removed, types narrowed, nested blocks flattened. Terraform doesn’t automatically migrate state on init. It waits until you run plan or apply, at which point the provider’s new schema is used to decode the old state blob.
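You can see the schema version each resource instance was encoded under by pulling the state and inspecting it. Here's a minimal sketch against an invented state excerpt (the shape matches a real tfstate, but the values are illustrative; in practice you'd feed it `terraform state pull` output):

```shell
# Invented excerpt of a pulled state file. Real files carry attribute
# values too; only the fields relevant here are shown.
cat > /tmp/example-state.json <<'EOF'
{
  "resources": [
    {
      "type": "proxmox_virtual_environment_container",
      "name": "homelab_dns",
      "instances": [{"schema_version": 0}]
    }
  ]
}
EOF

# Each instance records the schema_version it was written with; the new
# provider decodes against its current schema, so mismatches surface here.
python3 - <<'EOF'
import json
state = json.load(open("/tmp/example-state.json"))
for res in state["resources"]:
    for inst in res["instances"]:
        print(f'{res["type"]}.{res["name"]} schema_version={inst["schema_version"]}')
EOF
```

Against a real workspace, `terraform state pull > state.json` gives you the same structure to inspect before and after a provider bump.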

If the old state has an attribute the new provider schema no longer declares, the crash happens in the plan renderer, not in the provider itself. That’s what makes it confusing — the provider is healthy, but Terraform’s display layer panics trying to annotate which attributes changed.

Terraform 1.14.8 patches the renderer to handle this gracefully instead of crashing. But if you’re on an older Terraform binary, you need a workaround.

The crash reproduces reliably on older binaries

Here’s the setup that triggered it for me. I’m managing a few LXC containers and VMs on a single Proxmox node:

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.69"
    }
  }
  required_version = ">= 1.3"
}

provider "proxmox" {
  endpoint  = "https://192.168.1.10:8006/"
  api_token = var.proxmox_api_token
  insecure  = true
}

After bumping the provider version in required_providers and running terraform init -upgrade, a terraform plan crashed with:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 1 [running]:
github.com/hashicorp/terraform/internal/command/views...

The crash came from the plan view layer trying to walk the prior state’s attribute tree using the new schema as a guide.
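Terraform also writes a crash.log next to the working directory on panic, and the failing resource's address usually shows up somewhere in the logged context. A sketch of fishing it out, using an invented log excerpt (the address line is hypothetical; real logs bury it much deeper):

```shell
# Invented crash.log excerpt. Real files are far longer; the resource
# address line here is a stand-in for where it appears in real output.
cat > /tmp/crash.log <<'EOF'
panic: interface conversion: interface {} is nil, not map[string]interface {}
goroutine 1 [running]:
github.com/hashicorp/terraform/internal/command/views...
proxmox_virtual_environment_container.homelab_dns
EOF

# Pull out anything shaped like a proxmox resource address:
grep -oE '[a-z0-9_]+\.[a-zA-Z0-9_-]+' /tmp/crash.log | grep '^proxmox' | sort -u
```

That gives you the resource address to feed into the state-surgery steps below without reading the whole traceback by hand.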

Upgrade Terraform first, then the provider

The cleanest fix: upgrade your Terraform binary to 1.14.8 before you touch the provider version. The patch lands in Terraform core, not in the provider, so you need the new binary regardless of which provider you’re using.

# If you're on tfenv
tfenv install 1.14.8
tfenv use 1.14.8
terraform version
# Or direct download on Linux
curl -fsSL https://releases.hashicorp.com/terraform/1.14.8/terraform_1.14.8_linux_amd64.zip \
  -o /tmp/tf.zip
unzip /tmp/tf.zip -d /tmp/tf
sudo mv /tmp/tf/terraform /usr/local/bin/terraform
terraform version

After that, bump the provider and run init -upgrade again.

If you can’t upgrade Terraform immediately

There are two practical escape hatches when you’re stuck on an older binary.

Option 1 — pin the provider version and defer the upgrade. If the new provider version isn’t bringing features you need right now, stay on the last known-good version:

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "= 0.66.3"  # exact pin, not a range
    }
  }
}

Run terraform init -upgrade to lock to that exact version. This buys you time to upgrade the binary first.
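It's worth confirming that .terraform.lock.hcl actually recorded the pin before you commit it. A sketch against an invented lock-file excerpt (real entries also include h1:/zh: hashes, omitted here):

```shell
# Invented .terraform.lock.hcl excerpt for illustration.
cat > /tmp/terraform.lock.hcl <<'EOF'
provider "registry.terraform.io/bpg/proxmox" {
  version     = "0.66.3"
  constraints = "0.66.3"
}
EOF

# Fail loudly if the lock drifted from the exact pin:
grep -qE 'version[[:space:]]*=[[:space:]]*"0\.66\.3"' /tmp/terraform.lock.hcl \
  && echo "lock matches pin" \
  || echo "lock drifted, rerun terraform init -upgrade"
```

Pointing the grep at your real lock file in CI keeps a teammate's stray init -upgrade from silently advancing the provider.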

Option 2 — targeted state surgery. If you’ve already upgraded the provider and can’t roll it back, you can remove the crashing resource from state, let the plan run cleanly, then import it back. This is destructive in terms of drift tracking, but it’s non-destructive to the actual Proxmox resource.

# Identify which resource is causing the crash
# (usually in the traceback, look for the resource address)
terraform state rm proxmox_virtual_environment_container.homelab_dns

# Run plan -- it will now show the resource as "to be created"
terraform plan

# Import it back so Terraform tracks it again
terraform import proxmox_virtual_environment_container.homelab_dns 100

The terraform state rm + terraform import dance forces a fresh state snapshot using the current provider schema, which eliminates the shape mismatch.

Upgrade sequencing for provider-heavy Proxmox setups

If you’re managing more than a handful of resources, this kind of issue is inevitable across provider major bumps. The pattern I follow now:

flowchart TD
    A[Pin provider version in git] --> B[Upgrade Terraform binary]
    B --> C[Run terraform plan -- confirm clean]
    C --> D[Bump provider version]
    D --> E[terraform init -upgrade]
    E --> F{plan succeeds?}
    F -- yes --> G[Review diff, apply]
    F -- no --> H[terraform state rm crashing resource]
    H --> I[terraform import resource back]
    I --> G

The key step most people skip is C — verifying the current state is clean before changing the provider. A pre-existing drift or state inconsistency will compound the provider upgrade crash and make diagnosis much harder.
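Step C is scriptable with terraform plan -detailed-exitcode, which returns 0 for "no changes", 2 for "changes pending", and other codes for errors. A sketch of the gate (the messages are mine):

```shell
# Gate the provider bump on a clean plan. -detailed-exitcode makes the
# exit status meaningful: 0 = no changes, 2 = pending changes, else error.
terraform plan -detailed-exitcode >/dev/null 2>&1
case $? in
  0) echo "state clean, safe to bump the provider" ;;
  2) echo "drift pending, reconcile before upgrading" ;;
  *) echo "plan failed, investigate before touching versions" ;;
esac
```

Wiring this into a pre-upgrade script means the provider bump simply refuses to proceed over a dirty or broken baseline.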

bpg/proxmox schema changes worth knowing

Between 0.66 and 0.69, a few attributes on proxmox_virtual_environment_container and proxmox_virtual_environment_vm changed in ways that produce stale state:

  • network_interface blocks had some computed fields added; if your state has them absent, the renderer trips on nil
  • disk blocks on VMs saw type narrowing on the size attribute — it previously accepted bare integers but now expects a string with a unit suffix ("20G")
  • startup on containers: the nested block structure was flattened into a single string attribute in newer versions

These aren’t breaking changes that fail apply, but they’re enough to confuse the plan diff renderer on older Terraform binaries.

Check your state for these before upgrading:

terraform state show proxmox_virtual_environment_vm.your_vm | grep -E 'size|startup|network'

If size values are bare numbers (20 instead of "20G"), update your .tf files to match the new expected format before running plan post-upgrade.
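You can flag the bare-integer case mechanically. A sketch against an invented state-show excerpt, since the exact output depends on your resources:

```shell
# Invented excerpt of `terraform state show` output for a VM disk.
cat > /tmp/vm-disk.txt <<'EOF'
    disk {
        size = 20
    }
EOF

# A trailing bare integer means the size predates the string+unit format:
if grep -qE 'size[[:space:]]*=[[:space:]]*[0-9]+[[:space:]]*$' /tmp/vm-disk.txt; then
  echo "bare disk size found, rewrite with a unit suffix before upgrading"
fi
```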

# Old (pre-0.68)
disk {
  size = 20
}

# New (0.68+)
disk {
  size = "20G"
}

After aligning the config with the new schema expectations, terraform plan with 1.14.8 handles the rest without crashing.