Terraform Stacks became generally available in late 2025 and represent the biggest new Terraform feature since workspaces. Stacks let you deploy multiple Terraform configurations as a single, coordinated unit — solving the “how do I manage networking + compute + database together?” problem that teams have worked around for years.
## The Problem Stacks Solve
Without Stacks, deploying a full environment requires multiple terraform apply commands in the right order:
```shell
# Manual orchestration — error-prone
cd networking/
terraform apply      # 1. Create VPC first
cd ../database/
terraform apply      # 2. Create RDS (needs VPC)
cd ../compute/
terraform apply      # 3. Create ECS (needs VPC + DB endpoint)
cd ../monitoring/
terraform apply      # 4. Create dashboards (needs all of the above)
```
Problems:
- Manual ordering — you must know the dependency graph
- Partial failures — if step 3 fails, steps 1-2 are applied but step 4 isn’t
- No coordinated destroy — tearing down requires reverse order
- CI/CD complexity — pipelines need multi-step orchestration
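Cross-config wiring compounds the problem: each downstream configuration has to read its upstream's outputs through `terraform_remote_state` data sources, so every consumer must know where every producer's state lives. A sketch of what the compute step carries under this workflow, with hypothetical backend bucket and key names:

```hcl
# compute/main.tf — the pre-Stacks way to consume another config's outputs.
# The bucket and key names are hypothetical.
data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "networking/terraform.tfstate"
    region = "us-east-1"
  }
}

# The dependency on the networking config is invisible to Terraform;
# nothing enforces that networking was applied first.
locals {
  subnet_ids = data.terraform_remote_state.networking.outputs.private_subnet_ids
}
```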
## How Stacks Work
A Stack defines components (individual Terraform configurations) and their dependencies:
```
my-stack/
├── stack.tfstack.hcl          # Stack definition
├── deployments.tfdeploy.hcl   # Deploy targets (dev, prod)
└── components/
    ├── networking/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── database/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── compute/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
```
### Stack Definition
```hcl
# stack.tfstack.hcl
variable "region" {
  type = string
}

variable "environment" {
  type = string
}

# Component 1: Networking
component "networking" {
  source = "./components/networking"

  inputs = {
    region      = var.region
    environment = var.environment
    vpc_cidr    = "10.0.0.0/16"
  }
}

# Component 2: Database (depends on networking)
component "database" {
  source = "./components/database"

  inputs = {
    region      = var.region
    environment = var.environment
    vpc_id      = component.networking.vpc_id # ← Dependency
    subnet_ids  = component.networking.private_subnet_ids
  }
}

# Component 3: Compute (depends on networking + database)
component "compute" {
  source = "./components/compute"

  inputs = {
    region        = var.region
    environment   = var.environment
    vpc_id        = component.networking.vpc_id
    subnet_ids    = component.networking.private_subnet_ids
    db_endpoint   = component.database.endpoint # ← Dependency
    db_secret_arn = component.database.secret_arn
  }
}
```
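One thing the definition above omits for brevity: a stack configuration also declares its own providers and passes them to each component explicitly, instead of letting components configure providers themselves. A sketch of the extra blocks, assuming the AWS provider (the version constraint is illustrative):

```hcl
# stack.tfstack.hcl (continued) — provider wiring for the components above
required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.0"
  }
}

provider "aws" "this" {
  config {
    region = var.region
  }
}

# Each component names the provider instances it should use
# (shown for networking; the other components take the same map):
component "networking" {
  source = "./components/networking"
  # inputs as shown above, plus:
  providers = {
    aws = provider.aws.this
  }
}
```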
### Deployment Targets
```hcl
# deployments.tfdeploy.hcl
deployment "development" {
  inputs = {
    region      = "us-east-1"
    environment = "dev"
  }
}

deployment "staging" {
  inputs = {
    region      = "us-east-1"
    environment = "staging"
  }
}

deployment "production" {
  inputs = {
    region      = "us-east-1"
    environment = "prod"
  }
}
```
### Component Example
Each component is a regular Terraform configuration:
```hcl
# components/networking/main.tf
variable "region" { type = string }
variable "environment" { type = string }
variable "vpc_cidr" { type = string }

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
  tags       = { Name = "${var.environment}-vpc" }
}

resource "aws_subnet" "private" {
  count      = 3
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)
  tags       = { Name = "${var.environment}-private-${count.index}" }
}

output "vpc_id" { value = aws_vpc.main.id }
output "private_subnet_ids" { value = aws_subnet.private[*].id }
```
## Stacks vs Workspaces
| Feature | Workspaces | Stacks |
|---|---|---|
| Purpose | Same config, different state | Multiple configs, coordinated lifecycle |
| Components | Single root module | Multiple root modules |
| Dependencies | Manual (remote state data sources) | Declared (component.X.output) |
| Deployment order | Manual | Automatic (dependency graph) |
| Multi-env | One workspace per env | Deployment targets per env |
| Destroy order | Manual (reverse) | Automatic (reverse dependency) |
| Requires HCP | No (CLI or HCP) | Yes (HCP Terraform only) |
| OpenTofu support | ✅ Yes | ❌ No |
### When to Use Workspaces
```shell
# Same config, different environments
terraform workspace select dev
terraform apply

terraform workspace select prod
terraform apply
```
Best for: isolated environments using the same Terraform code.
### When to Use Stacks
Best for: multi-component systems where networking, database, compute, and monitoring need coordinated deployment and destruction.
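For instance, the monitoring step from the opening example becomes just one more component: its inputs reference upstream outputs, so Stacks applies it last and destroys it first. The output names here (`cluster_arn`, `identifier`) are illustrative:

```hcl
# stack.tfstack.hcl — monitoring consumes outputs from every other component,
# so it sits at the bottom of the dependency graph.
component "monitoring" {
  source = "./components/monitoring"

  inputs = {
    environment     = var.environment
    vpc_id          = component.networking.vpc_id
    ecs_cluster_arn = component.compute.cluster_arn
    db_identifier   = component.database.identifier
  }
}
```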
## Deferred Changes
Stacks support deferred changes — when one component can’t be fully planned because it depends on a not-yet-created resource:
```
Component "networking" → Plan: +5 resources
Component "database"   → Plan: +3 resources (deferred: subnet_ids unknown until networking applies)
Component "compute"    → Plan: +8 resources (deferred: db_endpoint unknown until database applies)
```
Stacks automatically handle this: apply networking first, then plan database with real values, then apply database, then plan and apply compute.
## Requirements
- HCP Terraform (Terraform Cloud) — Stacks require HCP; they don’t work with the CLI alone
- Terraform 1.9+ — minimum version for Stack support
- The `tfstacks` CLI (terraform-stacks-cli) or the HCP UI for stack operations
## Getting Started
```shell
# Create stack structure
mkdir -p my-stack/components/{networking,database,compute}

# Create stack definition
cat > my-stack/stack.tfstack.hcl << 'EOF'
component "networking" {
  source = "./components/networking"
  inputs = { region = var.region }
}
EOF

# Create deployment targets
cat > my-stack/deployments.tfdeploy.hcl << 'EOF'
deployment "dev" {
  inputs = { region = "us-east-1" }
}
EOF

# Push to HCP Terraform
# Configure stack in HCP UI or API
```
## Hands-On Courses
- Terraform for Beginners on CopyPasteLearn
- Terraform By Example — practical code examples
## Conclusion
Terraform Stacks solve the multi-component orchestration problem: declare components with explicit dependencies, and Stacks handles the apply/destroy order automatically. The tradeoff is that Stacks require HCP Terraform — they’re not available in the open-source CLI or OpenTofu. For teams already on HCP managing complex environments (networking + database + compute + monitoring), Stacks eliminate the glue scripts and manual ordering that made multi-component deployments fragile.