
Terraform Bulk Import: Bring Existing AWS Resources Under Management

Key Takeaway

Import dozens of existing AWS resources into Terraform at once using import blocks, for_each, and -generate-config-out. A complete bulk import workflow with real examples.


You have 50 AWS resources created manually in the console. You need them in Terraform. The old terraform import CLI required importing one resource at a time. With import blocks (Terraform 1.5+), you can import everything in a single terraform apply and auto-generate the HCL configuration.

The Bulk Import Workflow

1. Inventory existing resources (AWS CLI / console)
2. Write import blocks (one per resource)
3. Generate config automatically (-generate-config-out)
4. Review and clean up generated code
5. terraform apply → all resources imported
6. Remove import blocks

Step 1: Inventory Your Resources

List EC2 Instances

aws ec2 describe-instances \
  --query 'Reservations[].Instances[].{ID:InstanceId,Name:Tags[?Key==`Name`].Value|[0],Type:InstanceType,State:State.Name}' \
  --filters "Name=instance-state-name,Values=running" \
  --output table

List S3 Buckets

aws s3api list-buckets --query 'Buckets[].Name' --output text

List VPC Resources

# VPCs
aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock,Name:Tags[?Key==`Name`].Value|[0]}' --output table

# Subnets
aws ec2 describe-subnets --query 'Subnets[].{ID:SubnetId,VPC:VpcId,AZ:AvailabilityZone,CIDR:CidrBlock}' --output table

# Security Groups
aws ec2 describe-security-groups --query 'SecurityGroups[].{ID:GroupId,Name:GroupName,VPC:VpcId}' --output table

# Route Tables
aws ec2 describe-route-tables --query 'RouteTables[].{ID:RouteTableId,VPC:VpcId}' --output table

List RDS Instances

aws rds describe-db-instances \
  --query 'DBInstances[].{ID:DBInstanceIdentifier,Engine:Engine,Class:DBInstanceClass,Status:DBInstanceStatus}' \
  --output table
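Once you have the IDs, it helps to capture them as the kind of Terraform locals map used in Step 2 below. A minimal bash sketch; the name-to-ID pairs are hardcoded placeholders here, standing in for the real output of the describe-instances query above:

```shell
# Build a Terraform locals map from name:ID pairs.
# The pairs below are placeholders; in practice, feed them from the
# AWS CLI inventory commands above.
out=$(
  printf 'locals {\n  existing_instances = {\n'
  for pair in web1:i-0abc111 web2:i-0abc222 api1:i-0def444; do
    # ${pair%%:*} is the name, ${pair#*:} is the instance ID
    printf '    %s = "%s"\n' "${pair%%:*}" "${pair#*:}"
  done
  printf '  }\n}\n'
)
echo "$out"
```

Paste the result straight into a locals block and reference it from import blocks, as shown in Step 2.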

Step 2: Write Import Blocks

Import a Complete VPC Stack

# imports.tf — all VPC resources

# VPC
import {
  to = aws_vpc.main
  id = "vpc-0a1b2c3d4e5f67890"
}

# Subnets
import {
  to = aws_subnet.public["us-east-1a"]
  id = "subnet-pub1a"
}

import {
  to = aws_subnet.public["us-east-1b"]
  id = "subnet-pub1b"
}

import {
  to = aws_subnet.public["us-east-1c"]
  id = "subnet-pub1c"
}

import {
  to = aws_subnet.private["us-east-1a"]
  id = "subnet-priv1a"
}

import {
  to = aws_subnet.private["us-east-1b"]
  id = "subnet-priv1b"
}

import {
  to = aws_subnet.private["us-east-1c"]
  id = "subnet-priv1c"
}

# Internet Gateway
import {
  to = aws_internet_gateway.main
  id = "igw-abc123"
}

# NAT Gateways
import {
  to = aws_nat_gateway.az_a
  id = "nat-aaa111"
}

import {
  to = aws_nat_gateway.az_b
  id = "nat-bbb222"
}

# Route Tables
import {
  to = aws_route_table.public
  id = "rtb-pub123"
}

import {
  to = aws_route_table.private["us-east-1a"]
  id = "rtb-priv1a"
}

import {
  to = aws_route_table.private["us-east-1b"]
  id = "rtb-priv1b"
}

# Security Groups
import {
  to = aws_security_group.web
  id = "sg-web123"
}

import {
  to = aws_security_group.app
  id = "sg-app456"
}

import {
  to = aws_security_group.db
  id = "sg-db789"
}

Import Using for_each

For resources that follow a pattern, use for_each inside the import block (requires Terraform 1.7 or later):

# Import multiple S3 buckets
locals {
  existing_buckets = {
    logs      = "company-logs-prod"
    artifacts = "company-artifacts-prod"
    backups   = "company-backups-prod"
    data      = "company-data-prod"
    static    = "company-static-prod"
  }
}

import {
  for_each = local.existing_buckets
  to       = aws_s3_bucket.this[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "this" {
  for_each = local.existing_buckets
  bucket   = each.value
}

# Import multiple EC2 instances
locals {
  existing_instances = {
    web1 = "i-0abc111"
    web2 = "i-0abc222"
    web3 = "i-0abc333"
    api1 = "i-0def444"
    api2 = "i-0def555"
  }
}

import {
  for_each = local.existing_instances
  to       = aws_instance.servers[each.key]
  id       = each.value
}

Step 3: Auto-Generate Configuration

Instead of writing all the resource blocks by hand:

terraform plan -generate-config-out=generated.tf

Terraform reads each import block, fetches the real resource from AWS, and writes the HCL:

Planning...

aws_vpc.main: Preparing import...
aws_subnet.public["us-east-1a"]: Preparing import...
aws_subnet.public["us-east-1b"]: Preparing import...
...

Generated configuration written to generated.tf

Step 4: Review and Clean Up

The generated config includes everything — including computed attributes you should remove:

# generated.tf — BEFORE cleanup
resource "aws_vpc" "main" {
  # Remove computed attributes
  # arn                  = "arn:aws:ec2:us-east-1:123456789:vpc/vpc-0a1b2c3d"
  # id                   = "vpc-0a1b2c3d"
  # default_network_acl_id = "acl-abc"
  # default_route_table_id = "rtb-abc"
  # default_security_group_id = "sg-abc"
  # dhcp_options_id      = "dopt-abc"
  # main_route_table_id  = "rtb-main"
  # owner_id             = "123456789012"

  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "production-vpc"
    Environment = "prod"
  }
}

Cleanup Checklist

  1. Remove computed attributes (arn, id, owner_id, etc.)
  2. Replace hardcoded values with variables
  3. Add references between resources (e.g., vpc_id = aws_vpc.main.id instead of a hardcoded VPC ID)
  4. Organize into logical files (vpc.tf, subnets.tf, security_groups.tf)
  5. Add lifecycle blocks if needed (prevent_destroy, ignore_changes)
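Item 1 of the checklist can be partially automated. A rough sed pass, shown here against a sample snippet (the attribute names are taken from the example above; always review the resulting diff by hand before committing):

```shell
# Strip common computed attributes from a generated.tf snippet.
# Sample input stands in for real -generate-config-out output.
cat > generated.tf <<'EOF'
resource "aws_vpc" "main" {
  arn        = "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-0a1b2c3d"
  id         = "vpc-0a1b2c3d"
  owner_id   = "123456789012"
  cidr_block = "10.0.0.0/16"
}
EOF

# Anchored at line start, so vpc_id, subnet_id, etc. are NOT removed.
computed='arn|id|owner_id|default_network_acl_id|default_route_table_id|default_security_group_id|dhcp_options_id|main_route_table_id'
sed -E "/^[[:space:]]*(${computed})[[:space:]]*=/d" generated.tf > cleaned.tf
cat cleaned.tf
```

This is a heuristic pre-pass, not a replacement for the manual review in Step 4.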

After Cleanup

# vpc.tf
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

# subnets.tf
resource "aws_subnet" "public" {
  for_each = var.public_subnets

  vpc_id            = aws_vpc.main.id
  cidr_block        = each.value.cidr
  availability_zone = each.value.az

  tags = {
    Name = "${var.environment}-public-${each.key}"
  }
}

Step 5: Apply

# Verify plan shows only imports, no changes
terraform plan
# Plan: 16 to import, 0 to add, 0 to change, 0 to destroy

# Apply the import
terraform apply
# aws_vpc.main: Importing... [id=vpc-0a1b2c3d]
# aws_subnet.public["us-east-1a"]: Importing... [id=subnet-pub1a]
# ...
# Apply complete! Resources: 16 imported, 0 added, 0 changed, 0 destroyed.

Step 6: Remove Import Blocks

After successful import, delete imports.tf — the blocks are one-time use:

rm imports.tf
terraform plan
# No changes. Your infrastructure matches the configuration.

Script: Generate Import Blocks from AWS CLI

Automate the tedious part — generating import blocks from existing resources:

#!/bin/bash
# generate-imports.sh — Create import blocks for all EC2 instances

echo '# Auto-generated import blocks' > imports.tf
echo '' >> imports.tf

# EC2 Instances
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].{ID:InstanceId,Name:Tags[?Key==`Name`].Value|[0]}' \
  --output json | jq -r '.[] | "import {\n  to = aws_instance.\((.Name // .ID) | gsub("[^a-zA-Z0-9_]"; "_"))\n  id = \"\(.ID)\"\n}\n"' >> imports.tf

# S3 Buckets
aws s3api list-buckets --query 'Buckets[].Name' --output json | jq -r '.[] | "import {\n  to = aws_s3_bucket.\(gsub("[^a-zA-Z0-9_]"; "_"))\n  id = \"\(.)\"\n}\n"' >> imports.tf

echo "Generated $(grep -c 'import {' imports.tf) import blocks"
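The gsub call in the jq filters above maps arbitrary resource names to valid HCL identifiers. The same substitution in pure bash is handy for a quick sanity check without AWS access (sanitize is a helper defined here for illustration, not part of the script above):

```shell
# Replace anything that is not a letter, digit, or underscore with "_",
# mirroring jq's gsub("[^a-zA-Z0-9_]"; "_") in generate-imports.sh.
sanitize() { printf '%s\n' "${1//[^a-zA-Z0-9_]/_}"; }

sanitize "web-server 1"        # → web_server_1
sanitize "company-logs.prod"   # → company_logs_prod
```

One caveat: HCL identifiers cannot start with a digit, so resources whose names begin with a number still need a manual rename.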

Common Issues

“Resource already managed”

Error: Resource already managed by Terraform
aws_instance.web is already in the state

The resource is already imported. Remove that import block.

“Configuration for import target does not exist”

Error: Configuration for import target does not exist

Each import block needs a matching resource block, or pass the -generate-config-out flag so Terraform writes one for you.

Plan Shows Changes After Import

Normal — your written config may differ from the actual resource. Either update your HCL to match reality, or accept the planned change.

Conclusion

Bulk importing existing AWS infrastructure into Terraform is now practical: write import blocks (use for_each for patterns), run terraform plan -generate-config-out to auto-generate HCL, clean up the generated code, and apply. For large estates with 50+ resources, combine the AWS CLI inventory script with import blocks to go from manual infrastructure to fully Terraform-managed in an afternoon.


Written by Luca Berton

DevOps Engineer, AWS Partner, Terraform expert, and author. Creator of Ansible Pilot, Terraform Pilot, and CopyPasteLearn.