## Quick Answer

```bash
# Import the existing log group
terraform import aws_cloudwatch_log_group.lambda /aws/lambda/my-function
```
## The Error

```text
Error: creating CloudWatch Logs Log Group (/aws/lambda/my-function):
ResourceAlreadyExistsException: The specified log group already exists

Error: creating CloudWatch Logs Log Group (/ecs/my-service):
ResourceAlreadyExistsException: The specified log group already exists
```
## What Causes This

- **Lambda auto-created it** — Lambda automatically creates `/aws/lambda/FUNCTION_NAME` on first invocation
- **ECS auto-created it** — the ECS `awslogs` driver creates log groups automatically
- **Previous deployment** — the log group survived a `terraform destroy` (`skip_destroy = true`)
- **Another Terraform config** — a different project manages the same log group
- **Manual creation** — someone created it in the AWS Console

The most common cause: Lambda creates the log group before Terraform does, because the function runs and logs before `terraform apply` finishes.
## Solution 1: Import the Existing Log Group

```bash
terraform import aws_cloudwatch_log_group.lambda /aws/lambda/my-function
terraform plan  # Should show no changes (or just a retention update)
```
With an import block (Terraform 1.5+):

```hcl
import {
  to = aws_cloudwatch_log_group.lambda
  id = "/aws/lambda/my-function"
}
```
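An import (CLI or block) only succeeds if a matching resource block already exists in the configuration. A minimal one might look like this; the retention value is illustrative:

```hcl
# Minimal matching resource for the import above;
# retention_in_days here is an example value
resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/my-function"
  retention_in_days = 14
}
```

After the import, `terraform plan` reconciles any attribute differences (such as retention) as an in-place update rather than a create.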
## Solution 2: Create the Log Group BEFORE Lambda

Order matters — create the log group first so Lambda finds it instead of creating its own. Note that the log group's `name` must not reference the Lambda resource, or Terraform reports a dependency cycle; derive both names from a shared local instead:

```hcl
locals {
  function_name = "my-api"
}

resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${local.function_name}"
  retention_in_days = 14
}

resource "aws_lambda_function" "api" {
  function_name = local.function_name
  # ...

  # Ensure the log group exists before the function can run and log
  depends_on = [aws_cloudwatch_log_group.lambda]
}
```
The key: Terraform creates the log group first → Lambda finds it and uses it → no conflict.
## Solution 3: Use Unique Log Group Names

```hcl
resource "aws_cloudwatch_log_group" "app" {
  name              = "/app/${var.environment}/${var.service_name}"
  retention_in_days = 30
}

# For ECS
resource "aws_cloudwatch_log_group" "ecs" {
  name              = "/ecs/${var.environment}/${var.service_name}"
  retention_in_days = 14
}
```
Avoid generic names that multiple services might create independently.
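The examples above assume `environment` and `service_name` variables; a minimal sketch of their declarations:

```hcl
# Assumed input variables for the naming scheme above
variable "environment" {
  description = "Deployment environment (e.g. dev, staging, prod)"
  type        = string
}

variable "service_name" {
  description = "Service that owns this log group"
  type        = string
}
```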
## Solution 4: Manage the Log Group Lifecycle

```hcl
resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${var.function_name}"
  retention_in_days = 14

  # skip_destroy = true means "don't delete this log group on terraform destroy"
  # Useful when you want to keep logs after the infrastructure is removed
  skip_destroy = false # Default: actually delete on destroy

  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
```
### skip_destroy Behavior

| Setting | On `terraform destroy` |
|---|---|
| `skip_destroy = false` (default) | Deletes the log group and all logs |
| `skip_destroy = true` | Leaves the log group (orphans it) |

`skip_destroy = true` causes the "already exists" error on the next deploy — use it carefully.
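If you would rather have `terraform destroy` fail loudly than silently orphan the log group, Terraform's `lifecycle` meta-argument is another option. A sketch, using a hypothetical resource name:

```hcl
# Hypothetical example: a log group you never want deleted by Terraform
resource "aws_cloudwatch_log_group" "audit" {
  name              = "/app/prod/audit"
  retention_in_days = 90

  lifecycle {
    # terraform destroy errors out instead of deleting this resource
    prevent_destroy = true
  }
}
```

Unlike `skip_destroy`, this blocks the destroy entirely, so you can't accidentally end up with an orphaned group that triggers the "already exists" error later.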
## Solution 5: Delete the Orphaned Log Group

If it's empty or contains no important logs:

```bash
# Check log group contents
aws logs describe-log-streams \
  --log-group-name /aws/lambda/my-function \
  --order-by LastEventTime --descending --limit 5

# Delete if safe
aws logs delete-log-group --log-group-name /aws/lambda/my-function

# Then apply Terraform
terraform apply
```
## Complete Lambda + Log Group Pattern

```hcl
locals {
  function_name = "my-api-${var.environment}"
}

# Log group created first with a retention policy
resource "aws_cloudwatch_log_group" "api" {
  name              = "/aws/lambda/${local.function_name}"
  retention_in_days = 14
}

# Lambda IAM role with log permissions
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# Lambda function depends on the log group
resource "aws_lambda_function" "api" {
  function_name = local.function_name
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "nodejs20.x"
  filename      = "lambda.zip"

  depends_on = [
    aws_cloudwatch_log_group.api,
    aws_iam_role_policy_attachment.lambda_logs,
  ]
}
```
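The pattern references `aws_iam_role.lambda` without defining it. A minimal sketch of that execution role (the role name is illustrative):

```hcl
# Hypothetical execution role referenced as aws_iam_role.lambda above
resource "aws_iam_role" "lambda" {
  name = "my-api-lambda-role"

  # Allow the Lambda service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}
```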
## Hands-On Courses

- Terraform for Beginners on CopyPasteLearn
- Terraform By Example — practical code examples
## Conclusion

CloudWatch log group conflicts happen because Lambda and ECS auto-create log groups. Fix the error by importing the existing group, or prevent it by creating the log group before the Lambda function with `depends_on`. Always set `retention_in_days` to avoid unlimited log storage costs.