AWS Kinesis Data Streams with Terraform - Complete Guide
Terraform
Deploy AWS Kinesis Data Streams with Terraform. Stream configuration, shard management, Lambda consumers, Firehose delivery, and encryption settings.
```hcl
resource "aws_kinesis_stream" "events" {
  name             = "events-stream"
  shard_count      = 1
  retention_period = 24

  stream_mode_details {
    stream_mode = "PROVISIONED"
  }
}
```

```hcl
resource "aws_kinesis_stream" "events" {
  name             = "${var.project}-events"
  retention_period = 72 # Hours (24-8760)

  stream_mode_details {
    stream_mode = "ON_DEMAND" # Auto-scales, no shard management
  }

  encryption_type = "KMS"
  kms_key_id      = "alias/aws/kinesis"

  tags = { Environment = var.environment }
}
```

```hcl
resource "aws_kinesis_stream" "high_throughput" {
  name             = "${var.project}-high-throughput"
  shard_count      = 4   # 4 MB/s write, 8 MB/s read
  retention_period = 168 # Hours (7 days)

  stream_mode_details {
    stream_mode = "PROVISIONED"
  }

  encryption_type = "KMS"
  kms_key_id      = aws_kms_key.kinesis.arn

  shard_level_metrics = [
    "IncomingBytes",
    "OutgoingBytes",
    "IncomingRecords",
    "OutgoingRecords",
    "WriteProvisionedThroughputExceeded",
    "ReadProvisionedThroughputExceeded",
    "IteratorAgeMilliseconds",
  ]

  tags = { Environment = var.environment }
}
```

```hcl
resource "aws_lambda_event_source_mapping" "kinesis" {
  event_source_arn                   = aws_kinesis_stream.events.arn
  function_name                      = aws_lambda_function.processor.arn
  starting_position                  = "LATEST"
  batch_size                         = 100
  maximum_batching_window_in_seconds = 5
  maximum_retry_attempts             = 3
  bisect_batch_on_function_error     = true
  parallelization_factor             = 2

  destination_config {
    on_failure {
      destination_arn = aws_sqs_queue.dlq.arn
    }
  }

  function_response_types = ["ReportBatchItemFailures"]
}
```
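When `function_response_types = ["ReportBatchItemFailures"]` is set, the Lambda function must return the sequence numbers of failed records so that only those records (and later ones in the batch) are retried. A minimal Python handler sketch, assuming a hypothetical `process` function as the business logic; the payload shape is illustrative:

```python
import base64
import json

def handler(event, context):
    """Process a Kinesis batch and report failed records individually.

    With ReportBatchItemFailures enabled on the event source mapping,
    Lambda retries only from the first reported failure instead of
    replaying the entire batch.
    """
    failures = []
    for record in event["Records"]:
        try:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            process(payload)
        except Exception:
            # Report this record; Lambda retries from this sequence number.
            failures.append(
                {"itemIdentifier": record["kinesis"]["sequenceNumber"]}
            )
    return {"batchItemFailures": failures}

def process(payload):
    # Placeholder business logic: raise to simulate a poison record.
    if payload.get("poison"):
        raise ValueError("cannot process record")
```

Returning an empty `batchItemFailures` list marks the whole batch as successful; after `maximum_retry_attempts` is exhausted, failed records land in the SQS dead-letter queue configured above.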
```hcl
resource "aws_iam_role_policy" "lambda_kinesis" {
  name = "kinesis-access"
  role = aws_iam_role.lambda.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListShards",
        "kinesis:SubscribeToShard",
      ]
      Resource = aws_kinesis_stream.events.arn
    }]
  })
}
```

```hcl
resource "aws_kinesis_firehose_delivery_stream" "s3" {
  name        = "${var.project}-to-s3"
  destination = "extended_s3"

  kinesis_source_configuration {
    kinesis_stream_arn = aws_kinesis_stream.events.arn
    role_arn           = aws_iam_role.firehose.arn
  }

  extended_s3_configuration {
    role_arn           = aws_iam_role.firehose.arn
    bucket_arn         = aws_s3_bucket.data_lake.arn
    prefix             = "events/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/"
    buffering_size     = 64  # MB
    buffering_interval = 300 # Seconds
    compression_format = "GZIP"

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = aws_cloudwatch_log_group.firehose.name
      log_stream_name = "S3Delivery"
    }
  }
}
```

```hcl
resource "aws_iam_policy" "kinesis_producer" {
  name = "${var.project}-kinesis-producer"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "kinesis:PutRecord",
        "kinesis:PutRecords",
      ]
      Resource = aws_kinesis_stream.events.arn
    }]
  })
}
```

| Feature | Provisioned | On-Demand |
|---|---|---|
| Capacity | Fixed shard count | Auto-scales |
| Write | 1 MB/s per shard | Up to 200 MB/s |
| Read | 2 MB/s per shard | Up to 400 MB/s |
| Cost | Per shard-hour | Per GB + per million records |
| Best for | Predictable traffic | Variable/unpredictable |
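The per-shard limits in the table apply to whichever shard a record lands on: Kinesis takes the MD5 hash of the record's partition key as a 128-bit integer and routes the record to the shard whose hash key range contains it. A sketch of that routing, assuming shards evenly split the hash space (the default when a stream is created):

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Return the shard index a partition key maps to, assuming the
    128-bit MD5 hash space is split evenly across shards."""
    hash_value = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    range_size = 2 ** 128 // shard_count
    # Clamp: integer division can leave a remainder at the top of the range.
    return min(hash_value // range_size, shard_count - 1)
```

This is why a hot partition key caps out at one shard's throughput (1 MB/s write on provisioned streams) no matter how many shards the stream has; high-cardinality partition keys are what spread load across shards.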
Use on-demand mode for variable workloads (no shard management). Use provisioned mode with shard-level metrics when you need predictable costs. Always enable encryption, configure Lambda consumers with `bisect_batch_on_function_error` and a dead-letter queue, and use Firehose for S3/data lake delivery.
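On the producer side (the IAM policy above grants `kinesis:PutRecords`), batched writes are far more efficient than per-record `PutRecord` calls, but a single `PutRecords` request accepts at most 500 records. A sketch of a batching helper; the boto3 usage and stream name in the comment are illustrative, not part of the Terraform above:

```python
def chunk_records(records, max_batch=500):
    """Split a list of record entries into batches that respect the
    PutRecords limit of 500 records per request.

    Note: PutRecords also caps each request at 5 MB total; large
    payloads may need size-aware batching on top of this.
    """
    for i in range(0, len(records), max_batch):
        yield records[i:i + max_batch]

# Illustrative usage with boto3 (requires AWS credentials to run):
# import boto3
# kinesis = boto3.client("kinesis")
# for batch in chunk_records(entries):
#     kinesis.put_records(StreamName="my-events", Records=batch)
```

Because `PutRecords` can partially fail, production producers should also inspect `FailedRecordCount` in the response and re-enqueue the failed entries.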