Terraform for AI Security: Guardrails, Model Access Control, and Threat Detection

Key Takeaway

Secure AI workloads with Terraform. Deploy Bedrock guardrails, model access IAM policies, prompt injection detection, and AI-specific CloudWatch monitoring on AWS.

AI security platforms are a core 2026 trend according to Gartner — as AI gets embedded everywhere, securing model access, preventing data leakage, and monitoring AI-specific threats become their own infrastructure category.

This guide shows how to use Terraform to build security guardrails around AI workloads on AWS.

AI Security Architecture

Users/Applications
          │
          ▼
┌───────────────────┐
│  API Gateway      │ ← Rate limiting, auth
│  + WAF            │ ← Prompt injection filter
└─────────┬─────────┘
          ▼
┌───────────────────┐
│  Bedrock          │ ← Guardrails (content filter)
│  Guardrails       │ ← PII redaction
└─────────┬─────────┘
          ▼
┌───────────────────┐
│  Foundation Model │ ← Access-controlled
└─────────┬─────────┘
          ▼
┌───────────────────┐
│  CloudWatch       │ ← Token usage, latency, cost
│  + CloudTrail     │ ← Audit trail
└───────────────────┘
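The snippets below share a few inputs (`var.region`, `var.environment`, and the caller identity data source). A minimal sketch of those assumed declarations — names and defaults are illustrative:

```hcl
variable "region" {
  description = "AWS region for Bedrock and SageMaker resources"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Deployment environment tag, e.g. production"
  type        = string
  default     = "production"
}

# Used to build account-scoped ARNs (e.g. SageMaker endpoint ARNs)
data "aws_caller_identity" "current" {}
```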

Bedrock Guardrails

resource "aws_bedrock_guardrail" "ai_safety" {
  name                      = "production-ai-guardrails"
  blocked_input_messaging   = "Your request was blocked by our safety filters."
  blocked_outputs_messaging = "The response was filtered for safety."
  description               = "Content filtering and PII protection for production AI"

  # Content filters — block harmful content
  content_policy_config {
    filters_config {
      type            = "HATE"
      input_strength  = "HIGH"
      output_strength = "HIGH"
    }
    filters_config {
      type            = "VIOLENCE"
      input_strength  = "HIGH"
      output_strength = "HIGH"
    }
    filters_config {
      type            = "SEXUAL"
      input_strength  = "HIGH"
      output_strength = "HIGH"
    }
    filters_config {
      type            = "INSULTS"
      input_strength  = "MEDIUM"
      output_strength = "MEDIUM"
    }
    filters_config {
      type            = "MISCONDUCT"
      input_strength  = "HIGH"
      output_strength = "HIGH"
    }
    filters_config {
      type            = "PROMPT_ATTACK"
      input_strength  = "HIGH"
      output_strength = "NONE"
    }
  }

  # PII detection and redaction
  sensitive_information_policy_config {
    pii_entities_config {
      type   = "EMAIL"
      action = "ANONYMIZE"
    }
    pii_entities_config {
      type   = "PHONE"
      action = "ANONYMIZE"
    }
    pii_entities_config {
      type   = "US_SOCIAL_SECURITY_NUMBER"
      action = "BLOCK"
    }
    pii_entities_config {
      type   = "CREDIT_DEBIT_CARD_NUMBER"
      action = "BLOCK"
    }
  }

  # Topic restrictions
  topic_policy_config {
    topics_config {
      name       = "financial-advice"
      definition = "Providing specific investment advice, stock recommendations, or financial planning guidance"
      type       = "DENY"
    }
    topics_config {
      name       = "medical-diagnosis"
      definition = "Diagnosing medical conditions or recommending specific treatments"
      type       = "DENY"
    }
  }

  tags = {
    Component   = "ai-security"
    Environment = var.environment
  }
}

resource "aws_bedrock_guardrail_version" "v1" {
  guardrail_arn = aws_bedrock_guardrail.ai_safety.guardrail_arn
  description   = "Initial production version"
}
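Guardrails are not applied automatically — callers must pass the guardrail identifier and version with each `InvokeModel` request. One way to surface those values to application code is through outputs (a sketch; the output names are my own):

```hcl
# Expose the identifiers applications need to send as
# guardrailIdentifier / guardrailVersion on InvokeModel calls.
output "guardrail_id" {
  value = aws_bedrock_guardrail.ai_safety.guardrail_id
}

output "guardrail_version" {
  value = aws_bedrock_guardrail_version.v1.version
}
```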

Model Access IAM Policies

# Least-privilege access to specific models
resource "aws_iam_policy" "bedrock_readonly" {
  name = "bedrock-inference-only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowSpecificModels"
        Effect = "Allow"
        Action = [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ]
        Resource = [
          "arn:aws:bedrock:${var.region}::foundation-model/anthropic.claude-3-5-sonnet*",
          "arn:aws:bedrock:${var.region}::foundation-model/amazon.titan-embed*"
        ]
      },
      {
        Sid    = "DenyExpensiveModels"
        Effect = "Deny"
        Action = [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ]
        Resource = [
          "arn:aws:bedrock:${var.region}::foundation-model/anthropic.claude-3-opus*"
        ]
      },
      {
        Sid    = "RequireGuardrails"
        Effect = "Deny"
        Action = [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ]
        Resource = "*"
        # The documented condition key is bedrock:GuardrailIdentifier;
        # a trailing wildcard matches any version of the guardrail.
        Condition = {
          StringNotLike = {
            "bedrock:GuardrailIdentifier" = "${aws_bedrock_guardrail.ai_safety.guardrail_arn}*"
          }
        }
      }
    ]
  })
}

# SageMaker endpoint access
resource "aws_iam_policy" "sagemaker_inference" {
  name = "sagemaker-inference-only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "sagemaker:InvokeEndpoint",
          "sagemaker:InvokeEndpointAsync"
        ]
        Resource = "arn:aws:sagemaker:${var.region}:${data.aws_caller_identity.current.account_id}:endpoint/production-*"
      },
      {
        Sid    = "DenyModelModification"
        Effect = "Deny"
        Action = [
          "sagemaker:CreateModel",
          "sagemaker:DeleteModel",
          "sagemaker:DeleteEndpoint",
          "sagemaker:UpdateEndpoint"
        ]
        Resource = "*"
      }
    ]
  })
}
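These policies do nothing until they are attached to a principal. A sketch of wiring them to a hypothetical application role (the role name and ECS trust relationship are assumptions — adjust for whatever runs your AI service):

```hcl
# Hypothetical role the AI application runs as
resource "aws_iam_role" "ai_app" {
  name = "ai-app-inference"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "bedrock" {
  role       = aws_iam_role.ai_app.name
  policy_arn = aws_iam_policy.bedrock_readonly.arn
}

resource "aws_iam_role_policy_attachment" "sagemaker" {
  role       = aws_iam_role.ai_app.name
  policy_arn = aws_iam_policy.sagemaker_inference.arn
}
```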

WAF for Prompt Injection

Simple string matching catches only the crudest injection attempts — treat these rules as a cheap first filter in front of Bedrock's PROMPT_ATTACK guardrail, not as complete protection.

resource "aws_wafv2_web_acl" "ai_api" {
  name  = "ai-api-protection"
  scope = "REGIONAL"

  default_action { allow {} }

  # Rate limiting per IP
  rule {
    name     = "rate-limit"
    priority = 1

    action { block {} }

    statement {
      rate_based_statement {
        limit              = 100
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "ai-rate-limit"
    }
  }

  # Block known prompt injection patterns
  rule {
    name     = "prompt-injection-filter"
    priority = 2

    action { block {} }

    statement {
      or_statement {
        statement {
          byte_match_statement {
            search_string = "ignore previous instructions"
            field_to_match { body { oversize_handling = "MATCH" } }
            text_transformation {
              priority = 0
              type     = "LOWERCASE"
            }
            positional_constraint = "CONTAINS"
          }
        }
        statement {
          byte_match_statement {
            search_string = "you are now"
            field_to_match { body { oversize_handling = "MATCH" } }
            text_transformation {
              priority = 0
              type     = "LOWERCASE"
            }
            positional_constraint = "CONTAINS"
          }
        }
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "prompt-injection"
    }
  }

  visibility_config {
    sampled_requests_enabled   = true
    cloudwatch_metrics_enabled = true
    metric_name                = "ai-api-waf"
  }
}
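A REGIONAL web ACL takes effect only once it is associated with a resource. A sketch of attaching it to the API Gateway stage fronting the AI API (`aws_api_gateway_stage.ai` is assumed to exist elsewhere in the configuration):

```hcl
resource "aws_wafv2_web_acl_association" "ai_api" {
  resource_arn = aws_api_gateway_stage.ai.arn
  web_acl_arn  = aws_wafv2_web_acl.ai_api.arn
}
```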

AI-Specific Monitoring

# CloudWatch dashboard for AI metrics
resource "aws_cloudwatch_dashboard" "ai_security" {
  dashboard_name = "ai-security"

  dashboard_body = jsonencode({
    widgets = [
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6
        properties = {
          title   = "Bedrock Invocations & Guardrail Blocks"
          metrics = [
            # The ModelId dimension must match the full model identifier
            # used at invocation time, including any version suffix
            ["AWS/Bedrock", "Invocations", "ModelId", "anthropic.claude-3-5-sonnet"],
            ["AWS/Bedrock", "GuardrailBlocked", "GuardrailId", aws_bedrock_guardrail.ai_safety.guardrail_id]
          ]
          period = 300
          stat   = "Sum"
        }
      },
      {
        type   = "metric"
        x      = 12
        y      = 0
        width  = 12
        height = 6
        properties = {
          title   = "Token Usage (Cost Proxy)"
          metrics = [
            ["AWS/Bedrock", "InputTokenCount", "ModelId", "anthropic.claude-3-5-sonnet"],
            ["AWS/Bedrock", "OutputTokenCount", "ModelId", "anthropic.claude-3-5-sonnet"]
          ]
          period = 3600
          stat   = "Sum"
        }
      }
    ]
  })
}

# Alarm on unusual AI usage
resource "aws_cloudwatch_metric_alarm" "ai_cost_spike" {
  alarm_name          = "ai-token-usage-spike"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "InputTokenCount"
  namespace           = "AWS/Bedrock"
  period              = 3600
  statistic           = "Sum"
  threshold           = 1000000  # 1M tokens/hour
  alarm_description   = "Unusual AI token consumption — possible abuse"
  alarm_actions       = [aws_sns_topic.security_alerts.arn]

  dimensions = {
    ModelId = "anthropic.claude-3-5-sonnet"
  }
}
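The alarm publishes to an SNS topic that is referenced but not defined above. A minimal sketch — the topic name and email endpoint are placeholders:

```hcl
resource "aws_sns_topic" "security_alerts" {
  name = "ai-security-alerts"
}

resource "aws_sns_topic_subscription" "security_email" {
  topic_arn = aws_sns_topic.security_alerts.arn
  protocol  = "email"
  endpoint  = "security-team@example.com" # placeholder address
}
```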

CloudTrail for AI Audit

resource "aws_cloudtrail" "ai_audit" {
  name                          = "ai-operations-trail"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id
  include_global_service_events = false
  is_multi_region_trail         = false

  # Data events for resource types beyond S3/Lambda/DynamoDB require
  # advanced event selectors; the classic event_selector block rejects them.
  advanced_event_selector {
    name = "Bedrock model data events"
    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }
    field_selector {
      field  = "resources.type"
      equals = ["AWS::Bedrock::Model"]
    }
  }

  advanced_event_selector {
    name = "Management events"
    field_selector {
      field  = "eventCategory"
      equals = ["Management"]
    }
  }

  tags = {
    Component = "ai-security-audit"
  }
}
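Similarly, the trail writes to an S3 bucket that is referenced but not shown. A sketch of that bucket plus the bucket policy CloudTrail requires before it will deliver logs (the bucket name is a placeholder — S3 names are globally unique):

```hcl
resource "aws_s3_bucket" "audit_logs" {
  bucket = "my-org-ai-audit-logs" # placeholder name
}

# CloudTrail needs explicit permission to write into the bucket
resource "aws_s3_bucket_policy" "audit_logs" {
  bucket = aws_s3_bucket.audit_logs.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AWSCloudTrailAclCheck"
        Effect    = "Allow"
        Principal = { Service = "cloudtrail.amazonaws.com" }
        Action    = "s3:GetBucketAcl"
        Resource  = aws_s3_bucket.audit_logs.arn
      },
      {
        Sid       = "AWSCloudTrailWrite"
        Effect    = "Allow"
        Principal = { Service = "cloudtrail.amazonaws.com" }
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.audit_logs.arn}/*"
        Condition = {
          StringEquals = { "s3:x-amz-acl" = "bucket-owner-full-control" }
        }
      }
    ]
  })
}
```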

Conclusion

AI security requires guardrails at every layer: content filtering and PII redaction in Bedrock, least-privilege IAM policies per model, WAF rules for prompt injection, and CloudWatch monitoring for usage anomalies. Terraform makes these security controls version-controlled and consistent across environments — essential as AI workloads move from experiments to production.


Written by Luca Berton

DevOps Engineer, AWS Partner, Terraform expert, and author. Creator of Ansible Pilot, Terraform Pilot, and CopyPasteLearn.