INFRASTRUCTURE
AS CODE

// Define your infrastructure. Deploy with confidence.

TERRAFORM CHANGED THE GAME.

Manual server provisioning is a relic of the past. In a world where infrastructure can scale instantly, where cloud resources spin up and down by the minute, you need a way to define your entire infrastructure in code. You need Terraform.

WHY TERRAFORM?

Terraform uses a declarative approach—you describe what you want, not how to get there. It figures out the implementation. This means reproducible, version-controlled, auditable infrastructure that you can deploy to AWS, GCP, Azure, or your own on-premises servers with the same configuration language.

BECOME INFRASTRUCTURE-AS-CODE NATIVE.

Learn HCL (HashiCorp Configuration Language), understand providers and modules, master state management, and build production-grade infrastructure that scales. Whether you're managing one VPS or ten thousand servers, Terraform gives you the power to control it all.

BEGIN YOUR JOURNEY →

// The Path to Infrastructure Mastery

10 lessons. Complete Terraform control.

LESSON 01

Introduction to Terraform

What is Infrastructure as Code? Installing Terraform and writing your first configuration.

Beginner
LESSON 02

HCL Fundamentals

Syntax, variables, outputs, and the building blocks of Terraform configurations.

Beginner
LESSON 03

Providers & Resources

Understanding providers, resources, and data sources. Working with AWS, GCP, Azure.

Beginner
LESSON 04

Variables & Outputs

Input variables, output values, and creating reusable configurations.

Intermediate
LESSON 05

State Management

Understanding Terraform state, backends, and remote state for teams.

Intermediate
LESSON 06

Modules

Creating reusable infrastructure modules. Modular design patterns.

Intermediate
LESSON 07

Workspaces

Managing multiple environments. Dev, staging, production isolation.

Intermediate
LESSON 08

Provisioning & Automation

Provisioners, remote-exec, cloud-init, and bootstrapping instances.

Advanced
LESSON 09

Testing & Security

Terratest, Sentinel policies, secrets management, and secure configurations.

Advanced
LESSON 10

Production Patterns

CI/CD integration, Atlantis, Terraform Cloud, and enterprise patterns.

Advanced
← Back to Lessons

// Lesson 1: Introduction to Terraform

What is Infrastructure as Code?

Infrastructure as Code (IaC) is the practice of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Think of it like this: instead of manually clicking through a cloud console to create servers, networks, and storage, you write code that describes exactly what you want—and the IaC tool makes it happen.

Terraform is HashiCorp's answer to IaC. It was released in 2014 and has since become the industry standard for multi-cloud infrastructure provisioning. Unlike cloud-specific tools (like AWS CloudFormation or Azure Resource Manager), Terraform is provider-agnostic—you can manage resources across AWS, GCP, Azure, Kubernetes, and even on-premises infrastructure using the same language and workflow.

The Declarative Paradigm

Here's what makes Terraform special: it's declarative. When you write a Terraform configuration, you're not writing a script that says "do this, then do that." Instead, you're declaring the desired end state of your infrastructure. Terraform then figures out how to achieve that state.

Consider the difference:
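As a sketch of the contrast (the imperative CLI steps are illustrative; the resource block is the declarative Terraform equivalent):

```hcl
# Imperative: you script every step and handle ordering yourself
#   aws ec2 run-instances --image-id ami-0c55b159cbfafe1f0 --instance-type t2.micro
#   aws ec2 wait instance-running --instance-ids ...
#   aws ec2 create-tags --resources ... --tags Key=Name,Value=web

# Declarative: you state the end result; Terraform plans the steps
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "web"
  }
}
```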

The declarative approach means Terraform can analyze your current infrastructure state, compare it to your desired state, and generate an execution plan. You review the plan, make sure it looks right, and then Terraform applies only the changes needed. This is called the Terraform workflow.

The Terraform Workflow

Every Terraform project follows the same basic workflow:

  1. Write: You create configuration files that describe your infrastructure.
  2. Init: Run terraform init to initialize the working directory and download necessary providers.
  3. Plan: Run terraform plan to see what changes Terraform will make. This is your chance to review before applying.
  4. Apply: Run terraform apply to make the changes happen.
  5. Destroy: Run terraform destroy when you want to tear down the infrastructure.

This workflow is idempotent—running it multiple times with the same configuration produces the same result. Terraform is smart enough to know what's already there and only modify what's changed.

Installing Terraform

On Linux or macOS, the easiest way to install Terraform is with a package manager:

# On macOS with Homebrew
brew install terraform

# On Debian/Ubuntu, via the HashiCorp apt repository
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

On Windows, you can use Chocolatey, Scoop, or download the binary directly from terraform.io. Verify the installation:

terraform version
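The output should look something like this (your version and platform will differ):

```
Terraform v1.5.0
on linux_amd64
```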

Your First Terraform Configuration

Let's create something simple—a local file. This demonstrates Terraform without requiring cloud credentials:

# main.tf
resource "local_file" "hello" {
  filename = "hello.txt"
  content  = "Hello from Terraform!"
}

Now let's walk through the workflow:

# Initialize Terraform (downloads providers)
terraform init

# See what will be created
terraform plan

# Apply the configuration
terraform apply

# Verify the file was created
cat hello.txt

Congratulations—you've just created infrastructure with code! That local_file resource is now managed by Terraform. If you change the content and run terraform apply again, Terraform will update the file. If you run terraform destroy, it will delete the file.

Understanding the Directory

After running terraform init, you'll notice some new files and directories:
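A typical layout after init looks like this (a sketch; exact contents vary by provider and Terraform version):

```
.
├── main.tf
├── .terraform/              # provider plugins downloaded by init
│   └── providers/
└── .terraform.lock.hcl      # dependency lock file
```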

The .terraform.lock.hcl file is important—commit it to version control to ensure everyone on your team uses the same provider versions.
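A minimal .gitignore for a Terraform project, assuming the common convention of committing the lock file but never the state:

```
# .terraform/ holds downloaded providers; terraform init re-creates it
.terraform/

# State files can contain secrets; never commit them
*.tfstate
*.tfstate.*

# Note: .terraform.lock.hcl is not matched above and stays committed
```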

Next: HCL Fundamentals →
← Back to Lessons

// Lesson 2: HCL Fundamentals

HashiCorp Configuration Language (HCL) is Terraform's domain-specific language. It's designed to be both human-readable and machine-parseable. If you've worked with JSON, you'll find HCL familiar—but much more pleasant to write.

Blocks and Arguments

The fundamental building blocks of HCL are blocks and arguments:

# This is a block
resource "aws_instance" "web" {
  # These are arguments
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  tags = {
    Name = "HelloWorld"
  }
}

Blocks have a type (resource), zero or more labels ("aws_instance" and "web"), and a body containing arguments. Arguments are key-value pairs that configure the resource.

Identifiers and Strings

Identifiers (names of resources, variables, etc.) can contain letters, digits, underscores, and hyphens. They must not start with a digit:

# Valid identifiers
my_resource
aws_instance
web_server_01

# Invalid
123resource  # Can't start with a digit

# Legal, but discouraged by Terraform style conventions
my-resource  # Prefer underscores over hyphens

Strings are wrapped in double quotes. HCL supports escape sequences:

content = "Hello\nWorld"  # Newline
path    = "C:\\Users\\name"  # Windows path

Data Types

HCL supports several data types:

Strings

name = "web-server"

Numbers

count = 5
timeout = 300

Booleans

enable_monitoring = true

Lists (Arrays)

# Implicit list
ports = [80, 443, 8080]

# (Type constraints like list(number) appear only in
# variable declarations, covered in Lesson 4)

Maps (Objects)

tags = {
  Name = "web-server"
  Environment = "production"
  Owner = "team-infra"
}

Sets

# Sets are like lists but with no duplicates and no order
availability_zones = toset(["us-east-1a", "us-east-1b"])

References

One of HCL's most powerful features is the ability to reference values from other resources:

resource "aws_security_group" "web" {
  name = "web-sg"
  
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  # Reference the security group
  vpc_security_group_ids = [aws_security_group.web.id]
}

The syntax resource_type.resource_name.attribute lets you access resource attributes. In this case, we're grabbing the ID of the security group we created.

Expressions and Functions

HCL supports a rich expression syntax including arithmetic, comparisons, and built-in functions:

# String interpolation
name = "server-${var.environment}"

# Arithmetic
total = var.instance_count * 2

# Conditionals
instance_type = var.is_production ? "t3.large" : "t3.micro"

# Built-in functions
upper_name = upper(var.name)
joined_ids = join(",", aws_instance.web[*].id)

Terraform includes hundreds of functions for string manipulation, arithmetic, IP calculations, and more. Check the docs for the full list.
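You can experiment with functions interactively using terraform console:

```
$ terraform console
> upper("web")
"WEB"
> join(",", ["a", "b"])
"a,b"
> cidrhost("10.0.0.0/16", 5)
"10.0.0.5"
```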

Comments

HCL supports three comment styles:

# Single-line comment (hash style)
// Single-line comment (C++ style)
/* Multi-line
   comment */
Next: Providers & Resources →
← Back to Lessons

// Lesson 3: Providers & Resources

Providers are plugins that Terraform uses to interact with cloud platforms, SaaS providers, and other services. Each provider exposes resources that you can manage with Terraform.

How Providers Work

When you run terraform init, Terraform downloads the providers specified in your configuration. These are typically distributed as plugins—binary executables that Terraform calls out to when managing resources.

Providers are configured in a provider block:

provider "aws" {
  region = "us-east-1"
  
  # You can also configure credentials here
  # But prefer using environment variables or shared config
}
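You can pin exactly which providers terraform init downloads with a required_providers block (the version constraint here is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```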

Common ways to provide credentials, from highest to lowest precedence for the AWS provider:

  1. Static credentials in the provider block (avoid committing these)
  2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  3. Shared credentials file (~/.aws/credentials)
  4. Shared config file (~/.aws/config)

Common Providers

AWS Provider

provider "aws" {
  region = "us-east-1"
}

Google Cloud Provider

provider "google" {
  project = "my-project"
  region  = "us-central1"
}

Azure Provider

provider "azurerm" {
  features {}
}

Resources

Resources are the most important element in Terraform. Each resource block describes one or more infrastructure objects:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  tags = {
    Name = "web-server"
  }
}

The resource type (aws_instance) tells Terraform which provider to use. The first label (web) is a local name you use to reference this resource within your configuration. The arguments depend on the resource type.

Resource Behavior

Terraform's lifecycle for each resource:

  1. Create: Resource doesn't exist—Terraform creates it
  2. Read: Terraform reads the current state
  3. Update: Configuration changed—Terraform updates in place if supported
  4. Destroy: Resource removed—Terraform destroys it

Update-in-Place

Many resources support in-place updates. For example, changing tags on an AWS instance:

resource "aws_instance" "web" {
  # Changing this will update in place
  tags = {
    Name = "updated-name"
  }
}

Recreate

Some changes require destroying and recreating the resource. For example, changing the AMI an instance is built from:

resource "aws_instance" "web" {
  # Changing the AMI forces Terraform to destroy and recreate the instance
  ami = "ami-0c55b159cbfafe1f0"
}

Terraform will warn you when this happens in the plan phase.
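In the plan output, forced replacement is marked with -/+ (an abridged sketch; the attribute that triggered it is flagged in the diff):

```
# aws_instance.web must be replaced
-/+ resource "aws_instance" "web" {
        ... # attributes forcing replacement are marked in the diff
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```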

Data Sources

Data sources let you fetch information from existing infrastructure—resources that weren't created by Terraform:

# Get information about an existing AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]  # Canonical
  
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Use it in a resource
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}

Data sources are read-only—they don't create or modify anything.

Next: Variables & Outputs →
← Back to Lessons

// Lesson 4: Variables & Outputs

Variables and outputs make your Terraform configurations reusable and composable. They allow you to parameterize your infrastructure and expose important values for use elsewhere.

Input Variables

Input variables let you customize Terraform configurations without modifying the code. Think of them like function parameters.

Defining Variables

# variables.tf
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "Environment name"
  type        = string
}

Variable Types

# String
variable "name" {
  type    = string
  default = "web-server"
}

# Number (note: "count" is reserved and can't be used as a variable name)
variable "instance_count" {
  type    = number
  default = 3
}

# Boolean
variable "enable_monitoring" {
  type    = bool
  default = true
}

# List
variable "ports" {
  type    = list(number)
  default = [80, 443]
}

# Map
variable "tags" {
  type = map(string)
  default = {
    Environment = "production"
    Team        = "infra"
  }
}

# Object
variable "server_config" {
  type = object({
    cpu    = number
    memory = number
    disk   = number
  })
  default = {
    cpu    = 2
    memory = 4096
    disk   = 50
  }
}

# Set
variable "availability_zones" {
  type    = set(string)
  default = ["us-east-1a", "us-east-1b"]
}

Using Variables

resource "aws_instance" "web" {
  # Reference variable with var.variable_name
  instance_type = var.instance_type
  
  tags = var.tags
}

Passing Variables

# Via command line
terraform apply -var="environment=prod" -var="instance_type=t3.large"

# Via file
terraform apply -var-file="prod.tfvars"

# Via environment variables
export TF_VAR_environment=prod

Variable Files

# dev.tfvars
environment   = "development"
instance_type = "t3.micro"

# prod.tfvars
environment   = "production"
instance_type = "t3.large"

Output Values

Output values expose information about your infrastructure on the command line and make it available for use by other Terraform configurations:

# outputs.tf
output "instance_id" {
  description = "The ID of the EC2 instance"
  value       = aws_instance.web.id
}

output "instance_public_ip" {
  description = "Public IP address"
  value       = aws_instance.web.public_ip
  sensitive   = true  # Hide from output
}

output "all_instance_ids" {
  description = "IDs of all instances"
  value       = aws_instance.web[*].id
}

After running terraform apply, Terraform displays outputs:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

instance_id        = "i-0abc123def456789"
instance_public_ip = <sensitive>

You can also retrieve outputs later:

terraform output
terraform output instance_id

Local Values

Local values let you define intermediate values that are calculated once per configuration:

locals {
  # Common tags applied to all resources
  common_tags = {
    Project     = "my-app"
    Environment = var.environment
    ManagedBy   = "terraform"
  }
  
  # Derived values
  name_prefix = "${var.environment}-${var.project}"
}

resource "aws_instance" "web" {
  # Use local value
  tags = merge(local.common_tags, { Name = local.name_prefix })
}
Next: State Management →
← Back to Lessons

// Lesson 5: State Management

Terraform uses state to map your configuration to real-world resources. Understanding state is crucial for working effectively with Terraform, especially in team environments.

What is State?

When Terraform reads your configuration, it doesn't know what resources currently exist. Instead, it looks at its state file—a JSON document that maps resource addresses to their real-world identifiers and attributes.

{
  "version": 4,
  "terraform_version": "1.5.0",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abc123def456789",
            "instance_type": "t2.micro",
            "public_ip": "54.123.45.67",
            ...
          }
        }
      ]
    }
  ]
}

Without state, Terraform wouldn't know that aws_instance.web corresponds to the EC2 instance i-0abc123def456789.

Local State

By default, Terraform stores state in a local file called terraform.tfstate:

# Default behavior - local state
terraform {
  # This is actually the default
  backend "local" {
    path = "terraform.tfstate"
  }
}

Local state works for individual development but has problems:

  - No sharing: teammates can't see or update the same state
  - No locking: two concurrent runs can corrupt the file
  - Sensitive values sit unencrypted on a single machine's disk

Remote Backends

Remote backends store state in a shared location accessible by your team:

S3 Backend (AWS)

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

The DynamoDB table enables state locking—preventing concurrent modifications:

# Create DynamoDB table for locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Google Cloud Storage

terraform {
  backend "gcs" {
    bucket = "my-terraform-state"
    prefix = "prod/state"
  }
}

Azure Blob Storage

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-rg"
    storage_account_name = "terraformstate"
    container_name       = "terraform-state"
    key                  = "prod.terraform.tfstate"
  }
}

Terraform Cloud/Enterprise

terraform {
  backend "remote" {
    organization = "my-org"
    workspaces {
      name = "prod"
    }
  }
}

State Locking

When using remote backends with locking, Terraform acquires a lock before performing any operation:

$ terraform apply
Acquiring state lock... (ID: abc123)
Acquired state lock.

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Releasing state lock... 
Released state lock.

If another user or process tries to run Terraform while it's locked, they'll get an error:

Error: Error acquiring the state lock

Lock Info:
  ID:        abc123
  Path:      my-terraform-state/prod/terraform.tfstate
  Operation: OperationTypeApply
  Who:       user@hostname
  Version:   1.5.0
  Created:   2024-01-15 10:30:00.0000000Z
  Info:

State Manipulation

Sometimes you need to manipulate state directly (with caution):

# List resources in state
terraform state list

# Show a specific resource
terraform state show aws_instance.web

# Move a resource to a new name
terraform state mv aws_instance.web aws_instance.app

# Remove a resource from state (doesn't destroy the actual resource)
terraform state rm aws_instance.old

# Pull current state (for debugging)
terraform state pull

Warning: Direct state manipulation is dangerous. Always make a backup first:

cp terraform.tfstate terraform.tfstate.backup
Next: Modules →
← Back to Lessons

// Lesson 6: Modules

Modules are containers for multiple resources that are used together. They're the key to creating reusable, composable infrastructure code.

Why Modules?

Without modules, your Terraform configuration grows unwieldy. With modules, you can:

  - Reuse the same infrastructure pattern across projects and environments
  - Encapsulate complexity behind a small set of inputs and outputs
  - Version and share infrastructure code, publicly or within your team

Creating Modules

A module is simply a collection of .tf files in a directory:

# modules/ec2/main.tf
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ami" {
  description = "AMI ID"
  type        = string
}

variable "tags" {
  description = "Tags to apply"
  type        = map(string)
  default     = {}
}

resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type
  tags          = var.tags
}

output "instance_id" {
  description = "EC2 instance ID"
  value       = aws_instance.this.id
}

output "public_ip" {
  description = "Public IP address"
  value       = aws_instance.this.public_ip
}

Using Modules

Call a module using the module block:

# main.tf
module "web_server" {
  source = "./modules/ec2"
  
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  tags = {
    Name = "web-server"
  }
}

# Reference module outputs with module.<name>.<output>
output "web_public_ip" {
  value = module.web_server.public_ip
}

Module Sources

Modules can be sourced from various locations:

# Local path
module "local" {
  source = "./modules/ec2"
}

# Terraform Registry (public)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.0.0"
}

# GitHub
module "example" {
  source = "github.com/user/repo//path/to/module"
}

# GitLab
module "example" {
  source = "gitlab.com/user/repo//path/to/module"
}

# Bitbucket
module "example" {
  source = "bitbucket.org/user/repo//path/to/module"
}

# Generic Git repository
module "example" {
  source = "git::https://example.com/repo.git"
}

# S3 bucket (for private modules)
module "example" {
  source = "s3::https://s3.amazonaws.com/bucket/module.zip"
}

Module Versioning

For modules from external sources, specify versions:

# From Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"  # Any 3.x version
}

# From Git
module "example" {
  source = "github.com/user/repo//module?ref=v1.2.3"
}

Child Modules

Modules can contain other modules:

# modules/networking/vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"
  
  name = "my-vpc"
  cidr = "10.0.0.0/16"
}

# modules/networking/subnets.tf
# (illustrative: assumes a local ./subnets module with these inputs)
module "subnets" {
  source = "./subnets"

  vpc_id             = module.vpc.vpc_id
  vpc_cidr           = module.vpc.vpc_cidr_block
  availability_zones = ["us-east-1a", "us-east-1b"]
}

Module Composition

Build higher-level modules from lower-level ones:

# modules/application/main.tf
module "networking" {
  source = "../networking"
}

module "compute" {
  source = "../compute"
  
  vpc_id = module.networking.vpc_id
}

module "database" {
  source = "../database"
  
  subnet_ids = module.networking.database_subnet_ids
}
Next: Workspaces →
← Back to Lessons

// Lesson 7: Workspaces

Workspaces let you manage multiple environments (dev, staging, production) with a single Terraform configuration. They provide isolation without code duplication.

How Workspaces Work

Each workspace has its own state file. When you switch workspaces, Terraform uses a different state:

# Default workspace is "default"
$ terraform workspace list
* default

# Create a new workspace
$ terraform workspace new staging
Created and switched to workspace "staging"!

# List workspaces
$ terraform workspace list
  default
* staging
  production

# Switch workspace
$ terraform workspace select production
Switched to workspace "production".
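With the default local backend, each non-default workspace keeps its own state file under terraform.tfstate.d/:

```
.
├── terraform.tfstate          # "default" workspace
└── terraform.tfstate.d/
    ├── staging/
    │   └── terraform.tfstate
    └── production/
        └── terraform.tfstate
```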

Workspace-Specific Configuration

Use the terraform.workspace variable to customize behavior per workspace:

locals {
  environment = terraform.workspace == "default" ? "development" : terraform.workspace
  
  # Environment-specific configuration
  config = {
    development = {
      instance_type = "t3.micro"
      desired_size  = 1
    }
    staging = {
      instance_type = "t3.small"
      desired_size  = 2
    }
    production = {
      instance_type = "t3.large"
      desired_size  = 5
    }
  }
  
  # Look up config for current environment
  env_config = local.config[local.environment]
}

resource "aws_instance" "web" {
  instance_type = local.env_config.instance_type
  # ...
}

Workspaces vs. Modules

Use workspaces for environment isolation within the same configuration:

# Same code, different workspaces
# Dev: terraform workspace new dev
# Stage: terraform workspace new staging
# Prod: terraform workspace new production

resource "aws_instance" "server" {
  # This instance exists in each workspace's state
  tags = {
    Environment = terraform.workspace
  }
}

Use separate root configurations (or modules) for complete infrastructure separation:

# Separate directories, each with its own state
# infrastructure/
#   dev/
#     main.tf
#   prod/
#     main.tf

Workspace Gotchas

Be aware of these common pitfalls:

  - It's easy to forget which workspace is selected and apply to the wrong environment
  - All workspaces share the same backend, credentials, and access controls
  - terraform.workspace is "default" unless you explicitly switch, which can surprise automation

Next: Provisioning & Automation →
← Back to Lessons

// Lesson 8: Provisioning & Automation

While Terraform is great for creating infrastructure, provisioners let you customize instances after creation—installing software, configuring systems, and bootstrapping applications.

Provisioner Types

local-exec

Runs commands on the machine running Terraform:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  provisioner "local-exec" {
    command = "echo 'Instance ${self.public_ip} created!' > created.txt"
  }
}

remote-exec

Runs commands on the created resource via SSH or WinRM:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }
  
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "sudo systemctl enable nginx",
      "sudo systemctl start nginx"
    ]
  }
}

File Provisioner

Upload files or directories to the created resource:

resource "aws_instance" "web" {
  # ... connection config ...
  
  provisioner "file" {
    content     = file("templates/nginx.conf")
    destination = "/tmp/nginx.conf"
  }
  
  provisioner "file" {
    source      = "scripts/"
    destination = "/home/ubuntu/scripts/"
  }
}

Provisioner Lifecycle

By default, provisioners run after resource creation. Use when to change this:

# Run on destroy (note: destroy is an unquoted keyword)
provisioner "local-exec" {
  when    = destroy
  command = "echo 'Cleaning up...'"
}

Use on_failure to control behavior when provisioners fail:

provisioner "remote-exec" {
  on_failure = continue  # Don't fail the whole apply
  inline     = ["some command"]
}

Cloud-Init

For cloud instances, cloud-init is often better than Terraform provisioners:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  user_data = <<-EOF
              #!/bin/bash
              apt-get update
              apt-get install -y nginx
              systemctl enable nginx
              systemctl start nginx
              EOF
}

Alternative: EC2 Instance Connect

Modern AWS accounts support EC2 Instance Connect, which pushes short-lived SSH keys through the AWS API (access is controlled by IAM), so you don't have to manage long-lived key pairs. The agent ships with recent Amazon Linux and Ubuntu AMIs. Separately, it's good practice to harden the instance metadata service by requiring IMDSv2:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  
  # Require IMDSv2 session tokens for the metadata service
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
  }
}

Provisioner Alternatives

Consider these alternatives for production use:

  - cloud-init / user_data: bootstrap at first boot without SSH access
  - Packer: bake software into machine images ahead of time
  - Configuration management: Ansible, Chef, or Puppet for ongoing configuration

HashiCorp itself recommends provisioners only as a last resort.

Next: Testing & Security →
← Back to Lessons

// Lesson 9: Testing & Security

Infrastructure as Code needs the same testing rigor as application code. Additionally, Terraform configurations often contain sensitive data that requires careful handling.

Static Analysis

terraform validate

# Validates syntax and internal consistency
terraform validate

tfsec

# Security scanning
tfsec .
# Install: brew install tfsec

# Example output (abridged):
# 
# Result #1 HIGH Security group allows SSH from internet
# 
# 1 tests, 0 passed, 1 high, 0 medium, 0 low
# 
# terraform/aws/main.tf:6

Checkov

# Another security scanner
checkov -d .
# Install: pip install checkov

Automated Testing with Terratest

Terratest is a Go library for testing Terraform:

package test

import (
    "testing"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraformWebServer(t *testing.T) {
    terraformOptions := &terraform.Options{
        TerraformDir: "../examples/web-server",
        Vars: map[string]interface{}{
            "environment": "test",
        },
    }
    
    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)
    
    instanceID := terraform.Output(t, terraformOptions, "instance_id")
    assert.NotEmpty(t, instanceID)
}

Sensitive Variables

Mark variables as sensitive to prevent exposure:

variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true  # Won't show in CLI output
}

variable "api_key" {
  description = "Third-party API key"
  type        = string
  sensitive   = true
}

Note: Sensitive variables still appear in plain text in the state file. Use appropriate access controls.

Secrets in State

Terraform state can contain sensitive data. Protect it accordingly:

  - Enable encryption at rest on the backend (e.g., encrypt = true for S3)
  - Restrict who can read the state bucket or storage account
  - Never commit terraform.tfstate to version control
  - Consider a dedicated secrets manager (such as Vault) instead of plain variables

Terraform Cloud/Enterprise

For enterprise use, Terraform Cloud provides:

  - Encrypted, access-controlled remote state
  - Sentinel policies enforced on every run
  - Audit logging of plans and applies
  - Team-based permissions and single sign-on

Next: Production Patterns →
← Back to Lessons

// Lesson 10: Production Patterns

Running Terraform in production requires careful consideration of workflows, collaboration, and automation. This lesson covers patterns for teams.

CI/CD Integration

GitHub Actions

name: Terraform

on:
  push:
    branches: [main]
  pull_request:

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v3

      - uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.0

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve

Atlantis

Atlantis is a self-hosted tool for Terraform CI/CD:

# atlantis.yaml (repo-level config, current schema)
version: 3
projects:
  - name: production
    dir: .
    workflow: production
    terraform_version: v1.5.0

workflows:
  production:
    plan:
      steps:
        - init
        - plan
    apply:
      steps:
        - apply

Terraform Cloud

Terraform Cloud is HashiCorp's hosted platform (Terraform Enterprise is its self-hosted edition). It provides:

  - Remote plan and apply runs triggered from version control
  - Managed state storage with locking and history
  - A private module registry for your organization
  - Policy checks and team-based access controls

Repository Structure

Recommended repository layout:

infrastructure/
├── .gitignore
├── .terraform-version
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   └── prod/
│       ├── main.tf
│       └── terraform.tfvars
├── modules/
│   ├── networking/
│   ├── compute/
│   └── database/
└── Makefile

The Road Ahead

Congratulations on completing this guide! You've learned:

  - Infrastructure as Code fundamentals and the Terraform workflow
  - HCL syntax: variables, outputs, locals, and expressions
  - Providers, resources, data sources, and state management
  - Modules, workspaces, provisioners, testing, and production patterns

Continue your journey with:

  - The Terraform Registry, for community providers and modules
  - HashiCorp's official Terraform documentation and tutorials
  - The HashiCorp Certified: Terraform Associate exam

← Back to Lessons