// Define your infrastructure. Deploy with confidence.
TERRAFORM CHANGED THE GAME.
Manual server provisioning is a relic of the past. In a world where infrastructure can scale instantly, where cloud resources spin up and down by the minute, you need a way to define your entire infrastructure in code. You need Terraform.
WHY TERRAFORM?
Terraform uses a declarative approach—you describe what you want, not how to get there. It figures out the implementation. This means reproducible, version-controlled, auditable infrastructure that you can deploy to AWS, GCP, Azure, or your own on-premises servers with the same configuration language.
BECOME INFRASTRUCTURE-AS-CODE NATIVE.
Learn HCL (HashiCorp Configuration Language), understand providers and modules, master state management, and build production-grade infrastructure that scales. Whether you're managing one VPS or ten thousand servers, Terraform gives you the power to control it all.
10 lessons. Complete Terraform control.
1. Beginner: What is Infrastructure as Code? Installing Terraform and writing your first configuration.
2. Beginner: Syntax, variables, outputs, and the building blocks of Terraform configurations.
3. Beginner: Understanding providers, resources, and data sources. Working with AWS, GCP, Azure.
4. Beginner: Input variables, output values, and creating reusable configurations.
5. Intermediate: Understanding Terraform state, backends, and remote state for teams.
6. Intermediate: Creating reusable infrastructure modules. Modular design patterns.
7. Intermediate: Managing multiple environments. Dev, staging, production isolation.
8. Intermediate: Provisioners, remote-exec, cloud-init, and bootstrapping instances.
9. Advanced: Terratest, Sentinel policies, secrets management, and secure configurations.
10. Advanced: CI/CD integration, Atlantis, Terraform Cloud, and enterprise patterns.
Infrastructure as Code (IaC) is the practice of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Think of it like this: instead of manually clicking through a cloud console to create servers, networks, and storage, you write code that describes exactly what you want—and the IaC tool makes it happen.
Terraform is HashiCorp's answer to IaC. It was released in 2014 and has since become the industry standard for multi-cloud infrastructure provisioning. Unlike cloud-specific tools (like AWS CloudFormation or Azure Resource Manager), Terraform is provider-agnostic—you can manage resources across AWS, GCP, Azure, Kubernetes, and even on-premises infrastructure using the same language and workflow.
Here's what makes Terraform special: it's declarative. When you write a Terraform configuration, you're not writing a script that says "do this, then do that." Instead, you're declaring the desired end state of your infrastructure. Terraform then figures out how to achieve that state.
Consider the difference:
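To make the contrast concrete, here is a sketch of the same instance created imperatively with the AWS CLI versus declaratively with Terraform (the AMI ID is a placeholder):

```hcl
# Imperative: a script that says how, step by step
# aws ec2 run-instances --image-id ami-0c55b159cbfafe1f0 --instance-type t2.micro
# aws ec2 create-tags --resources <instance-id> --tags Key=Name,Value=web

# Declarative: a description of what should exist
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "web"
  }
}
```

Run the imperative script twice and you get two instances; apply the declarative configuration twice and nothing changes.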
The declarative approach means Terraform can analyze your current infrastructure state, compare it to your desired state, and generate an execution plan. You review the plan, make sure it looks right, and then Terraform applies only the changes needed. This is called the Terraform workflow.
Every Terraform project follows the same basic workflow:
1. terraform init to initialize the working directory and download the necessary providers.
2. terraform plan to see what changes Terraform will make. This is your chance to review before applying.
3. terraform apply to make the changes happen.
4. terraform destroy when you want to tear down the infrastructure.

This workflow is idempotent: running it multiple times with the same configuration produces the same result. Terraform is smart enough to know what's already there and only modify what's changed.
On Linux or macOS, the easiest way to install Terraform is with your package manager:
# On macOS with Homebrew
brew install terraform

# On Debian/Ubuntu (HashiCorp's official apt repository)
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
On Windows, you can use Chocolatey, Scoop, or download the binary directly from terraform.io. Verify the installation:
terraform version
Let's create something simple—a local file. This demonstrates Terraform without requiring cloud credentials:
# main.tf
resource "local_file" "hello" {
  filename = "hello.txt"
  content  = "Hello from Terraform!"
}
Now let's walk through the workflow:
# Initialize Terraform (downloads providers)
terraform init
# See what will be created
terraform plan
# Apply the configuration
terraform apply
# Verify the file was created
cat hello.txt
Congratulations—you've just created infrastructure with code! That local_file resource is now managed by Terraform. If you change the content and run terraform apply again, Terraform will update the file. If you run terraform destroy, it will delete the file.
After running terraform init, you'll notice some new files and directories:
.terraform/ - contains downloaded providers and modules
.terraform.lock.hcl - locks provider versions

The .terraform.lock.hcl file is important: commit it to version control to ensure everyone on your team uses the same provider versions.
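A common companion convention (not required by Terraform itself) is a .gitignore that keeps generated and potentially sensitive files out of version control while the lock file stays committed:

```text
# .gitignore
.terraform/
*.tfstate
*.tfstate.backup
# .tfvars files are often ignored too, since they may hold secrets
*.tfvars
```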
HashiCorp Configuration Language (HCL) is Terraform's domain-specific language. It's designed to be both human-readable and machine-parseable. If you've worked with JSON, you'll find HCL familiar—but much more pleasant to write.
The fundamental building blocks of HCL are blocks and arguments:
# This is a block
resource "aws_instance" "web" {
  # These are arguments
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}
Blocks have a type (resource), zero or more labels ("aws_instance" and "web"), and a body containing arguments. Arguments are key-value pairs that configure the resource.
Identifiers (names of resources, variables, etc.) can contain letters, digits, underscores, and hyphens. The first character must not be a digit:
# Valid identifiers
my_resource
aws_instance
web_server_01
my-resource # Valid, though underscores are the usual convention

# Invalid
123resource # Can't start with a digit
Strings are wrapped in double quotes. HCL supports escape sequences:
content = "Hello\nWorld" # Newline
path = "C:\\Users\\name" # Windows path
HCL supports several data types:
name = "web-server"
count = 5
timeout = 300
enable_monitoring = true
# Implicit list
ports = [80, 443, 8080]
# Or converted to an explicit list type (rarely needed)
ports = tolist([80, 443, 8080])
tags = {
  Name        = "web-server"
  Environment = "production"
  Owner       = "team-infra"
}
# Sets are like lists but with no duplicates and no order
availability_zones = toset(["us-east-1a", "us-east-1b"])
One of HCL's most powerful features is the ability to reference values from other resources:
resource "aws_security_group" "web" {
  name = "web-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Reference the security group
  vpc_security_group_ids = [aws_security_group.web.id]
}
The syntax resource_type.resource_name.attribute lets you access resource attributes. In this case, we're grabbing the ID of the security group we created.
HCL supports a rich expression syntax including arithmetic, comparisons, and built-in functions:
# String interpolation
name = "server-${var.environment}"
# Arithmetic
total = var.instance_count * 2
# Conditionals
instance_type = var.is_production ? "t3.large" : "t3.micro"
# Built-in functions
upper_name = upper(var.name)
joined_ids = join(",", aws_instance.web[*].id)
Terraform includes hundreds of functions for string manipulation, arithmetic, IP calculations, and more. Check the docs for the full list.
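You can experiment with expressions and functions interactively using terraform console:

```text
$ terraform console
> upper("web")
"WEB"
> join(",", ["a", "b", "c"])
"a,b,c"
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
```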
HCL supports three comment styles:
# Single-line comment (hash style)
// Single-line comment (C++ style)
/* Multi-line
comment */
Providers are plugins that Terraform uses to interact with cloud platforms, SaaS providers, and other services. Each provider exposes resources that you can manage with Terraform.
When you run terraform init, Terraform downloads the providers specified in your configuration. These are typically distributed as plugins—binary executables that Terraform calls out to when managing resources.
Providers are configured in a provider block:
provider "aws" {
  region = "us-east-1"

  # You can also configure credentials here,
  # but prefer environment variables or shared config
}
Common ways to provide credentials (in order of precedence):
1. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
2. Shared credentials file (~/.aws/credentials)
3. Shared configuration file (~/.aws/config)

provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-project"
  region  = "us-central1"
}

provider "azurerm" {
  features {}
}
Resources are the most important element in Terraform. Each resource block describes one or more infrastructure objects:
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "web-server"
  }
}
The resource type (aws_instance) tells Terraform which provider to use. The first label (web) is a local name you use to reference this resource within your configuration. The arguments depend on the resource type.
Terraform manages a lifecycle for each resource: create it when it first appears in the configuration, update it in place when possible, destroy and recreate it when a change can't be applied in place, and destroy it when it's removed from the configuration.
Many resources support in-place updates. For example, changing tags on an AWS instance:
resource "aws_instance" "web" {
  # Changing this will update in place
  tags = {
    Name = "updated-name"
  }
}
Some changes require destroying and recreating the resource. For example, changing the AMI of an EC2 instance:
resource "aws_instance" "web" {
  # Changing the AMI forces a replacement
  ami = "ami-0123456789abcdef0"
}
Terraform will warn you when this happens in the plan phase.
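In the plan output, a forced replacement is marked with -/+ and the offending argument is annotated. Abbreviated, it looks roughly like this:

```text
# aws_instance.web must be replaced
-/+ resource "aws_instance" "web" {
      ~ ami = "ami-0c55b159cbfafe1f0" -> "ami-0123456789abcdef0" # forces replacement
        ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```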
Data sources let you fetch information from existing infrastructure—resources that weren't created by Terraform:
# Get information about an existing AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Use it in a resource
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}
Data sources are read-only—they don't create or modify anything.
Variables and outputs make your Terraform configurations reusable and composable. They allow you to parameterize your infrastructure and expose important values for use elsewhere.
Input variables let you customize Terraform configurations without modifying the code. Think of them like function parameters.
# variables.tf
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "Environment name"
  type        = string
}
# String
variable "name" {
  type    = string
  default = "web-server"
}

# Number ("count" is a reserved variable name, so use something else)
variable "instance_count" {
  type    = number
  default = 3
}

# Boolean
variable "enable_monitoring" {
  type    = bool
  default = true
}

# List
variable "ports" {
  type    = list(number)
  default = [80, 443]
}

# Map
variable "tags" {
  type = map(string)
  default = {
    Environment = "production"
    Team        = "infra"
  }
}

# Object
variable "server_config" {
  type = object({
    cpu    = number
    memory = number
    disk   = number
  })
  default = {
    cpu    = 2
    memory = 4096
    disk   = 50
  }
}

# Set
variable "availability_zones" {
  type    = set(string)
  default = ["us-east-1a", "us-east-1b"]
}
resource "aws_instance" "web" {
  # Reference a variable with var.<name>
  instance_type = var.instance_type
  tags          = var.tags
}
# Via command line
terraform apply -var="environment=prod" -var="instance_type=t3.large"
# Via file
terraform apply -var-file="prod.tfvars"
# Via environment variables
export TF_VAR_environment=prod
# dev.tfvars
environment = "development"
instance_type = "t3.micro"
# prod.tfvars
environment = "production"
instance_type = "t3.large"
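Variables can also enforce constraints with a validation block (Terraform 0.13+), failing fast with a clear message instead of a confusing downstream provider error:

```hcl
variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["development", "staging", "production"], var.environment)
    error_message = "environment must be development, staging, or production."
  }
}
```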
Output values expose information about your infrastructure to the CLI, or for use by other Terraform configurations:
# outputs.tf
output "instance_id" {
  description = "The ID of the EC2 instance"
  value       = aws_instance.web.id
}

output "instance_public_ip" {
  description = "Public IP address"
  value       = aws_instance.web.public_ip
  sensitive   = true # Value is hidden in CLI output
}

output "all_instance_ids" {
  description = "IDs of all instances (when using count)"
  value       = aws_instance.web[*].id
}
After running terraform apply, Terraform displays outputs:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
instance_id = "i-0abc123def456789"
instance_public_ip = <sensitive>
You can also retrieve outputs later:
terraform output
terraform output instance_id
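For use in scripts, terraform output has machine-friendly flags:

```text
# Print a single output without quotes (useful in shell scripts)
terraform output -raw instance_id

# Print all outputs as JSON
terraform output -json
```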
Local values let you define intermediate values that are calculated once per configuration:
locals {
  # Common tags applied to all resources
  common_tags = {
    Project     = "my-app"
    Environment = var.environment
    ManagedBy   = "terraform"
  }

  # Derived values
  name_prefix = "${var.environment}-${var.project}"
}

resource "aws_instance" "web" {
  # Merge the common tags with a resource-specific Name
  tags = merge(local.common_tags, { Name = local.name_prefix })
}
Terraform uses state to map your configuration to real-world resources. Understanding state is crucial for working effectively with Terraform, especially in team environments.
When Terraform reads your configuration, it doesn't know what resources currently exist. Instead, it looks at its state file—a JSON document that maps resource addresses to their real-world identifiers and attributes. A simplified excerpt:
{
  "version": 4,
  "terraform_version": "1.5.0",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abc123def456789",
            "instance_type": "t2.micro",
            "public_ip": "54.123.45.67",
            ...
          }
        }
      ]
    }
  ]
}
Without state, Terraform wouldn't know that aws_instance.web corresponds to the EC2 instance i-0abc123def456789.
By default, Terraform stores state in a local file called terraform.tfstate:
# Default behavior - local state
terraform {
  # This is actually the default
  backend "local" {
    path = "terraform.tfstate"
  }
}
Local state works for individual development but has problems: the file can't easily be shared with teammates, there is no locking to prevent concurrent runs, and losing the file (or the laptop it lives on) means losing Terraform's record of your infrastructure.
Remote backends store state in a shared location accessible by your team:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
The DynamoDB table enables state locking—preventing concurrent modifications:
# Create DynamoDB table for locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
terraform {
  backend "gcs" {
    bucket = "my-terraform-state"
    prefix = "prod/state"
  }
}

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-rg"
    storage_account_name = "terraformstate"
    container_name       = "terraform-state"
    key                  = "prod.terraform.tfstate"
  }
}

terraform {
  backend "remote" {
    organization = "my-org"

    workspaces {
      name = "prod"
    }
  }
}
When using remote backends with locking, Terraform acquires a lock before performing any operation:
$ terraform apply
Acquiring state lock... (ID: abc123)
Acquired state lock.
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Releasing state lock...
Released state lock.
If another user or process tries to run Terraform while it's locked, they'll get an error:
Error: Error acquiring the state lock

Lock Info:
  ID:        abc123
  Path:      my-terraform-state/prod/terraform.tfstate
  Operation: OperationTypeApply
  Who:       another-user@hostname
  Version:   1.5.0
  Created:   2024-01-15 10:31:00.0000000Z
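If a lock is left behind by a crashed process, it can be cleared manually with force-unlock, using the lock ID from the error message (only after confirming no other run is actually in progress):

```text
terraform force-unlock abc123
```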
Sometimes you need to manipulate state directly (with caution):
# List resources in state
terraform state list
# Show a specific resource
terraform state show aws_instance.web
# Move a resource to a new name
terraform state mv aws_instance.web aws_instance.app
# Remove a resource from state (doesn't destroy the actual resource)
terraform state rm aws_instance.old
# Pull current state (for debugging)
terraform state pull
Warning: Direct state manipulation is dangerous. Always make a backup first:
cp terraform.tfstate terraform.tfstate.backup
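With a remote backend there is no local file to copy, but you can snapshot the current state the same way before making changes:

```text
terraform state pull > terraform.tfstate.backup
```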
Modules are containers for multiple resources that are used together. They're the key to creating reusable, composable infrastructure code.
Without modules, your Terraform configuration grows unwieldy. With modules, you can reuse configurations across projects, encapsulate complexity behind a small set of inputs and outputs, organize code by responsibility, and version infrastructure components independently.
A module is simply a collection of .tf files in a directory:
# modules/ec2/main.tf
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ami" {
  description = "AMI ID"
  type        = string
}

variable "tags" {
  description = "Tags to apply"
  type        = map(string)
  default     = {}
}

resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type
  tags          = var.tags
}

output "instance_id" {
  description = "EC2 instance ID"
  value       = aws_instance.this.id
}

output "public_ip" {
  description = "Public IP address"
  value       = aws_instance.this.public_ip
}
Call a module using the module block:
# main.tf
module "web_server" {
  source = "./modules/ec2"

  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "web-server"
  }
}

# Reference module outputs
resource "aws_security_group" "web" {
  # Use module.web_server.public_ip
}
Modules can be sourced from various locations:
# Local path
module "local" {
  source = "./modules/ec2"
}

# Terraform Registry (public)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.0.0"
}

# GitHub
module "example" {
  source = "github.com/user/repo//path/to/module"
}

# GitLab (no shorthand; use the generic Git syntax)
module "example" {
  source = "git::https://gitlab.com/user/repo.git//path/to/module"
}

# Bitbucket
module "example" {
  source = "bitbucket.org/user/repo//path/to/module"
}

# Generic Git repository
module "example" {
  source = "git::https://example.com/repo.git"
}

# S3 bucket (for private modules)
module "example" {
  source = "s3::https://s3.amazonaws.com/bucket/module.zip"
}
For modules from external sources, specify versions:
# From Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0" # Any 3.x version
}

# From Git
module "example" {
  source = "github.com/user/repo//module?ref=v1.2.3"
}
Modules can contain other modules:
# modules/networking/vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"
}

# modules/networking/subnets.tf
# (hypothetical module and arguments, shown to illustrate composition)
module "subnets" {
  source  = "terraform-aws-modules/subnets/aws"
  version = "~> 3.0"

  depends_on = [module.vpc]

  vpc_cidr_block          = module.vpc.vpc_cidr_block
  availability_zones      = ["us-east-1a", "us-east-1b"]
  map_public_ip_on_launch = true
}
Build higher-level modules from lower-level ones:
# modules/application/main.tf
module "networking" {
  source = "../networking"
}

module "compute" {
  source = "../compute"
  vpc_id = module.networking.vpc_id
}

module "database" {
  source     = "../database"
  subnet_ids = module.networking.database_subnet_ids
}
Workspaces let you manage multiple environments (dev, staging, production) with a single Terraform configuration. They provide isolation without code duplication.
Each workspace has its own state file. When you switch workspaces, Terraform uses a different state:
# Default workspace is "default"
$ terraform workspace list
* default
# Create a new workspace
$ terraform workspace new staging
Created and switched to workspace "staging"!
# List workspaces
$ terraform workspace list
default
* staging
production
# Switch workspace
$ terraform workspace select production
Switched to workspace "production".
Use the terraform.workspace variable to customize behavior per workspace:
variable "environment" {
  description = "Environment name"
  type        = string
}

locals {
  # Map the default workspace to "development"
  environment = terraform.workspace == "default" ? "development" : terraform.workspace

  # Environment-specific configuration
  config = {
    development = {
      instance_type = "t3.micro"
      desired_size  = 1
    }
    staging = {
      instance_type = "t3.small"
      desired_size  = 2
    }
    production = {
      instance_type = "t3.large"
      desired_size  = 5
    }
  }

  # Look up config for current environment
  env_config = local.config[local.environment]
}

resource "aws_instance" "web" {
  instance_type = local.env_config.instance_type
  # ...
}
Use workspaces for environment isolation within the same configuration:
# Same code, different workspaces
# Dev:   terraform workspace new dev
# Stage: terraform workspace new staging
# Prod:  terraform workspace new production

resource "aws_instance" "server" {
  # This instance exists in each workspace's state
  tags = {
    Environment = terraform.workspace
  }
}
Use separate configurations (or modules) for complete infrastructure separation:
# Separate directories for complete isolation
# (each directory has its own state)
infrastructure/
├── dev/
│   └── main.tf
└── prod/
    └── main.tf
Be aware of these common pitfalls: all workspaces share the same backend and credentials, so they don't provide hard isolation between environments; it's easy to forget which workspace is selected and apply changes to the wrong one; and heavy use of terraform.workspace conditionals makes configurations hard to read. Many teams prefer separate directories for production.
While Terraform is great for creating infrastructure, provisioners let you customize instances after creation—installing software, configuring systems, and bootstrapping applications.
Runs commands on the machine running Terraform:
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo 'Instance ${self.public_ip} created!' > created.txt"
  }
}
Runs commands on the created resource via SSH or WinRM:
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "sudo systemctl enable nginx",
      "sudo systemctl start nginx"
    ]
  }
}
Upload files or directories to the created resource:
resource "aws_instance" "web" {
  # ... connection config ...

  provisioner "file" {
    content     = file("templates/nginx.conf")
    destination = "/tmp/nginx.conf"
  }

  provisioner "file" {
    source      = "scripts/"
    destination = "/home/ubuntu/scripts/"
  }
}
By default, provisioners run after resource creation. Use when to change this:
# Run on destroy
provisioner "local-exec" {
  when    = destroy
  command = "echo 'Cleaning up...'"
}
Use on_failure to control behavior when provisioners fail:
provisioner "remote-exec" {
  on_failure = continue # Don't fail the whole apply
  inline     = ["some command"]
}
For cloud instances, cloud-init is often better than Terraform provisioners:
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl enable nginx
    systemctl start nginx
  EOF
}
Modern AWS accounts also support EC2 Instance Connect and SSM Session Manager for SSH access without long-lived keys. Whichever method you use, require IMDSv2 on the instance metadata service:
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Require IMDSv2 (token-based) metadata access
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
  }
}
Consider these alternatives for production use: cloud-init via user_data for first-boot configuration, prebaked machine images built with a tool like Packer, or a configuration management tool such as Ansible run after provisioning. HashiCorp's own guidance treats provisioners as a last resort.
Infrastructure as Code needs the same testing rigor as application code. Additionally, Terraform configurations often contain sensitive data that requires careful handling.
# Validates syntax and internal consistency
terraform validate

# Security scanning with tfsec
tfsec .
# Install: brew install tfsec

# Example output (abbreviated):
#
# Result #1 HIGH Security group allows SSH from the internet
#
# 1 tests, 0 passed, 1 high, 0 medium, 0 low
#
# terraform/aws/main.tf:6

# Another security scanner
checkov -d .
# Install: pip install checkov
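Formatting and validation also make cheap first-line checks, locally or in CI:

```text
# Fail if any file is not in canonical format
terraform fmt -check -recursive

# Catch syntax and type errors without contacting any cloud API
terraform validate
```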
Terratest is a Go library for testing Terraform:
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraformWebServer(t *testing.T) {
    terraformOptions := &terraform.Options{
        TerraformDir: "../examples/web-server",
        Vars: map[string]interface{}{
            "environment": "test",
        },
    }

    // Clean up resources after the test
    defer terraform.Destroy(t, terraformOptions)

    terraform.InitAndApply(t, terraformOptions)

    instanceID := terraform.Output(t, terraformOptions, "instance_id")
    assert.NotEmpty(t, instanceID)
}
Mark variables as sensitive to prevent exposure:
variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true # Won't show in CLI output
}

variable "api_key" {
  description = "Third-party API key"
  type        = string
  sensitive   = true
}
Note: Sensitive variables still appear in the state file, so apply appropriate access controls.
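One way to keep secrets out of variables entirely is to read them at plan time from a secrets manager. A sketch using the AWS provider's Secrets Manager data source (the secret name is hypothetical):

```hcl
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db-password" # hypothetical secret name
}

resource "aws_db_instance" "main" {
  # ... other required arguments ...
  password = data.aws_secretsmanager_secret_version.db.secret_string
}
```

The retrieved value is still written to the state file, so protecting state access remains essential.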
Terraform state can contain sensitive data, so protect it accordingly: store it in an encrypted remote backend, restrict read access to the people and pipelines that need it, and never commit state files to version control.
For enterprise use, Terraform Cloud provides remote state storage, remote plan and apply runs, a private module registry, team-based access controls, and Sentinel policy enforcement.
Running Terraform in production requires careful consideration of workflows, collaboration, and automation. This lesson covers patterns for teams.
name: Terraform

on:
  push:
    branches: [main]
  pull_request:

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.0

      - name: Terraform Init
        run: terraform init
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Terraform Plan
        run: terraform plan
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Atlantis is a self-hosted tool for Terraform CI/CD:
# atlantis.yaml
version: 3
projects:
  - name: production
    dir: .
    workflow: production
    terraform_version: v1.5.0

workflows:
  production:
    plan:
      steps:
        - init
        - plan
    apply:
      steps:
        - apply
Terraform Cloud is HashiCorp's hosted offering (Terraform Enterprise is its self-hosted counterpart). It provides remote plan and apply runs triggered from version control, shared state, and policy enforcement.
Recommended repository layout:
infrastructure/
├── .gitignore
├── .terraform-version
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
├── environments/
│ ├── dev/
│ │ ├── main.tf
│ │ └── terraform.tfvars
│ ├── staging/
│ │ ├── main.tf
│ │ └── terraform.tfvars
│ └── prod/
│ ├── main.tf
│ └── terraform.tfvars
├── modules/
│ ├── networking/
│ ├── compute/
│ └── database/
└── Makefile
Congratulations on completing this guide! You've learned the core Terraform workflow, HCL syntax, providers and resources, state management, modules, workspaces, provisioners, testing and security practices, and production CI/CD patterns.
Continue your journey with the official Terraform documentation, community modules on the Terraform Registry, and hands-on practice in a sandbox cloud account.