// Amazon Web Services powers the internet.
AWS IS THE LARGEST CLOUD PROVIDER.
From startups to Fortune 500 companies, AWS provides the infrastructure that powers the modern internet. With over 200 services, AWS offers solutions for compute, storage, databases, machine learning, and more.
WHY AWS?
AWS pioneered cloud computing and continues to lead in market share, service offerings, and innovation. Learning AWS opens doors to some of the highest-paying tech jobs and enables you to build applications that scale to billions of users.
BECOME CLOUD NATIVE
Master the core AWS services. Understand how to provision infrastructure, deploy applications, and manage costs. Learn security best practices and how to design for reliability. AWS is the foundation of modern cloud architecture.
12 lessons. Complete AWS control.
What is cloud computing? AWS overview and setting up your account.
Beginner: Virtual servers, instances, AMIs, and instance types.
Beginner: Buckets, objects, versioning, and lifecycle policies.
Beginner: Networking, subnets, security groups, and NAT gateways.
Intermediate: Managed databases, Multi-AZ, read replicas, and backup.
Intermediate: Functions, triggers, layers, and serverless architecture.
Intermediate: Users, roles, policies, and security best practices.
Intermediate: DNS, domains, CDN, and global content delivery.
Advanced: Load balancing, auto scaling groups, and high availability.
Advanced: Infrastructure as code, templates, and automated provisioning.
Advanced: CloudWatch, GuardDuty, Security Hub, and compliance.
Advanced: Well-architected framework, cost optimization, and best practices.
Cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and intelligence—over the Internet ("the cloud"). Instead of owning and maintaining physical data centers, you can access technology services on-demand from a cloud provider.
AWS operates geographic regions around the world; each region is a completely separate geographic area.
Each region has multiple Availability Zones (AZs)—physically separated data centers connected with low-latency networking:
us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e, us-east-1f
Over 400 edge locations worldwide for content delivery (CloudFront) and DNS (Route 53).
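Both layers can be inspected from the CLI once credentials are configured; a quick sketch:

```shell
# List the regions enabled for your account
aws ec2 describe-regions --query "Regions[].RegionName" --output text

# List the Availability Zones in your configured default region
aws ec2 describe-availability-zones \
--query "AvailabilityZones[].ZoneName" --output text
```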
# 1. Go to aws.amazon.com
# 2. Click "Create an AWS Account"
# 3. Enter email, password, account name
# 4. Enter payment method (required even for free tier)
# 5. Verify phone number
# 6. Choose Support Plan (Basic is free)
# Enable MFA for root account
# 1. Go to IAM Dashboard
# 2. Activate MFA on root account
# 3. Use virtual MFA device (Google Authenticator, Authy)
AWS offers a free tier for new customers: for the first 12 months this includes, for example, 750 hours per month of t2.micro/t3.micro EC2 usage and 5 GB of S3 Standard storage, plus always-free allowances such as 1 million Lambda requests per month.
The web-based interface for managing AWS resources:
# Console URL
https://console.aws.amazon.com
# Services used in this guide:
# - EC2: Virtual servers
# - S3: Object storage
# - RDS: Managed databases
# - Lambda: Serverless functions
# - VPC: Virtual network
# - IAM: Identity management
# - CloudFormation: Infrastructure as code
# - CloudWatch: Monitoring
# Install AWS CLI (Linux/macOS)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Configure with credentials
aws configure
# AWS Access Key ID: [Your Access Key]
# AWS Secret Access Key: [Your Secret]
# Default region name: us-east-1
# Default output format: json
# Verify installation
aws --version
# aws-cli/2.x.x
EC2 provides resizable compute capacity in the cloud. It's the foundational service for running applications on AWS.
# Launch an instance (via console or CLI)
aws ec2 run-instances \
--image-id ami-0c55b159cbfafe1f0 \
--instance-type t3.micro \
--key-name my-key-pair \
--security-group-ids sg-0123456789abcdef0 \
--subnet-id subnet-0123456789abcdef0
# Common AMIs (IDs are region-specific and change over time; look up current IDs in the console or with describe-images)
# Amazon Linux 2 (recommended for AWS)
ami-0c55b159cbfafe1f0
# Ubuntu
ami-0d7552e33cf1a4a9c
# Red Hat Enterprise Linux
ami-0b0af00a1d2e74e52
# Windows Server 2019
ami-0ab4d6e5b52a48d4c
# Create custom AMI from instance
aws ec2 create-image \
--instance-id i-1234567890abcdef0 \
--name "my-custom-ami" \
--description "My custom application AMI"
# Create key pair and save the private key
aws ec2 create-key-pair \
--key-name my-key-pair \
--query 'KeyMaterial' \
--output text > my-key-pair.pem
# Or import existing public key
aws ec2 import-key-pair \
--key-name my-existing-key \
--public-key-material fileb://~/.ssh/id_rsa.pub
# .pem file permissions (Linux/Mac)
chmod 400 my-key-pair.pem
# Create security group
aws ec2 create-security-group \
--group-name web-server-sg \
--description "Security group for web servers" \
--vpc-id vpc-0123456789abcdef0
# Add rules
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
# (for production, restrict SSH to your own IP range instead of 0.0.0.0/0)
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 443 \
--cidr 0.0.0.0/0
Instance store volumes are local disks physically attached to the host; they are ephemeral (data is lost when the instance stops or terminates). EBS volumes persist independently of the instance:
# Create EBS volume
aws ec2 create-volume \
--size 20 \
--volume-type gp3 \
--availability-zone us-east-1a
# Attach to instance
aws ec2 attach-volume \
--volume-id vol-0123456789abcdef0 \
--instance-id i-1234567890abcdef0 \
--device /dev/sdf
# EBS types
# gp3: General Purpose SSD (default)
# gp2: General Purpose SSD (older)
# io1/io2: Provisioned IOPS SSD (high performance)
# st1: Throughput Optimized HDD (big data)
# sc1: Cold HDD (archive)
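With gp3 (unlike gp2), IOPS and throughput can be provisioned independently of volume size; a sketch using the real `create-volume` options:

```shell
# gp3 volume with extra provisioned IOPS and throughput
aws ec2 create-volume \
--size 100 \
--volume-type gp3 \
--iops 6000 \
--throughput 250 \
--availability-zone us-east-1a
```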
# Start instance
aws ec2 start-instances --instance-ids i-1234567890abcdef0
# Stop instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
# Reboot instance
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0
# Terminate instance
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
# Note: by default the root EBS volume is deleted on termination; set DeleteOnTermination=false in the block device mapping to preserve it
S3 provides object storage built to store and retrieve any amount of data from anywhere on the web.
# Create bucket (must be globally unique)
aws s3 mb s3://my-unique-bucket-name
# List buckets
aws s3 ls
# Upload file
aws s3 cp myfile.txt s3://my-bucket/
# Download file
aws s3 cp s3://my-bucket/myfile.txt ./
# Sync directory
aws s3 sync ./my-folder s3://my-bucket/
| Class | Use Case | Cost |
|---|---|---|
| Standard | Frequent access | $$$ |
| IA | Infrequent access | $$ |
| Glacier | Archive/backup | $ |
| Intelligent | Unknown access patterns | Variable |
# Upload with specific storage class
aws s3 cp myfile.txt s3://my-bucket/ \
--storage-class STANDARD_IA
# Storage class options:
# STANDARD - Default
# STANDARD_IA - Infrequent Access
# INTELLIGENT_TIERING - Auto-optimize
# GLACIER - Archive
# GLACIER_DEEP_ARCHIVE - Long-term archive
# REDUCED_REDUNDANCY - Non-critical (deprecated)
# Enable versioning
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# List object versions
aws s3api list-object-versions \
--bucket my-bucket
# Delete (creates delete marker)
aws s3 rm s3://my-bucket/myfile.txt
# Permanently delete specific version
aws s3api delete-object \
--bucket my-bucket \
--key myfile.txt \
--version-id abc123
# Lifecycle configuration (lifecycle.json)
{
  "Rules": [
    {
      "ID": "Move to Glacier after 90 days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Transitions": [
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 365,
          "StorageClass": "GLACIER_DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
# Apply lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://lifecycle.json
# Block public access (recommended)
aws s3api put-public-access-block \
--bucket my-bucket \
--public-access-block-configuration \
"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
# Bucket policy granting public read (e.g. for static website hosting; incompatible with the public-access block above)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
# Enable static website hosting
aws s3 website s3://my-bucket \
--index-document index.html \
--error-document error.html
# Upload website files
aws s3 sync ./public s3://my-bucket/
# Website URL format:
# http://my-bucket.s3-website-us-east-1.amazonaws.com
VPC lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network.
# Create VPC with CIDR block
aws ec2 create-vpc \
--cidr-block 10.0.0.0/16
# Enable DNS hostnames
aws ec2 modify-vpc-attribute \
--vpc-id vpc-0123456789abcdef0 \
--enable-dns-hostnames "Value=true"
# Enable DNS support
aws ec2 modify-vpc-attribute \
--vpc-id vpc-0123456789abcdef0 \
--enable-dns-support "Value=true"
# Create public subnet
aws ec2 create-subnet \
--vpc-id vpc-0123456789abcdef0 \
--cidr-block 10.0.1.0/24 \
--availability-zone us-east-1a
# Create private subnet
aws ec2 create-subnet \
--vpc-id vpc-0123456789abcdef0 \
--cidr-block 10.0.2.0/24 \
--availability-zone us-east-1a
# Enable auto-assign public IP for public subnet
aws ec2 modify-subnet-attribute \
--subnet-id subnet-0123456789abcdef0 \
--map-public-ip-on-launch
# Create IGW
aws ec2 create-internet-gateway
# Attach to VPC
aws ec2 attach-internet-gateway \
--internet-gateway-id igw-0123456789abcdef0 \
--vpc-id vpc-0123456789abcdef0
# Create route table for public subnet
aws ec2 create-route-table \
--vpc-id vpc-0123456789abcdef0
# Add route to internet
aws ec2 create-route \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-0123456789abcdef0
# Associate route table with subnet
aws ec2 associate-route-table \
--route-table-id rtb-0123456789abcdef0 \
--subnet-id subnet-0123456789abcdef0
# Create Elastic IP
aws ec2 allocate-address
# Create NAT Gateway (in public subnet)
aws ec2 create-nat-gateway \
--subnet-id subnet-0123456789abcdef0 \
--allocation-id eipalloc-0123456789abcdef0
# Add route in private subnet route table
aws ec2 create-route \
--route-table-id rtb-private-0123456789abcdef0 \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id nat-0123456789abcdef0
# Security group example
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 443 \
--cidr 10.0.0.0/16
# NACL example
aws ec2 create-network-acl-entry \
--network-acl-id acl-0123456789abcdef0 \
--ingress \
--rule-number 100 \
--protocol tcp \
--rule-action allow \
--cidr-block 10.0.0.0/16 \
--port-range From=443,To=443
# Create VPC peering connection
aws ec2 create-vpc-peering-connection \
--vpc-id vpc-a \
--peer-vpc-id vpc-b
# Accept VPC peering connection
aws ec2 accept-vpc-peering-connection \
--vpc-peering-connection-id pcx-0123456789abcdef0
# Add routes in both VPCs
# Route in VPC A to reach VPC B: 10.1.0.0/16 via pcx-0123456789abcdef0
# Route in VPC B to reach VPC A: 10.0.0.0/16 via pcx-0123456789abcdef0
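The two routes described in the comments above translate into `create-route` calls with the peering connection as the target; a sketch, assuming the route-table IDs shown are placeholders:

```shell
# In VPC A's route table: send VPC B's CIDR through the peering connection
aws ec2 create-route \
--route-table-id rtb-vpc-a \
--destination-cidr-block 10.1.0.0/16 \
--vpc-peering-connection-id pcx-0123456789abcdef0

# In VPC B's route table: send VPC A's CIDR through the same connection
aws ec2 create-route \
--route-table-id rtb-vpc-b \
--destination-cidr-block 10.0.0.0/16 \
--vpc-peering-connection-id pcx-0123456789abcdef0
```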
RDS makes it easy to set up, operate, and scale a relational database in the cloud.
# Create DB subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name my-subnet-group \
--subnet-ids subnet-1 subnet-2 \
--description "Subnet group for RDS"
# Create security group for RDS
aws ec2 create-security-group \
--group-name rds-sg \
--description "Security group for RDS" \
--vpc-id vpc-0123456789abcdef0
# Allow MySQL/Aurora port 3306
aws ec2 authorize-security-group-ingress \
--group-id sg-rds-0123456789abcdef0 \
--protocol tcp \
--port 3306 \
--cidr 10.0.0.0/16
# Create RDS instance (MySQL)
aws rds create-db-instance \
--db-instance-identifier mydatabase \
--db-instance-class db.t3.micro \
--engine mysql \
--engine-version 8.0 \
--allocated-storage 20 \
--storage-type gp3 \
--master-username admin \
--master-user-password mysecurepassword \
--db-subnet-group-name my-subnet-group \
--vpc-security-group-ids sg-rds-0123456789abcdef0 \
--no-publicly-accessible \
--backup-retention-period 7
# Create Multi-AZ instance (console or CLI)
aws rds create-db-instance \
--db-instance-identifier mydatabase \
--db-instance-class db.t3.micro \
--engine postgres \
--engine-version 14 \
--allocated-storage 20 \
--master-username admin \
--master-user-password mysecurepassword \
--db-subnet-group-name my-subnet-group \
--vpc-security-group-ids sg-rds-0123456789abcdef0 \
--multi-az
# Benefits:
# - Automatic failover to standby
# - Synchronous replication
# - Automated backups from standby
# - Zero data loss (typically)
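Failover can be exercised deliberately to verify that applications reconnect cleanly (assumes the Multi-AZ instance above; this briefly interrupts connections):

```shell
# Force a failover to the standby instance
aws rds reboot-db-instance \
--db-instance-identifier mydatabase \
--force-failover
```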
# Create read replica
aws rds create-db-instance-read-replica \
--db-instance-identifier mydatabase-replica \
--source-db-instance-identifier mydatabase \
--db-instance-class db.t3.micro \
--availability-zone us-east-1a
# Cross-region read replica (run in the target region; the source must be specified by ARN)
aws rds create-db-instance-read-replica \
--db-instance-identifier mydatabase-eu \
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydatabase \
--db-instance-class db.t3.micro \
--region eu-west-1
# Use read replica for read operations
# Connection string: mydatabase-replica.xxxx.us-east-1.rds.amazonaws.com
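A common application-side pattern is to send writes to the primary endpoint and reads to the replica. A minimal sketch of the routing decision; the endpoint hostnames are illustrative placeholders, and real code would pass the chosen endpoint to its database driver:

```python
# Route read-only statements to the replica, everything else to the primary.
# Endpoint hostnames below are placeholders for your actual RDS endpoints.
PRIMARY_ENDPOINT = "mydatabase.xxxx.us-east-1.rds.amazonaws.com"
REPLICA_ENDPOINT = "mydatabase-replica.xxxx.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Pick an endpoint based on whether the statement is read-only."""
    is_read = sql.lstrip().upper().startswith(("SELECT", "SHOW", "EXPLAIN"))
    return REPLICA_ENDPOINT if is_read else PRIMARY_ENDPOINT

print(endpoint_for("SELECT * FROM users"))      # replica endpoint
print(endpoint_for("UPDATE users SET x = 1"))   # primary endpoint
```

Replication to the replica is asynchronous, so reads routed this way may be slightly stale.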
# Automated backups (enabled by default, 1-35 days)
aws rds modify-db-instance \
--db-instance-identifier mydatabase \
--backup-retention-period 30
# Create manual snapshot
aws rds create-db-snapshot \
--db-instance-identifier mydatabase \
--db-snapshot-identifier mydatabase-backup-001
# Restore from snapshot
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier mydatabase-restored \
--db-snapshot-identifier mydatabase-backup-001
# Copy snapshot to another region
aws rds copy-db-snapshot \
--source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mydatabase-backup-001 \
--target-db-snapshot-identifier mydatabase-backup-eu \
--source-region us-east-1
# Create custom parameter group
aws rds create-db-parameter-group \
--db-parameter-group-name mycustom-params \
--db-parameter-group-family mysql8.0 \
--description "Custom MySQL parameters"
# Modify parameters
aws rds modify-db-parameter-group \
--db-parameter-group-name mycustom-params \
--parameters "ParameterName=max_connections,ParameterValue=200,ApplyMethod=immediate"
# Apply to instance
aws rds modify-db-instance \
--db-instance-identifier mydatabase \
--db-parameter-group-name mycustom-params
Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
# Lambda function (Python)
def handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }
# Handler format: filename.handler_function
# event: dict with input data
# context: runtime information
# Create ZIP package
zip function.zip lambda_function.py
# Create Lambda function
aws lambda create-function \
--function-name my-function \
--runtime python3.9 \
--role arn:aws:iam::123456789012:role/lambda-role \
--handler lambda_function.handler \
--zip-file fileb://function.zip
# Invoke function (CLI v2 requires raw-in-base64-out for JSON payloads)
aws lambda invoke \
--function-name my-function \
--cli-binary-format raw-in-base64-out \
--payload '{"name": "World"}' \
response.json
# Update function code
aws lambda update-function-code \
--function-name my-function \
--zip-file fileb://function.zip
# S3 trigger Lambda (via console or add-permission)
aws lambda add-permission \
--function-name my-function \
--action lambda:InvokeFunction \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::my-bucket
# Create S3 event notification (bucket configuration)
aws s3api put-bucket-notification-configuration \
--bucket my-bucket \
--notification-configuration '{
"LambdaFunctionConfigurations": [{
"LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
"Events": ["s3:ObjectCreated:*"]
}]
}'
# Create layer ZIP (containing shared libraries)
# my-layer/python/lib/python3.9/site-packages/
aws lambda publish-layer-version \
--layer-name my-dependencies \
--zip-file fileb://layer.zip \
--compatible-runtimes python3.9
# Add layer to function
aws lambda update-function-configuration \
--function-name my-function \
--layers arn:aws:lambda:us-east-1:123456789012:layer:my-dependencies:1
# Set environment variables
aws lambda update-function-configuration \
--function-name my-function \
--environment "Variables={DATABASE_URL=postgres://...,API_KEY=xxx}"
# Access in code
import os
DATABASE_URL = os.environ['DATABASE_URL']
# CloudFront Lambda@Edge (viewer request)
def handler(event, context):
    request = event['Records'][0]['cf']['request']
    request['headers']['my-custom-header'] = [{'key': 'my-custom-header', 'value': 'test'}]
    return request
# Deploy for Lambda@Edge (must be created in us-east-1 and published)
aws lambda create-function \
--function-name edge-function \
--runtime python3.9 \
--publish
# (add --role, --handler, and --zip-file as in the earlier example)
# Then associate the published version ARN with the CloudFront distribution
# serverless.yml
service: my-serverless-app

provider:
  name: aws
  runtime: python3.9
  environment:
    TABLE_NAME: ${self:custom.tableName}

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /hello
          method: get
  processImage:
    handler: handler.process
    events:
      - s3:
          bucket: my-bucket
          event: s3:ObjectCreated:*

resources:
  Resources:
    MyBucket:
      Type: AWS::S3::Bucket
IAM enables you to manage access to AWS services and resources securely.
# Create user
aws iam create-user \
--user-name john
# Create group
aws iam create-group \
--group-name developers
# Add user to group
aws iam add-user-to-group \
--user-name john \
--group-name developers
# Create access keys
aws iam create-access-key \
--user-name john
# Managed policy (AWS provided)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "cloudwatch:Describe*",
        "cloudwatch:Get*"
      ],
      "Resource": "*"
    }
  ]
}
# Inline policy (user-specific)
aws iam put-user-policy \
--user-name john \
--policy-name my-policy \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "s3:*",
"Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
}]
}'
# Create role for EC2
aws iam create-role \
--role-name ec2-s3-role \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
# Attach policy to role
aws iam attach-role-policy \
--role-name ec2-s3-role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Instance profile (for EC2)
aws iam create-instance-profile \
--instance-profile-name ec2-s3-profile
aws iam add-role-to-instance-profile \
--instance-profile-name ec2-s3-profile \
--role-name ec2-s3-role
# Set account password policy
aws iam update-account-password-policy \
--minimum-password-length 16 \
--require-symbols \
--require-numbers \
--require-uppercase-letters \
--require-lowercase-letters \
--allow-users-to-change-password \
--max-password-age 90
# Enable MFA for user (requires two consecutive codes from the device)
aws iam enable-mfa-device \
--user-name john \
--serial-number arn:aws:iam::123456789012:mfa/john \
--authentication-code1 123456 \
--authentication-code2 654321
# Login with MFA (via API)
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/john \
--token-code 123456 \
--duration-seconds 43200
Route 53 provides DNS services, while CloudFront delivers content globally with low latency.
# Create hosted zone
aws route53 create-hosted-zone \
--name example.com \
--caller-reference "unique-$(date +%s)"
# Create record sets
aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "www.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z2FDTNDATAQYWG",
"DNSName": "d1234567890.cloudfront.net",
"EvaluateTargetHealth": false
}
}
}]
}'
# Simple routing (one record)
{
  "Name": "example.com",
  "Type": "A",
  "TTL": 300,
  "ResourceRecords": ["1.2.3.4"]
}

# Weighted routing
{
  "Name": "example.com",
  "Type": "A",
  "SetIdentifier": "primary",
  "TTL": 300,
  "ResourceRecords": ["1.2.3.4"],
  "Weight": 80
}
{
  "Name": "example.com",
  "Type": "A",
  "SetIdentifier": "secondary",
  "TTL": 300,
  "ResourceRecords": ["5.6.7.8"],
  "Weight": 20
}

# Latency-based routing
{
  "Name": "example.com",
  "Type": "A",
  "SetIdentifier": "us-east",
  "TTL": 300,
  "ResourceRecords": ["1.2.3.4"],
  "Region": "us-east-1"
}

# Failover routing
{
  "Name": "example.com",
  "Type": "A",
  "Failover": "PRIMARY",
  "TTL": 300,
  "ResourceRecords": ["1.2.3.4"],
  "HealthCheckId": "abc123"
}
# Create CloudFront distribution
aws cloudfront create-distribution \
--origin-domain-name my-bucket.s3.amazonaws.com \
--default-root-object index.html
# CloudFront functions (edge computing)
# viewer-request function
function handler(event) {
    var request = event.request;
    request.headers['x-forwarded-for'] = {value: request.clientIp};
    return request;
}
# Cache behavior settings (distribution config excerpt; mixes legacy
# forwarded-values fields with a managed cache policy ID for illustration)
{
  "QueryStringCacheKeys": [],
  "CookiesToForward": "none",
  "HeadersToCache": ["Origin"],
  "ViewerProtocolPolicy": "redirect-to-https",
  "AllowedMethods": ["GET", "HEAD"],
  "CachedMethods": ["GET", "HEAD"],
  "MinTTL": 0,
  "DefaultTTL": 86400,
  "MaxTTL": 31536000,
  "Compress": true,
  "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"
}
# Request certificate (ACM)
aws acm request-certificate \
--domain-name "*.example.com" \
--validation-method DNS
# Use with CloudFront
# 1. Request (or import) the certificate in ACM in us-east-1
# 2. Select the certificate in the CloudFront distribution settings
# 3. Set Viewer Protocol Policy to "Redirect HTTP to HTTPS"
Auto Scaling ensures you have the right number of EC2 instances. ELB distributes traffic across instances.
# Create security group for ALB
aws ec2 create-security-group \
--group-name alb-sg \
--description "ALB security group" \
--vpc-id vpc-0123456789abcdef0
# Allow HTTP/HTTPS
aws ec2 authorize-security-group-ingress \
--group-id sg-alb-0123456789abcdef0 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-id sg-alb-0123456789abcdef0 \
--protocol tcp \
--port 443 \
--cidr 0.0.0.0/0
# Create target group
aws elbv2 create-target-group \
--name my-targets \
--protocol HTTP \
--port 80 \
--vpc-id vpc-0123456789abcdef0
# Register targets
aws elbv2 register-targets \
--target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/abc123 \
--targets Id=i-1234567890abcdef0 Id=i-234567890abcdef01
# Create ALB
aws elbv2 create-load-balancer \
--name my-alb \
--scheme internet-facing \
--security-groups sg-alb-0123456789abcdef0 \
--subnets subnet-1 subnet-2
# Create listener
aws elbv2 create-listener \
--load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123 \
--protocol HTTP \
--port 80 \
--default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/abc123
# Create launch template
aws ec2 create-launch-template \
--launch-template-name my-template \
--version-description "initial version" \
--launch-template-data '{
"ImageId": "ami-0c55b159cbfafe1f0",
"InstanceType": "t3.micro",
"KeyName": "my-key",
"SecurityGroupIds": ["sg-0123456789abcdef0"],
"UserData": "IyEvYmluL2Jhc2gKY3VybCAtbCBodHRwczovL3d3dy5leGFtcGxlLmNvbS90ZXN0Lmh0bWw="
}'
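The `UserData` value in the launch-template JSON is base64-encoded. On Linux it can be produced like this (the bootstrap script is illustrative; macOS `base64` uses `-b 0` instead of `-w0`):

```shell
# Write a bootstrap script and base64-encode it for the launch template
cat > user-data.sh <<'EOF'
#!/bin/bash
yum install -y nginx
systemctl start nginx
EOF

base64 -w0 user-data.sh
```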
# Create ASG
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name my-asg \
--launch-template "LaunchTemplateName=my-template,Version=1" \
--min-size 2 \
--max-size 10 \
--desired-capacity 2 \
--vpc-zone-identifier "subnet-1,subnet-2" \
--target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/abc123
# Target tracking scaling policy
aws autoscaling put-scaling-policy \
--auto-scaling-group-name my-asg \
--policy-name cpu-target-tracking \
--policy-type TargetTrackingScaling \
--target-tracking-configuration '{
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ASGAverageCPUUtilization"
},
"TargetValue": 70.0
}'
# Step scaling policy
aws autoscaling put-scaling-policy \
--auto-scaling-group-name my-asg \
--policy-name step-scaling-policy \
--policy-type StepScaling \
--step-adjustments '[
{"MetricIntervalLowerBound":0,"ScalingAdjustment":1},
{"MetricIntervalLowerBound":30,"ScalingAdjustment":3}
]' \
--metric-aggregation-type Average \
--estimated-instance-warmup 300
# Create lifecycle hook
aws autoscaling put-lifecycle-hook \
--auto-scaling-group-name my-asg \
--lifecycle-hook-name my-hook \
--lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
--notification-target-arn arn:aws:sns:us-east-1:123456789012:my-topic \
--role-arn arn:aws:iam::123456789012:role/my-role
# Complete the lifecycle action (from Lambda or the instance itself)
aws autoscaling complete-lifecycle-action \
--lifecycle-action-result CONTINUE \
--lifecycle-hook-name my-hook \
--auto-scaling-group-name my-asg \
--lifecycle-action-token abc123
Infrastructure as Code lets you provision and manage AWS resources using templates.
# cloudformation.yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: "VPC with Public and Private Subnets"

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsHostnames: true
      EnableDnsSupport: true
      Tags:
        - Key: Name
          Value: my-vpc

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: public-subnet-1

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: private-subnet-1

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
# Create stack
aws cloudformation create-stack \
--stack-name my-vpc-stack \
--template-body file://cloudformation.yaml \
--parameters ParameterKey=Environment,ParameterValue=production  # only if the template declares an Environment parameter
# Update stack
aws cloudformation update-stack \
--stack-name my-vpc-stack \
--template-body file://cloudformation.yaml
# Delete stack
aws cloudformation delete-stack --stack-name my-vpc-stack
# Monitor stack events
aws cloudformation describe-stack-events \
--stack-name my-vpc-stack
# Python CDK app
# app.py
from aws_cdk import (
    App,
    Stack,
    aws_ec2 as ec2,
)
from constructs import Construct

class MyVpcStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        vpc = ec2.Vpc(self, "MyVPC",
            cidr="10.0.0.0/16",
            max_azs=2,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="Public",
                    subnet_type=ec2.SubnetType.PUBLIC,
                    cidr_mask=24
                ),
                ec2.SubnetConfiguration(
                    name="Private",
                    subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT,
                    cidr_mask=24
                )
            ]
        )

app = App()
MyVpcStack(app, "my-vpc-stack")
app.synth()
# Install CDK
npm install -g aws-cdk
# Initialize project
cdk init app --language python
# Synthesize to CloudFormation
cdk synth
# Deploy
cdk deploy
# List stacks
cdk list
# Destroy
cdk destroy
AWS provides comprehensive monitoring and security services to help you maintain visibility and protect your resources.
# Create CloudWatch alarm
aws cloudwatch put-metric-alarm \
--alarm-name cpu-alarm \
--alarm-description "Alarm when CPU exceeds 80%" \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--evaluation-periods 2 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
# Custom metrics
aws cloudwatch put-metric-data \
--metric-name PageViews \
--namespace MyApplication \
--value 100
# Create log group
aws logs create-log-group --log-group-name /aws/ec2/my-instance
# Stream logs from EC2
# 1. Install CloudWatch Agent on instance
# 2. Configure agent
# 3. Start service
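On Amazon Linux those three steps look roughly like this; the config-file path is an assumption (generate one with the agent's configuration wizard or write it by hand):

```shell
# 1. Install the CloudWatch Agent
sudo yum install -y amazon-cloudwatch-agent

# 2-3. Load a config file and start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a fetch-config -m ec2 \
-c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s
```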
# Query logs (CloudWatch Logs Insights)
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
# Enable GuardDuty
aws guardduty create-detector \
--enable
# Create threat intel set (plain-text list of malicious IPs/CIDRs in S3)
aws guardduty create-threat-intel-set \
--detector-id abcd1234 \
--name "my-threat-list" \
--format TXT \
--location https://s3.amazonaws.com/my-bucket/threats.txt \
--activate
# Enable additional protection features
aws guardduty update-detector \
--detector-id abcd1234 \
--features '[{"Name":"EKS_AUDIT_LOGS","Status":"ENABLED"}]'
# Enable Security Hub in the current account
aws securityhub enable-security-hub
# (Optional) Designate a delegated administrator account for the organization
aws securityhub enable-organization-admin-account \
--admin-account-id 123456789012
# Enable standards
aws securityhub batch-enable-standards \
--standards-subscription-requests StandardsArn="arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0"
# Get findings
aws securityhub get-findings \
--filters '{"SeverityLabel":[{"Value":"HIGH","Comparison":"EQUALS"}]}'
# Enable AWS Config
aws configservice put-configuration-recorder \
--configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role
# Start recording
aws configservice start-configuration-recorder \
--configuration-recorder-name default
# Create rule
aws configservice put-config-rule \
--config-rule '{
"ConfigRuleName": "s3-bucket-public-read-prohibited",
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
},
"Scope": {
"ComplianceResourceTypes": ["AWS::S3::Bucket"]
}
}'
Build reliable, performant, cost-optimized, and secure applications on AWS using the Well-Architected Framework.
# Architecture Components:
#
# Presentation Layer (Public Subnet)
# - Application Load Balancer
# - Auto Scaling Group (Web Servers)
#
# Application Layer (Private Subnet)
# - Auto Scaling Group (Application Servers)
# - EC2 or ECS/Fargate
#
# Data Layer (Isolated Subnet)
# - RDS Multi-AZ
# - ElastiCache
# - S3 (with proper bucket policies)
#
# Additional Components
# - CloudFront CDN
# - Route 53 (DNS)
# - WAF + Shield (DDoS protection)
# - CloudWatch (monitoring)
# - Systems Manager (patching)
# 1. Right-sizing instances
# Use CloudWatch metrics to identify underutilized instances
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-31T23:59:59Z \
--period 86400 \
--statistics Average
# 2. Use Reserved Instances for steady-state
# 3. Use Savings Plans for flexible compute
# 4. Use Spot for stateless workloads
# 5. Enable S3 Intelligent Tiering
# 6. Delete unused resources
# Cost Explorer
aws ce get-cost-and-usage \
--time-period Start=2024-01-01,End=2024-01-31 \
--granularity DAILY \
--metrics UnblendedCost
# Recovery strategies (RTO/RPO requirements):
#
# Backup & Restore (RPO: hours, RTO: hours)
# - Regular RDS snapshots
# - S3 cross-region replication
#
# Pilot Light (RPO: minutes, RTO: minutes)
# - Core services running minimal
# - Scale up on recovery
#
# Warm Standby (RPO: seconds, RTO: minutes)
# - Reduced capacity running
# - Scale up on recovery
#
# Multi-Region Active-Active (RPO: near zero, RTO: seconds)
# - Full capacity in multiple regions
# - Route53 health checks
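As one concrete building block, the S3 cross-region replication mentioned under Backup & Restore can be set up like this (both buckets must have versioning enabled; the role and bucket names are placeholders):

```shell
# Replication requires versioning on the source (and destination) bucket
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled

aws s3api put-bucket-replication \
--bucket my-bucket \
--replication-configuration '{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [{
"Status": "Enabled",
"Priority": 1,
"Filter": {},
"DeleteMarkerReplication": {"Status": "Disabled"},
"Destination": {"Bucket": "arn:aws:s3:::my-bucket-replica"}
}]
}'
```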
Congratulations on completing this guide! You've learned the core AWS services covered in these lessons: EC2, S3, VPC, RDS, Lambda, IAM, Route 53 and CloudFront, load balancing and auto scaling, and CloudFormation, along with monitoring, security, and well-architected best practices.
Continue your journey with hands-on projects, the official AWS documentation, and the AWS certification tracks.