AWS Beginner Guide | EC2, S3, RDS Practical Usage 2026
Key Takeaways
Covers the complete flow of launching servers with EC2, storing static files in S3, and connecting databases with RDS. Always check current free tier limits in the official documentation.
Table of Contents
- What is AWS?
- AWS Account Creation & Setup
- EC2 - Virtual Server
- S3 - File Storage
- RDS - Managed Database
- Lambda - Serverless
- CloudFront - CDN
- Cost Management
Prerequisites (Basics for Beginners)
1. What is Cloud?
Cloud is renting computing and storage over the internet.
Traditional Method (On-Premises):
- Buy servers directly (tens of thousands of dollars)
- Build data center
- Manage maintenance directly
- High initial cost
Cloud Method:
- Rent only what you need
- Pay only for what you use
- AWS handles maintenance
- Almost no initial cost
On-premises means buying and running equipment yourself, while cloud is closer to renting what you need and returning it. Cost and operation methods differ.
2. Cloud Service Models
IaaS (Infrastructure as a Service)
- Provides virtual servers, storage, networking
- Examples: AWS EC2, S3
PaaS (Platform as a Service)
- Provides application execution environment
- Examples: AWS Elastic Beanstalk, Heroku
SaaS (Software as a Service)
- Provides complete software
- Examples: Gmail, Notion, Slack
3. AWS Main Services
Computing:
- EC2: Virtual server
- Lambda: Serverless function
Storage:
- S3: File storage
- EBS: Block storage
Database:
- RDS: Relational DB (MySQL, PostgreSQL)
- DynamoDB: NoSQL DB
Networking:
- VPC: Virtual network
- CloudFront: CDN
- Route 53: DNS
Security:
- IAM: Permission management
- Cognito: User authentication
1. What is AWS?
AWS Advantages
1. Scalability
- Auto-scale on traffic increase
- Add servers with few clicks
2. Reliability
- 99.99% uptime guarantee
- Global data centers
3. Cost Efficiency
- Pay only for what you use
- Free tier provided
4. Security
- Physical security
- DDoS protection
- Encryption support
AWS Regions and Availability Zones
Region:
- Geographic location (Seoul, Tokyo, US, etc.)
- Each region is independent
Availability Zone:
- Independent data center within region
- Seoul Region: ap-northeast-2a, ap-northeast-2b, ap-northeast-2c
Example:
┌─────────────────────────────────┐
│ Seoul Region (ap-northeast-2) │
│ ┌──────┐ ┌──────┐ ┌──────┐ │
│ │ AZ-a │ │ AZ-b │ │ AZ-c │ │
│ │ DC 1 │ │ DC 2 │ │ DC 3 │ │
│ └──────┘ └──────┘ └──────┘ │
└─────────────────────────────────┘
Even if AZ-a fails, AZ-b and AZ-c operate normally
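The failover idea can be sketched in code. This is a toy illustration of the concept, not an AWS API: the zone names follow the Seoul region pattern from the diagram above, and `pickHealthyAz` is a function invented here.

```javascript
// Toy illustration of why multiple AZs matter: route to the first healthy zone.
function pickHealthyAz(zones) {
  const healthy = zones.filter((z) => z.healthy);
  return healthy.length > 0 ? healthy[0].name : null;
}

const zones = [
  { name: 'ap-northeast-2a', healthy: false }, // AZ-a has failed
  { name: 'ap-northeast-2b', healthy: true },
  { name: 'ap-northeast-2c', healthy: true },
];

console.log(pickHealthyAz(zones)); // ap-northeast-2b
```

In practice a Multi-AZ RDS deployment or a load balancer spanning AZs performs this failover for you.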
2. AWS Account Creation & Setup
Account Creation
1. Visit https://aws.amazon.com/
2. Click "Create AWS Account"
3. Enter email and password
4. Register credit card (no charge with free tier)
5. Identity verification (phone)
6. Select support plan (Basic - Free)
Free Tier
12 Months Free:
- EC2: t2.micro instance, 750 hours/month
- S3: 5GB standard storage
- RDS: db.t2.micro or db.t3.micro (varies by region), 750 hours/month
Always Free:
- Lambda: 1 million requests/month
- DynamoDB: 25GB storage
- CloudWatch: 10 custom metrics
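A quick sanity check on the 750-hour EC2 allowance: one instance running nonstop uses at most 744 hours in a 31-day month, so a single t2.micro stays free, but two instances running in parallel do not. A small sketch (`freeTierHoursUsed` is a helper name invented here):

```javascript
// Instances running in parallel each consume free-tier hours separately.
function freeTierHoursUsed(instanceCount, hoursPerDay, days) {
  return instanceCount * hoursPerDay * days;
}

const FREE_HOURS = 750;
console.log(freeTierHoursUsed(1, 24, 31)); // 744 -> within the 750 free hours
console.log(freeTierHoursUsed(2, 24, 31)); // 1488 -> the excess is billed
```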
IAM User Creation (Security)
⚠️ Never use root account directly!
1. Access IAM console
2. "Users" → "Add user"
3. Username: admin-user
4. Permissions: AdministratorAccess
5. Create access key (for CLI use)
6. Enable MFA (2-factor authentication)
3. EC2 - Virtual Server
What is EC2?
EC2 (Elastic Compute Cloud) is AWS’s virtual server.
Analogy: Renting a computer
- Choose desired specs (CPU, memory)
- Choose desired OS (Ubuntu, Amazon Linux)
- Can turn on/off as needed
- Charged for usage time
Creating EC2 Instance
1) Access EC2 Console
1. AWS Console → EC2
2. Click "Launch Instance"
2) Configuration
Name: my-web-server
AMI (Operating System):
- Select Ubuntu Server 22.04 LTS
Instance Type:
- t2.micro (free tier)
- vCPU: 1, Memory: 1GB
Key Pair:
- Click "Create new key pair"
- Name: my-key
- Download .pem file (for SSH access)
Network Settings:
- Security group: Allow SSH (22), HTTP (80), HTTPS (443)
Storage:
- 8GB (free tier max 30GB)
Click "Launch Instance"
3) SSH Connection
# Linux/Mac
chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@<Public-IP>
# Windows (PowerShell)
ssh -i my-key.pem ubuntu@<Public-IP>
Building Web Server
# 1. Update packages
sudo apt update && sudo apt upgrade -y
# 2. Install Node.js
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install -y nodejs
# 3. Deploy application
git clone https://github.com/your-repo/app.git
cd app
npm install
npm run build
# 4. Process management with PM2
sudo npm install -g pm2
pm2 start npm --name "my-app" -- start
pm2 startup
pm2 save
# 5. Nginx setup (reverse proxy)
sudo apt install -y nginx
sudo nano /etc/nginx/sites-available/default
Nginx Configuration:
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
# Restart Nginx
sudo systemctl restart nginx
4. S3 - File Storage
What is S3?
S3 (Simple Storage Service) is object storage: you upload files as objects into buckets, and capacity scales automatically with usage.
Uses:
- Store images and videos
- Static website hosting
- Backup and archive
- Store log files
Features:
- Unlimited capacity
- 99.999999999% (11 nines) durability
- Low cost
Creating S3 Bucket
1. S3 Console → "Create bucket"
2. Bucket name: my-app-bucket (must be globally unique)
3. Region: Asia Pacific (Seoul)
4. Block public access: Check (security)
5. Click "Create bucket"
File Upload
AWS CLI Installation:
# macOS
brew install awscli
# Windows
choco install awscli
# Linux (apt installs AWS CLI v1; see the AWS docs for the v2 installer)
sudo apt install awscli
# Configuration
aws configure
# AWS Access Key ID: [enter]
# AWS Secret Access Key: [enter]
# Default region: ap-northeast-2
# Default output format: json
File Upload:
# Upload single file
aws s3 cp image.jpg s3://my-app-bucket/images/
# Upload entire folder
aws s3 sync ./dist s3://my-app-bucket/website/
# List files
aws s3 ls s3://my-app-bucket/
# Download file
aws s3 cp s3://my-app-bucket/images/image.jpg ./
Using S3 in Node.js
npm install @aws-sdk/client-s3
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import fs from 'fs';
import { pipeline } from 'stream/promises';

const s3Client = new S3Client({ region: 'ap-northeast-2' });

// Upload a file
async function uploadFile(filePath, key) {
  const fileContent = fs.readFileSync(filePath);
  const command = new PutObjectCommand({
    Bucket: 'my-app-bucket',
    Key: key,
    Body: fileContent,
    ContentType: 'image/jpeg'
  });
  await s3Client.send(command);
  console.log('Upload success:', key);
}

// Download a file (await the pipeline so the function doesn't
// return before the write stream has finished)
async function downloadFile(key, outputPath) {
  const command = new GetObjectCommand({
    Bucket: 'my-app-bucket',
    Key: key
  });
  const response = await s3Client.send(command);
  await pipeline(response.Body, fs.createWriteStream(outputPath));
}

// Usage
await uploadFile('./image.jpg', 'uploads/image.jpg');
await downloadFile('uploads/image.jpg', './downloaded.jpg');
S3 Static Website Hosting
1. S3 Bucket → "Properties" tab
2. Edit "Static website hosting"
3. Select "Enable"
4. Index document: index.html
5. Error document: error.html
6. "Save changes"
7. "Permissions" tab → Edit "Bucket policy"
Bucket Policy (Allow public read):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}
# Deploy website
aws s3 sync ./dist s3://my-app-bucket --delete
# Access
# http://my-app-bucket.s3-website.ap-northeast-2.amazonaws.com
5. RDS - Managed Database
What is RDS?
RDS (Relational Database Service) is a service where AWS manages MySQL, PostgreSQL, etc., including patches and backups.
Direct Management:
- Install DB
- Configure backups
- Security patches
- Monitoring
→ Complex and time-consuming
Using RDS:
- Create DB with few clicks
- Automatic backups
- Automatic patches
- Built-in monitoring
→ Simple and stable
Creating RDS Instance
1. RDS Console → "Create database"
2. Engine Selection:
- PostgreSQL 15.4 (recommended)
- MySQL 8.0
- MariaDB
3. Template:
- Free tier
4. Settings:
- DB instance identifier: my-database
- Master username: admin
- Master password: [strong password]
5. Instance Configuration:
- db.t3.micro (free tier)
6. Storage:
- 20GB (free tier max)
7. Connectivity:
- VPC: Default VPC
- Public access: Yes (for testing)
- Security group: Create new
8. Click "Create database"
Connecting to RDS
Security Group Settings:
1. EC2 Console → "Security Groups"
2. Select RDS security group
3. "Edit inbound rules"
4. Allow PostgreSQL (5432)
- Source: My IP or EC2 security group
Node.js Connection:
npm install pg
const { Pool } = require('pg');

const pool = new Pool({
  host: 'my-database.xxxxx.ap-northeast-2.rds.amazonaws.com',
  port: 5432,
  user: 'admin',
  password: 'your-password',
  database: 'postgres',
  // For quick testing only; in production, verify the RDS CA certificate
  // instead of disabling certificate checks
  ssl: {
    rejectUnauthorized: false
  }
});

// Execute a query
async function getUsers() {
  const result = await pool.query('SELECT * FROM users');
  return result.rows;
}

// Usage
const users = await getUsers();
console.log(users);
RDS Backup and Restore
Automatic Backup:
- Daily automatic backup
- Retention period: 7 days (configurable)
- Point-in-time restore available
Manual Snapshot:
1. RDS Console → Select instance
2. "Actions" → "Take snapshot"
3. Enter snapshot name
4. Click "Take snapshot"
Restore:
1. "Snapshots" menu
2. Select snapshot → "Actions" → "Restore snapshot"
3. Restored as new instance
6. Lambda - Serverless
What is Lambda?
Lambda is a service that runs your code in response to events, without you provisioning or managing any servers.
Traditional Method:
- EC2 instance runs 24 hours
- Cost incurred even without requests
- Server management needed
Lambda:
- Executes only when requests come
- Charged only for execution time
- No server management needed
Creating Lambda Function
1. Lambda Console → "Create function"
2. Select "Author from scratch"
3. Function name: hello-world
4. Runtime: Node.js 18.x
5. Click "Create function"
Write Code:
export const handler = async (event) => {
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: `Hello, ${name}!`
    })
  };
};
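The handler logic above can be exercised locally before deploying. A minimal sketch: the event shape mimics an API Gateway proxy event, and `buildHello` is a helper name invented here so the response-building logic can run without AWS.

```javascript
// Pure response builder: same logic as the handler, testable without AWS.
function buildHello(event) {
  const name = (event.queryStringParameters && event.queryStringParameters.name) || 'World';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

// Simulate two API Gateway events locally.
console.log(buildHello({ queryStringParameters: { name: 'Alice' } }).body);
// {"message":"Hello, Alice!"}
console.log(buildHello({}).body);
// {"message":"Hello, World!"}
```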
API Gateway Integration
1. Lambda function → "Add trigger"
2. Select "API Gateway"
3. Create "HTTP API"
4. Security: Open
5. Click "Add"
API endpoint created:
https://xxxxx.execute-api.ap-northeast-2.amazonaws.com/default/hello-world
Test:
curl "https://xxxxx.execute-api.ap-northeast-2.amazonaws.com/default/hello-world?name=Alice"
# {"message":"Hello, Alice!"}
Lambda Real Example: Image Resizing
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

const s3Client = new S3Client({ region: 'ap-northeast-2' });

export const handler = async (event) => {
  // Extract file info from the S3 event (keys arrive URL-encoded)
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  // Download the original image as a buffer (sharp needs a buffer or path)
  const getCommand = new GetObjectCommand({ Bucket: bucket, Key: key });
  const { Body } = await s3Client.send(getCommand);
  const original = Buffer.from(await Body.transformToByteArray());

  // Resize the image
  const resized = await sharp(original)
    .resize(800, 600, { fit: 'inside' })
    .jpeg({ quality: 80 })
    .toBuffer();

  // Upload the resized image
  const putCommand = new PutObjectCommand({
    Bucket: bucket,
    Key: `resized/${key}`,
    Body: resized,
    ContentType: 'image/jpeg'
  });
  await s3Client.send(putCommand);

  return { statusCode: 200, body: 'Success' };
};
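One subtlety with the resize flow above: if the function writes back into the same bucket that triggers it, each resized object fires the event again, causing an infinite loop (and a bill). Either scope the S3 trigger to an input prefix, or guard in code. A minimal sketch of the guard (`resizeTargetKey` is a function name invented here):

```javascript
// Skip keys already under the output prefix to break the recursion.
function resizeTargetKey(key, outputPrefix = 'resized/') {
  if (key.startsWith(outputPrefix)) return null; // already processed: skip
  return `${outputPrefix}${key}`;
}

console.log(resizeTargetKey('uploads/photo.jpg')); // resized/uploads/photo.jpg
console.log(resizeTargetKey('resized/uploads/photo.jpg')); // null
```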
7. CloudFront - CDN
What is CloudFront?
CloudFront is AWS’s CDN: it caches content at edge locations around the world so users are served from a nearby location with lower latency.
Without CDN:
User (Korea) → Server (US)
- Distance: 10,000km
- Response time: 500ms
With CDN:
User (Korea) → CDN Edge (Seoul) → Server (US)
- Distance: 10km (when cached)
- Response time: 10ms (50x faster!)
Creating CloudFront Distribution
1. CloudFront Console → "Create distribution"
2. Origin domain:
- Select S3 bucket: my-app-bucket.s3.amazonaws.com
3. Origin access:
- Select Origin access control (OAC, the recommended successor to the legacy OAI)
- Copy the generated policy into the S3 bucket policy
4. Viewer protocol policy:
- Redirect HTTP to HTTPS
5. Cache policy:
- CachingOptimized
6. Click "Create distribution"
Distribution domain:
https://d123456.cloudfront.net
Connecting Custom Domain
1. Buy or register domain in Route 53
2. CloudFront distribution → "General" tab
3. "Edit" → Add "Alternate domain names"
- cdn.example.com
4. Request an SSL certificate in ACM (for CloudFront, the certificate must be in us-east-1)
5. Add CNAME record in Route 53
- cdn.example.com → d123456.cloudfront.net
8. Real Architecture
3-Tier Architecture
┌──────────────────────────────────────────┐
│ CloudFront (CDN) │
│ Static files (images, CSS, JS) │
└────────────────┬─────────────────────────┘
│
┌────────────────▼─────────────────────────┐
│ Application Load Balancer │
│ (Traffic distribution) │
└────┬─────────────────────┬────────────────┘
│ │
┌────▼────┐ ┌─────▼────┐
│ EC2 1 │ │ EC2 2 │ ← Auto Scaling
│ (Web) │ │ (Web) │
└────┬────┘ └─────┬────┘
│ │
└──────────┬───────────┘
│
┌──────▼──────┐
│ RDS (DB) │
│ Multi-AZ │
└─────────────┘
Cost-Optimized Architecture
┌──────────────────────────────────────────┐
│ S3 + CloudFront │
│ Static website hosting │
│ (Very cheap, high performance) │
└────────────────┬─────────────────────────┘
│
│ API requests only
│
┌───────▼────────┐
│ Lambda + API │
│ Gateway │
│ (Serverless) │
└───────┬────────┘
│
┌───────▼────────┐
│ DynamoDB │
│ (NoSQL) │
└────────────────┘
Monthly cost: $5~$10 (low traffic)
9. Security Best Practices
IAM Permission Management
Principle of Least Privilege:
// ❌ Too many permissions
{
  "Effect": "Allow",
  "Action": "*",
  "Resource": "*"
}

// ✅ Only necessary permissions
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": "arn:aws:s3:::my-app-bucket/*"
}
Security Group Settings
❌ Bad example:
- SSH (22): 0.0.0.0/0 (allow worldwide)
- Hacking risk!
✅ Good example:
- SSH (22): Allow only my IP
- HTTP (80): 0.0.0.0/0 (web is public)
- HTTPS (443): 0.0.0.0/0
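The good/bad distinction above boils down to one rule: administrative ports must never be open to `0.0.0.0/0`. A small sketch of an audit check (the rule shape and `riskyRules` helper are invented here for illustration; real audits would read rules via the EC2 API or AWS Config):

```javascript
// Flag inbound rules that expose administrative ports to the whole internet.
const ADMIN_PORTS = new Set([22, 3389, 5432, 3306]); // SSH, RDP, PostgreSQL, MySQL

function riskyRules(rules) {
  return rules.filter((r) => r.source === '0.0.0.0/0' && ADMIN_PORTS.has(r.port));
}

const rules = [
  { port: 22, source: '0.0.0.0/0' },      // bad: SSH open to the world
  { port: 80, source: '0.0.0.0/0' },      // fine: public web traffic
  { port: 443, source: '0.0.0.0/0' },     // fine
  { port: 22, source: '203.0.113.5/32' }, // fine: SSH from a single IP
];

console.log(riskyRules(rules)); // [ { port: 22, source: '0.0.0.0/0' } ]
```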
Secret Management
AWS Secrets Manager:
# Store secret
aws secretsmanager create-secret \
--name myapp/database \
--secret-string '{"username":"admin","password":"secret123"}'
# Retrieve secret
aws secretsmanager get-secret-value --secret-id myapp/database
// Use in Node.js
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({ region: 'ap-northeast-2' });

async function getSecret(secretName) {
  const command = new GetSecretValueCommand({ SecretId: secretName });
  const response = await client.send(command);
  return JSON.parse(response.SecretString);
}

const dbCreds = await getSecret('myapp/database');
console.log(dbCreds.username, dbCreds.password);
10. Cost Management
Free Tier Monitoring
1. Billing Console → "Free Tier"
2. Check usage
3. Set alerts:
- Enable "Billing alerts"
- Create CloudWatch alarm
- Budget: $10
Cost Reduction Tips
1) Stop EC2 Instance
# Stop when not in use (only storage cost incurred)
aws ec2 stop-instances --instance-ids i-xxxxx
# Restart
aws ec2 start-instances --instance-ids i-xxxxx
2) S3 Lifecycle Policy
{
  "Rules": [
    {
      "Id": "DeleteOldLogs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 30 }
    },
    {
      "Id": "ArchiveOldData",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
3) RDS Reserved Instances
Purchase reserved instances when you know the database will run long term:
- 1-year commitment: roughly 40% discount
- 3-year commitment: roughly 60% discount
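The savings are easy to put in numbers. A rough sketch using the approximate discount rates quoted above (these are illustrative, not an official price list; `reservedMonthlyCost` is a helper invented here):

```javascript
// Apply the approximate reserved-instance discount to an on-demand price.
function reservedMonthlyCost(onDemandMonthly, commitmentYears) {
  const discount = commitmentYears >= 3 ? 0.6 : commitmentYears >= 1 ? 0.4 : 0;
  return onDemandMonthly * (1 - discount);
}

// A $25/month on-demand database (like RDS t3.small above):
console.log(reservedMonthlyCost(25, 0)); // 25 (on-demand)
console.log(reservedMonthlyCost(25, 1)); // 15
console.log(reservedMonthlyCost(25, 3)); // 10
```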
11. Monitoring
CloudWatch
Check Logs:
# Send logs to CloudWatch Logs manually
# (Lambda and the CloudWatch agent do this automatically;
#  the log group and stream must exist before you can push events)
aws logs create-log-group --log-group-name /aws/lambda/my-function
aws logs create-log-stream \
    --log-group-name /aws/lambda/my-function \
    --log-stream-name 2026/03/31
aws logs put-log-events \
    --log-group-name /aws/lambda/my-function \
    --log-stream-name 2026/03/31 \
    --log-events timestamp=$(date +%s)000,message="Hello World"
    # timestamp is epoch milliseconds; events older than 14 days are rejected
Set Alarms:
1. CloudWatch Console → "Alarms"
2. "Create alarm"
3. Select metric:
- EC2 > CPU Utilization
4. Condition:
- CPU > 80% (for 5 minutes)
5. Notification:
- Create SNS topic
- Enter email address
6. "Create alarm"
12. CI/CD Pipeline
GitHub Actions + AWS
.github/workflows/deploy.yml
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # Job-level credentials so both the S3 and CloudFront steps can use them
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ap-northeast-2
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Deploy to S3
        run: aws s3 sync ./dist s3://my-app-bucket --delete
      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id E123456 \
            --paths "/*"
13. Real Project: Full-Stack App Deployment
Architecture
┌─────────────────────────────────────────┐
│ CloudFront + S3 (React Frontend) │
└────────────────┬────────────────────────┘
│
┌───────▼────────┐
│ API Gateway │
│ + Lambda │
│ (Backend) │
└───────┬────────┘
│
┌───────▼────────┐
│ RDS │
│ (PostgreSQL) │
└────────────────┘
Step 1: Frontend Deployment (S3 + CloudFront)
# Build React app
npm run build
# Upload to S3
aws s3 sync ./build s3://my-app-frontend
# Create CloudFront distribution (explained above)
Step 2: Backend Deployment (Lambda)
handler.js
import { Pool } from 'pg';

const pool = new Pool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME
});

export const handler = async (event) => {
  const { httpMethod, path } = event;

  // GET /api/users
  if (httpMethod === 'GET' && path === '/api/users') {
    const result = await pool.query('SELECT * FROM users');
    return {
      statusCode: 200,
      body: JSON.stringify(result.rows)
    };
  }

  // POST /api/users
  if (httpMethod === 'POST' && path === '/api/users') {
    const { name, email } = JSON.parse(event.body);
    const result = await pool.query(
      'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
      [name, email]
    );
    return {
      statusCode: 201,
      body: JSON.stringify(result.rows[0])
    };
  }

  return {
    statusCode: 404,
    body: JSON.stringify({ error: 'Not Found' })
  };
};
Step 3: Database Setup (RDS)
-- After connecting to RDS PostgreSQL
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  email VARCHAR(100) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO users (name, email) VALUES
('Alice', '[email protected]'),
('Bob', '[email protected]');
14. Troubleshooting
Common Issues
1) EC2 SSH Connection Failed
# Cause: Key file permission issue
# Solution:
chmod 400 my-key.pem
# Cause: Security group settings
# Solution: Open SSH (22) port in security group
2) S3 File Access Denied
Cause: Bucket policy or public access block
Solution:
1. Check "Block public access" settings
2. Add s3:GetObject permission to bucket policy
3) RDS Connection Failed
Cause: Security group or network settings
Solution:
1. Open PostgreSQL (5432) port in security group
2. Enable public access (for testing)
3. Check VPC settings
FAQ
Q1. Is AWS free tier really free?
No charges if used within limits. However, charges may apply after exceeding limits, in certain regions, or after 12 months, so it’s safe to enable billing alerts.
Q2. Should I choose EC2 or Lambda?
Choose EC2:
- Need 24-hour execution
- Complex application
- Need to maintain state
Choose Lambda:
- Intermittent execution
- Short execution time (<15 min)
- Don't want to manage servers
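The choice is partly about cost, and a back-of-envelope comparison makes the trade-off concrete. The prices below are illustrative assumptions, not current AWS rates (check the pricing pages before deciding): Lambda bills per request plus per GB-second of compute, while an always-on instance costs the same regardless of traffic.

```javascript
// Illustrative prices (assumptions, not an official price list).
const EC2_MONTHLY = 30;            // small always-on instance, USD/month
const LAMBDA_PER_MILLION = 0.20;   // USD per 1M requests
const LAMBDA_GBSEC = 0.0000166667; // USD per GB-second of compute

function lambdaMonthlyCost(requests, avgMs, memoryGb) {
  const requestCost = (requests / 1e6) * LAMBDA_PER_MILLION;
  const computeCost = requests * (avgMs / 1000) * memoryGb * LAMBDA_GBSEC;
  return requestCost + computeCost;
}

// 1M requests/month, 100 ms at 128 MB: far cheaper than the instance.
console.log(lambdaMonthlyCost(1e6, 100, 0.125) < EC2_MONTHLY); // true
// 100M requests/month: the always-on server starts to win.
console.log(lambdaMonthlyCost(100e6, 100, 0.125) > EC2_MONTHLY); // true
```

The crossover point moves with memory size and execution time, which is why intermittent, short workloads favor Lambda.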
Q3. How much will it cost?
Small web app (estimated monthly cost):
- EC2 t2.micro: Free (12 months)
- S3 (5GB): Free
- RDS t3.micro: Free (12 months)
- CloudFront (10GB): $1
- Total: $1~$5
Medium service:
- EC2 t3.medium: $30
- S3 (100GB): $2
- RDS t3.small: $25
- CloudFront (1TB): $85
- Total: $150~$200
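Summing the small-web-app line items above is a useful habit: keep a list of services and expected monthly cost, and recompute it whenever the architecture changes. A minimal sketch using the figures from the table (free-tier items contribute 0 during the first 12 months):

```javascript
// Line items for the small web app estimate above (USD/month).
const smallApp = [
  { service: 'EC2 t2.micro', monthly: 0 },   // free tier
  { service: 'S3 (5GB)', monthly: 0 },       // free tier
  { service: 'RDS t3.micro', monthly: 0 },   // free tier
  { service: 'CloudFront (10GB)', monthly: 1 },
];

const total = smallApp.reduce((sum, item) => sum + item.monthly, 0);
console.log(total); // 1
```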
Summary
Key Points
EC2:
- Virtual server
- Complete control
- Server management needed
S3:
- File storage
- Unlimited capacity
- Static hosting possible
RDS:
- Managed database
- Automatic backups
- High availability
Lambda:
- Serverless function
- Pay only for usage
- Auto-scaling
AWS Learning Roadmap
Start with an account, IAM, and a first EC2 instance you can SSH into, then add S3 and RDS for files and data. From there, expand the network with VPC, a load balancer, and CloudFront, and add auto scaling, CI/CD, and observability as the service grows.
Keywords: AWS, Cloud, EC2, S3, RDS, Lambda, CloudFront, Amazon, Serverless, IAM