Linux & Mac Command Cheatsheet for Developers | Terminal, Network & Debugging

Key Takeaway

The terminal is where debugging happens. This cheatsheet covers the commands developers actually use every day — from file manipulation and network diagnostics to log analysis and process management.

Directory & File Management

# Current directory
pwd

# Navigate
cd /var/log
cd ~              # Home directory
cd -              # Previous directory
cd ..             # Parent directory

# List files
ls -la            # All files, detailed (permissions, size, date)
ls -lh            # Human-readable sizes (KB, MB, GB)
ls -lt            # Sort by modification time, newest first
ls -lS            # Sort by size, largest first

# Tree view (install: brew install tree)
tree -L 2         # 2 levels deep
tree -d           # Directories only
tree -I 'node_modules|.git'  # Exclude patterns

# Create directories
mkdir -p parent/child/grandchild   # Create all intermediate dirs

# Copy, move, delete
cp -r src/ dest/          # Copy directory recursively
cp -p file backup         # Preserve permissions and timestamps
mv old.txt new.txt        # Rename / move
rm -rf dir/               # Force delete directory (⚠️ irreversible)
rm -ri dir/               # Interactive delete (confirm each file)

# Find files
find . -name "*.log"                        # By name
find . -name "*.js" -not -path "*/node_modules/*"  # Exclude path
find . -newer reference.txt                 # Modified after reference file
find . -size +100M                          # Files over 100 MB
find . -type f -mtime -7                    # Modified in last 7 days
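
A runnable sketch of the patterns above, using a throwaway directory under /tmp (all paths are illustrative):

```shell
# Build a small tree to search (illustrative paths)
rm -rf /tmp/find-demo
mkdir -p /tmp/find-demo/sub
touch /tmp/find-demo/a.log /tmp/find-demo/sub/b.log /tmp/find-demo/c.txt

# All .log files under the tree (finds a.log and b.log)
find /tmp/find-demo -name "*.log"

# Regular files modified in the last 7 days (all three, since we just created them)
find /tmp/find-demo -type f -mtime -7
```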

# View file content
cat file.txt                     # Print entire file
head -20 file.txt                # First 20 lines
tail -50 file.txt                # Last 50 lines
tail -f /var/log/app.log         # Follow (live updates)
less file.txt                    # Paginated view (q to quit)

# Search with grep
grep "error" app.log             # Lines containing "error"
grep -i "error" app.log          # Case-insensitive
grep -r "TODO" ./src/            # Recursive search in directory
grep -n "error" app.log          # Show line numbers
grep -v "debug" app.log          # Exclude lines matching pattern
grep -A 3 "error" app.log        # 3 lines AFTER each match
grep -B 3 "error" app.log        # 3 lines BEFORE each match
grep -C 3 "error" app.log        # 3 lines before AND after
grep -l "error" *.log            # Only filenames that match
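
The context and counting flags are easiest to see against a tiny log; a sketch with a throwaway file:

```shell
# Four-line log with one matching line (illustrative content)
printf 'setup\nERROR timeout\ncleanup\ndone\n' > /tmp/grep-demo.log

grep -A 1 "ERROR" /tmp/grep-demo.log   # prints the match plus the line after it
grep -c "ERROR" /tmp/grep-demo.log     # prints 1 (one matching line)
```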

# Ripgrep (faster, install: brew install ripgrep)
rg "error" ./src/                # Same as grep -r but much faster
rg -i "todo" --type js           # Search only .js files

# Stream editing with sed
sed 's/old/new/g' file.txt       # Replace all occurrences
sed 's/old/new/' file.txt        # Replace first occurrence per line
sed -i 's/old/new/g' file.txt    # Edit file in place (Linux)
sed -i '' 's/old/new/g' file.txt # Edit file in place (macOS)
sed '/^#/d' file.txt             # Delete comment lines
sed -n '10,20p' file.txt         # Print lines 10 to 20
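
Without -i, sed only prints the transformed text; a sketch on a throwaway file:

```shell
# Throwaway input file (illustrative content)
printf 'old one\n# comment\nold two\n' > /tmp/sed-demo.txt

sed 's/old/new/g' /tmp/sed-demo.txt   # prints replacements; the file itself is untouched
sed '/^#/d' /tmp/sed-demo.txt         # prints the file without comment lines
```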

# Data extraction with awk
awk '{print $1}' file.txt        # Print first field (space-delimited)
awk -F',' '{print $2}' data.csv  # Print second column of CSV
awk 'NR > 1' file.txt            # Skip the first line (header)
awk '/error/ {print $0}' log.txt # Print lines matching pattern
awk '{sum += $3} END {print sum}' data.txt  # Sum third column
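
These pieces combine naturally; a sketch that sums a CSV column while skipping the header (throwaway data):

```shell
# Throwaway CSV with a header row (illustrative content)
printf 'id,name,amount\n1,alice,10\n2,bob,20\n3,carol,30\n' > /tmp/awk-demo.csv

# -F',' splits on commas, NR > 1 skips the header, END prints the total: 60
awk -F',' 'NR > 1 {sum += $3} END {print sum}' /tmp/awk-demo.csv
```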

# Word count
wc -l file.txt                   # Line count
wc -w file.txt                   # Word count
wc -c file.txt                   # Byte count

Network Debugging

# Test connectivity
ping -c 4 google.com             # Send 4 pings (flags before the host work on both Linux and macOS)
traceroute google.com            # Trace network path (Linux/macOS)
tracert google.com               # Windows equivalent
mtr google.com                   # Better traceroute (install: brew install mtr)

# DNS lookup
nslookup google.com              # DNS query
dig google.com                   # Detailed DNS info
dig google.com +short            # Just the IP address

# HTTP requests with curl
curl https://api.example.com/data             # GET request
curl -I https://example.com                   # Headers only (HEAD)
# POST with JSON (comments after a trailing backslash break the continuation, so they go above)
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "email": "[email protected]"}'
# With auth header
curl -H "Authorization: Bearer TOKEN" https://api.example.com/me
curl -o output.html https://example.com       # Save to file
curl -L https://example.com                   # Follow redirects
curl -s https://api.example.com | jq .        # Pipe to jq for JSON formatting

# Check open ports and connections
ss -tlnp                         # All listening TCP ports + PIDs (Linux)
netstat -tlnp                    # Similar (older, Linux flags; macOS needs netstat -an -p tcp instead)
lsof -i :8080                    # What's using port 8080 (macOS/Linux)
lsof -i -P | grep LISTEN         # All listening sockets

# Port scanning (install: brew install nmap)
nmap -p 80,443,8080 hostname     # Check specific ports
nmap -p 1-1000 hostname          # Scan port range

# View network interfaces
ip addr show                     # Linux (modern)
ifconfig                         # Linux/macOS (older)

Process Management

# View running processes
ps aux                           # All processes with details
ps aux | grep node               # Find node processes
ps -ef | grep python             # Alternative syntax

# Interactive process viewer
top                              # Basic (press q to quit)
htop                             # Better UI (install: brew install htop)
btop                             # Modern UI (install: brew install btop)

# Kill processes
kill 12345                       # Send SIGTERM (graceful stop) to PID
kill -9 12345                    # Send SIGKILL (force stop) — use when SIGTERM fails
pkill node                       # Kill all processes named "node"
pkill -f "python app.py"         # Kill by full command string

# Background and foreground
command &                        # Run in background
Ctrl+C                           # Kill foreground process
Ctrl+Z                           # Suspend foreground process
bg                               # Resume suspended process in background
fg                               # Bring background process to foreground
jobs                             # List background jobs

# Process priority
nice -n 10 command               # Run with lower priority (10 = less CPU)
renice -n -5 12345               # Change priority of running process (raising priority needs sudo)

Log Analysis

# Real-time log following
tail -f /var/log/app.log                    # Follow log file
tail -f /var/log/app.log | grep "ERROR"     # Filter while following
tail -f /var/log/nginx/access.log | grep -v "health"  # Exclude health checks

# Search logs with date filtering
grep "2026-04-15" /var/log/app.log          # Specific date
grep "ERROR\|WARN" /var/log/app.log         # Multiple patterns (OR)
grep "ERROR" /var/log/app.log | tail -100   # Last 100 errors

# Count occurrences
grep -c "ERROR" app.log                     # Count matching lines
grep "ERROR" app.log | sort | uniq -c | sort -rn  # Count unique errors
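
A sketch of that pipeline against a throwaway log with a repeated error:

```shell
# Throwaway log: one error appears twice (illustrative content)
printf 'ERROR db timeout\nINFO ok\nERROR db timeout\nERROR auth failed\n' > /tmp/count-demo.log

# Most frequent error first: "2 ERROR db timeout", then "1 ERROR auth failed"
grep "ERROR" /tmp/count-demo.log | sort | uniq -c | sort -rn
```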

# Extract fields from structured logs
# JSON logs: cat app.log | jq 'select(.level == "error")'
# Apache access log: awk '{print $7}' access.log | sort | uniq -c | sort -rn

# journalctl (systemd systems)
journalctl -u nginx                          # Logs for nginx service
journalctl -u nginx --since "1 hour ago"    # Last hour
journalctl -u nginx -f                      # Follow
journalctl -p err                           # Only errors

Disk & System Information

# Disk usage
df -h                            # All filesystems, human-readable
df -h /var                       # Specific path

du -sh *                         # Size of each item in current dir
du -sh /var/log/*                # Size of each log file
du -sh * | sort -h               # Sort by size
du -sh * | sort -rh | head -10   # Top 10 largest

# Find large files
find / -size +1G 2>/dev/null     # Files over 1 GB
find /var/log -name "*.log" -size +100M  # Large log files

# System info
uname -a                         # OS and kernel info
uptime                           # System uptime and load averages
free -h                          # Memory usage (Linux)
vm_stat                          # Memory stats (macOS)
nproc                            # CPU core count (Linux)
sysctl -n hw.ncpu                # CPU core count (macOS)

# Load averages (from uptime or top)
# Format: 1min 5min 15min
# If value > CPU cores → overloaded
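
That rule of thumb can be scripted; a Linux-only sketch (it reads /proc/loadavg, which macOS lacks):

```shell
# Compare the 1-minute load average to the core count (Linux only)
cores=$(nproc)
load1=$(awk '{print $1}' /proc/loadavg)
awk -v l="$load1" -v c="$cores" \
  'BEGIN { if (l + 0 > c + 0) print "overloaded"; else print "ok" }'
```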

Permissions & Ownership

# View permissions
ls -la file.txt
# -rw-r--r-- 1 alice staff 1234 Apr 15 10:00 file.txt
# ↑ permissions  ↑owner ↑group

# chmod — change permissions
chmod 755 script.sh    # rwxr-xr-x (owner: rwx, group: r-x, other: r-x)
chmod 644 file.txt     # rw-r--r-- (owner: rw, group: r, other: r)
chmod +x script.sh     # Add execute permission for all
chmod -R 755 ./dir/    # Recursive

# Common permission values
# 7 = rwx  (read + write + execute)
# 6 = rw-  (read + write)
# 5 = r-x  (read + execute)
# 4 = r--  (read only)
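
A quick way to verify that an octal value took effect is stat; a sketch on a throwaway file (stat -c is GNU/Linux syntax, macOS uses stat -f '%Lp'):

```shell
# Set permissions, then read them back as octal (GNU stat syntax)
touch /tmp/perm-demo.txt
chmod 644 /tmp/perm-demo.txt
stat -c '%a' /tmp/perm-demo.txt   # prints 644
chmod 755 /tmp/perm-demo.txt
stat -c '%a' /tmp/perm-demo.txt   # prints 755
```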

# chown — change ownership
chown alice file.txt             # Change owner
chown alice:staff file.txt       # Change owner and group
chown -R alice:staff ./dir/      # Recursive

# sudo — run as root
sudo command                     # Run with root privileges
sudo -u alice command            # Run as specific user

Compression & Archives

# tar — most common on Linux/Mac
tar -czf archive.tar.gz ./dir/   # Create compressed archive
tar -xzf archive.tar.gz          # Extract
tar -tzf archive.tar.gz          # List contents without extracting
tar -xzf archive.tar.gz -C /tmp/ # Extract to specific directory
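
A full create/list/extract roundtrip, using throwaway paths under /tmp:

```shell
# Roundtrip: create, list, extract (illustrative paths)
rm -rf /tmp/tar-demo
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/out
echo "hello" > /tmp/tar-demo/src/file.txt

tar -czf /tmp/tar-demo/archive.tar.gz -C /tmp/tar-demo src   # archive the src dir
tar -tzf /tmp/tar-demo/archive.tar.gz                        # lists src/ and src/file.txt
tar -xzf /tmp/tar-demo/archive.tar.gz -C /tmp/tar-demo/out   # extract elsewhere
cat /tmp/tar-demo/out/src/file.txt                           # prints hello
```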

# zip/unzip
zip -r archive.zip ./dir/        # Create zip
unzip archive.zip                # Extract zip
unzip -l archive.zip             # List contents

# gzip — single file compression
gzip large-file.log              # Compress (removes original, creates .gz)
gunzip large-file.log.gz         # Decompress
gzip -k file.log                 # Keep original while compressing
zcat file.log.gz                 # View compressed file without extracting (macOS: use gzcat)
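
A sketch of compressing while keeping the original, then reading the .gz back without extracting (throwaway file; zcat reads .gz on Linux, macOS uses gzcat):

```shell
# Compress, keep the original, then view the compressed copy
echo "compress me" > /tmp/gzip-demo.log
gzip -kf /tmp/gzip-demo.log     # -k keeps the original, -f overwrites any old .gz
zcat /tmp/gzip-demo.log.gz      # prints "compress me"
```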

Real-World Troubleshooting Scenarios

"Port already in use"

# Find what's using port 8080
lsof -i :8080                    # macOS / Linux
ss -tlnp | grep 8080             # Linux

# Kill it
kill -9 $(lsof -t -i :8080)     # macOS / Linux

"Disk is full"

df -h                                        # Find which filesystem is full
du -sh /var/log/* | sort -rh | head -20      # Find largest log files
find /var/log -name "*.log" -mtime +30 -delete  # Delete logs older than 30 days

"Server is slow — what’s eating CPU/memory?"

top                              # Interactive view, press M for memory sort
ps aux --sort=-%cpu | head -10   # Top CPU consumers (Linux)
ps aux -r | head -10             # Top CPU consumers (macOS)

"API returns unexpected error — check logs"

tail -f /var/log/app.log | grep "ERROR"
# Or for JSON logs:
tail -f /var/log/app.json | jq 'select(.level == "error")'

"Environment variable not set"

env                              # All environment variables
echo $PATH                       # Specific variable
printenv HOME                    # Alternative
export MY_VAR="value"            # Set for current session
echo 'export MY_VAR="value"' >> ~/.zshrc  # Persist across sessions
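
The difference between a plain assignment and export is whether child processes see the variable; a sketch with a hypothetical MY_DEMO_VAR:

```shell
MY_DEMO_VAR="hello"                     # plain assignment: shell-local only
sh -c 'echo "${MY_DEMO_VAR:-unset}"'    # prints "unset": the child never saw it

export MY_DEMO_VAR                      # now part of the environment
sh -c 'echo "${MY_DEMO_VAR:-unset}"'    # prints "hello": exported vars reach children
```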

Quick Reference

Task                      Command
----                      -------
Follow log                tail -f app.log
Search in files           grep -r "pattern" ./src/
Who owns port 8080        lsof -i :8080
Disk usage                df -h && du -sh *
Kill process by name      pkill -f "my-app"
Find large files          find / -size +1G 2>/dev/null
HTTP request              curl -X POST url -H "Content-Type: application/json" -d '{}'
Replace in file           sed -i 's/old/new/g' file.txt
Count lines matching      grep -c "ERROR" app.log
Extract column            awk -F',' '{print $2}' data.csv
