# Scaling

Scale applications horizontally and vertically.

## Application Scaling

Scale your applications to handle increased load.
### Vertical Scaling

Increase resources for a single instance.

#### Docker Container Scaling

```shell
# Increase memory limit
skyport docker update --memory 2g container-name

# Increase CPU
skyport docker update --cpus 2 container-name

# Both together
skyport docker update \
  --memory 4g \
  --cpus 4 \
  container-name

# Verify changes
skyport docker inspect container-name | grep -E "Memory|Cpu"
```
#### PM2 Process Scaling

```shell
# Increase instances
skyport pm2 scale app-name 4

# Decrease instances
skyport pm2 scale app-name 2

# View current instances
skyport pm2 describe app-name | grep instances
```
### Horizontal Scaling

Run multiple instances behind a load balancer.

#### Docker with Load Balancer

```shell
# Create a shared network
skyport docker network create lb-network

# Run multiple instances
for i in {1..3}; do
  skyport docker run -d \
    --name app-$i \
    --network lb-network \
    -e INSTANCE=$i \
    my-app:latest
done

# Run the load balancer (Nginx); mount the config read-only
skyport docker run -d \
  --name nginx-lb \
  --network lb-network \
  -p 80:80 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:latest
```
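To keep the Nginx upstream list in step with the number of app containers, the `upstream` block can be generated from the same count. A minimal sketch — the `app-N:3000` names match the loop above, while the `upstream.conf` output file is an assumption to adapt to wherever your Nginx config includes it:

```shell
# Generate an upstream block listing one server per app instance.
# app-N:3000 matches the container names started in the loop above.
n=3
{
  echo "upstream app {"
  for i in $(seq 1 "$n"); do
    echo "    server app-$i:3000;"
  done
  echo "}"
} > upstream.conf

cat upstream.conf
```

After changing the instance count, regenerate the file and reload Nginx (`nginx -s reload` inside the container).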
#### PM2 Cluster Mode

```shell
# Start in cluster mode with multiple instances
skyport pm2 start app.js -i 4 --name "api"

# Scale dynamically
skyport pm2 scale api 8

# Monitor the cluster
skyport pm2 list
```
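`-i 4` forks four workers; PM2 also accepts `-i max` (or `-i 0`) to start one worker per CPU core. The count it would use can be checked up front — a sketch assuming `nproc` is available on the host:

```shell
# One cluster worker per CPU core, as "-i max" would start
cores=$(nproc)
echo "cluster mode would start $cores workers"
```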
## Auto-Scaling Strategy

### Container-Based

Monitor metrics and scale when they cross thresholds such as:

- CPU usage > 70%
- Memory usage > 80%
- Sustained request rate above your per-instance baseline
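Those thresholds can drive a simple scale-up/scale-down decision. A minimal sketch — the 30%/40% scale-down floors are assumptions, not values from this page, and metric collection (e.g. from `skyport docker stats`) is assumed to happen elsewhere:

```shell
# Decide a scaling action from integer CPU and memory percentages.
decide_scale() {
  cpu=$1
  mem=$2
  if [ "$cpu" -gt 70 ] || [ "$mem" -gt 80 ]; then
    echo "scale-up"      # past the thresholds listed above
  elif [ "$cpu" -lt 30 ] && [ "$mem" -lt 40 ]; then
    echo "scale-down"    # comfortably idle (assumed floors)
  else
    echo "hold"
  fi
}

decide_scale 85 60   # → scale-up
decide_scale 20 30   # → scale-down
```

The action would then be applied with `skyport pm2 scale` or by starting/stopping containers.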
### Process-Based

Using PM2's built-in monitoring via an ecosystem file:

```javascript
module.exports = {
  apps: [
    {
      name: 'api',
      script: 'server.js',
      instances: 'max',           // one worker per CPU core
      exec_mode: 'cluster',
      max_memory_restart: '500M', // restart any worker exceeding 500 MB
    },
  ],
};
```
## Load Balancing

### Reverse Proxy Configuration

Nginx (a complete `nginx.conf` — since the file is mounted as the full config, the `upstream` and `server` directives must sit inside an `http` block, alongside an `events` block):

```nginx
events {}

http {
    upstream app {
        server app-1:3000;
        server app-2:3000;
        server app-3:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
        }
    }
}
```

Caddy:

```caddyfile
example.com {
    reverse_proxy localhost:3001 localhost:3002 localhost:3003
}
```
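Both proxies distribute requests round-robin by default: request i goes to backend i mod N. A sketch of that rotation — the backend names match the Nginx example, but the function itself is purely illustrative:

```shell
# Round-robin selection: request i -> backend (i mod N)
backends="app-1:3000 app-2:3000 app-3:3000"

pick_backend() {
  req=$1
  set -- $backends          # split the list into positional args
  shift $(( req % $# ))     # rotate by request number mod N
  echo "$1"
}

pick_backend 0   # → app-1:3000
pick_backend 4   # → app-2:3000
```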
## Database Scaling

### Connection Pooling

Use a connection pooler so many app instances share a small set of database connections:

```shell
# Start PgBouncer for PostgreSQL
# (the edoburu/pgbouncer image is configured via environment
# variables or a mounted pgbouncer.ini -- see the image's docs)
skyport docker run -d \
  --name pgbouncer \
  --network app-network \
  edoburu/pgbouncer:latest
```
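Pool behaviour is controlled by `pgbouncer.ini`. An illustrative fragment — the database name, upstream host, and pool sizes are assumptions to adapt, not defaults of the image:

```ini
[databases]
; route "appdb" through the pooler to the real server
appdb = host=postgres port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; transaction pooling gives the best connection reuse for web apps
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

The app then connects to `pgbouncer:6432` instead of PostgreSQL directly.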
### Read Replicas

For read-heavy, high-traffic applications, add read replicas and route read queries to them while writes go to the primary.
## Monitoring Scaled Applications

```shell
# Monitor all PM2 instances
skyport pm2 monit

# View stats for all containers
skyport docker stats

# View logs from all instances
skyport logs app-name -f

# View logs from a specific instance
skyport docker logs app-1 -f
```
## Session Persistence

In a scaled application any instance may serve the next request, so in-memory sessions break. Store sessions in a shared backend instead:

### Redis Session Store

```shell
# Start Redis
skyport docker run -d \
  --name redis \
  --network app-network \
  redis:latest

# Point the app at Redis, e.g. via an environment variable:
# REDIS_URL=redis://redis:6379
```
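Every instance reads the same `REDIS_URL`, so they all share one store. The host and port can be split out of that URL with plain parameter expansion — a sketch using the URL from the comment above; the `redis` hostname resolves over the shared Docker network:

```shell
REDIS_URL="redis://redis:6379"

# Strip the scheme, then split host and port
hostport=${REDIS_URL#redis://}
host=${hostport%%:*}
port=${hostport##*:}

echo "$host $port"   # → redis 6379
```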
## Cost Optimization
- Scale down during low traffic
- Use spot instances if available
- Monitor CPU/memory usage
- Remove unused containers
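The first point can be automated: `skyport pm2 scale` (shown earlier) runs fine from cron. Illustrative crontab entries — the schedule and instance counts are assumptions to match your own traffic pattern:

```crontab
# Scale the API down overnight and back up for the day
0 1 * * * skyport pm2 scale api 2
0 7 * * * skyport pm2 scale api 8
```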
Next: CI/CD Integration | Production Setup
