How to Set Up a Reverse Proxy with Nginx for Multiple Docker Containers
One of the most powerful VPS configurations is running multiple Docker containers, each serving a different application, behind a single Nginx reverse proxy: one IP address, one port 443, and as many applications as you need, each reachable at its own domain name. Nginx inspects the incoming Host header and routes each request to the correct container automatically.
This guide builds a complete Nginx reverse proxy setup for multiple Docker containers: domain-based routing, automatic SSL via Certbot, Docker network isolation, and health checks.
Architecture
Internet
│
▼
Nginx (port 80/443) ← One IP, handles all domains
│
├── app1.com → nodeapp container (port 3000)
├── api.mysite.com → flaskapp container (port 5000)
└── myblog.com → wordpress container (port 80)
Requirements
- Ubuntu VPS with Docker and Nginx installed
- Multiple domain names pointing to your VPS IP
- Certbot for SSL
Step 1: Create a Shared Docker Network
All containers that Nginx needs to reach must be on the same Docker network:
docker network create proxy-network
Step 2: Deploy Your Applications
Application 1 — Node.js API
docker run -d \
  --name nodeapp \
  --network proxy-network \
  --restart unless-stopped \
  -e NODE_ENV=production \
  myapp:latest
Notice: no -p port mapping. The container is only accessible from within the Docker network — not from the public internet. Nginx connects to it internally.
Application 2 — Python Flask API
docker run -d \
  --name flaskapp \
  --network proxy-network \
  --restart unless-stopped \
  myflask:latest
Application 3 — WordPress
# The "db" database container this points at is created in Step 7
docker run -d \
  --name wordpress \
  --network proxy-network \
  --restart unless-stopped \
  -e WORDPRESS_DB_HOST=db \
  -e WORDPRESS_DB_NAME=wp \
  -e WORDPRESS_DB_USER=wpuser \
  -e WORDPRESS_DB_PASSWORD=wppass \
  wordpress:latest
Step 3: Find Container Internal IPs or Use DNS Names
Containers on the same Docker network can reach each other by container name through Docker's embedded DNS, so no hard-coded IPs are needed:
# Containers resolve by name within the Docker network:
# http://nodeapp:3000
# http://flaskapp:5000
# http://wordpress:80
# Verify containers are on the proxy-network
docker network inspect proxy-network | grep -A3 "Containers"
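One caveat worth flagging: Docker's embedded DNS is only available to processes running inside containers attached to that network. Name-based upstreams like nodeapp:3000 therefore work when Nginx itself runs as a container on proxy-network; an Nginx installed directly on the host (as in this guide's apt/systemctl setup) cannot resolve container names. A common workaround for host Nginx, sketched here with the guide's example ports, is to publish each container's port on the loopback interface only and point the upstream at it:

```nginx
# Container started with: docker run -d --name nodeapp -p 127.0.0.1:3000:3000 ... myapp:latest
upstream nodeapp_backend {
    server 127.0.0.1:3000;   # bound to loopback: reachable by host Nginx, not the internet
    keepalive 32;
}
```

Binding to 127.0.0.1 keeps the isolation benefit described above: the port is never exposed on the public interface.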
Step 4: Configure Nginx Server Blocks
Create a configuration for each application:
sudo nano /etc/nginx/sites-available/nodeapp
upstream nodeapp_backend {
    server nodeapp:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name app1.com www.app1.com;

    location / {
        proxy_pass http://nodeapp_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
    }

    access_log /var/log/nginx/app1.access.log;
    error_log /var/log/nginx/app1.error.log;
}
sudo nano /etc/nginx/sites-available/flaskapp
upstream flask_backend {
    server flaskapp:5000;
}

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_pass http://flask_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
sudo nano /etc/nginx/sites-available/wordpress
upstream wordpress_backend {
    server wordpress:80;
}

server {
    listen 80;
    server_name myblog.com www.myblog.com;
    client_max_body_size 64M;

    location / {
        proxy_pass http://wordpress_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Important for WordPress admin
        proxy_set_header X-Forwarded-Host $host;
    }
}
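While editing the WordPress block, a common hardening addition is to cut off xmlrpc.php, a frequent brute-force target (optional; skip it if you rely on XML-RPC, e.g. for Jetpack):

```nginx
# Inside the WordPress server block: block a common brute-force target
location = /xmlrpc.php {
    deny all;
}
```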
Step 5: Enable All Sites
sudo ln -s /etc/nginx/sites-available/nodeapp /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/flaskapp /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Step 6: Add SSL to All Domains at Once
sudo apt install certbot python3-certbot-nginx -y
# Issue certificates for all domains in one command
sudo certbot --nginx \
-d app1.com -d www.app1.com \
-d api.mysite.com \
-d myblog.com -d www.myblog.com
Certbot automatically updates all three Nginx configs with HTTPS settings and HTTP-to-HTTPS redirects, and sets up automatic renewal via a systemd timer or cron job. ✅ Verify renewal works with a dry run:
sudo certbot renew --dry-run
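For reference, after Certbot runs, each server block typically gains an HTTPS listener along these lines (the paths follow Certbot's standard layout; exact output varies by version):

```nginx
server {
    listen 443 ssl;
    server_name app1.com www.app1.com;

    ssl_certificate /etc/letsencrypt/live/app1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # ...the original location / block is carried over unchanged...
}
```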
Step 7: Docker Compose for the Full Stack
Managing multiple containers with individual docker run commands gets unwieldy. Use Docker Compose instead:
nano /var/deployments/docker-compose.yml
networks:
  proxy-network:
    external: true  # Use the network we already created

services:
  nodeapp:
    image: myapp:latest
    container_name: nodeapp
    restart: unless-stopped
    networks:
      - proxy-network
    environment:
      - NODE_ENV=production
    env_file: .env.nodeapp

  flaskapp:
    image: myflask:latest
    container_name: flaskapp
    restart: unless-stopped
    networks:
      - proxy-network

  wordpress:
    image: wordpress:latest
    container_name: wordpress
    restart: unless-stopped
    networks:
      - proxy-network
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wp
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppass

  db:
    image: mariadb:10.11
    container_name: wordpress-db
    restart: unless-stopped
    networks:
      - proxy-network
    volumes:
      - wp_db_data:/var/lib/mysql
    environment:
      MYSQL_DATABASE: wp
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppass
      MYSQL_ROOT_PASSWORD: rootpass

volumes:
  wp_db_data:
cd /var/deployments
docker compose up -d
docker compose ps
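The monitoring script in Step 9 reads .State.Health.Status, which only exists when a container defines a healthcheck. One can be declared per service in Compose; this sketch assumes the image ships curl and exposes a /health endpoint (adjust both to your app):

```yaml
services:
  nodeapp:
    image: myapp:latest
    # ...same settings as above...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```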
Step 8: Rate Limiting Per Application
Apply different rate limits to different applications:
sudo nano /etc/nginx/nginx.conf
http {
    # Define rate limit zones (add inside the existing http block)
    limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
    limit_req_zone $binary_remote_addr zone=web:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=strict:10m rate=5r/m;
}

# In the Flask API server block (stricter limits for APIs)
location / {
    limit_req zone=api burst=10 nodelay;
    proxy_pass http://flask_backend;
}
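Two refinements worth knowing: limit_req_status changes the rejection code from the default 503 to the more accurate 429, and the stricter zone can be scoped to sensitive endpoints only (the /login path here is an illustration):

```nginx
location = /login {
    limit_req zone=strict burst=2 nodelay;
    limit_req_status 429;   # Too Many Requests instead of the default 503
    proxy_pass http://wordpress_backend;
}
```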
Step 9: Health Monitoring for All Containers
nano ~/check-containers.sh
#!/bin/bash
SERVICES=("nodeapp" "flaskapp" "wordpress" "wordpress-db")

for SERVICE in "${SERVICES[@]}"; do
  STATUS=$(docker inspect --format='{{.State.Status}}' "$SERVICE" 2>/dev/null)
  # HEALTH is empty when the container defines no healthcheck
  HEALTH=$(docker inspect --format='{{.State.Health.Status}}' "$SERVICE" 2>/dev/null)
  if [ "$STATUS" != "running" ]; then
    echo "⚠️ $SERVICE is ${STATUS:-missing} — restarting..."
    docker start "$SERVICE"
  elif [ "$HEALTH" == "unhealthy" ]; then
    echo "⚠️ $SERVICE is unhealthy — restarting..."
    docker restart "$SERVICE"
  else
    echo "✅ $SERVICE: running"
  fi
done
chmod +x ~/check-containers.sh
# Schedule every 5 minutes
crontab -e
# */5 * * * * /bin/bash /root/check-containers.sh >> /var/log/container-health.log 2>&1
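The restart decision in the script above can be factored into a pure function, which makes it testable without Docker (a refactoring sketch):

```shell
#!/bin/bash
# Pure decision logic extracted from the monitor script.
# Prints one of: start | restart | ok
decide_action() {
  local status="$1" health="$2"
  if [ "$status" != "running" ]; then
    echo "start"      # container stopped, exited, or missing
  elif [ "$health" = "unhealthy" ]; then
    echo "restart"    # running but failing its healthcheck
  else
    echo "ok"
  fi
}

# Wiring stays the same as the cron script:
#   STATUS=$(docker inspect --format='{{.State.Status}}' "$SERVICE" 2>/dev/null)
#   HEALTH=$(docker inspect --format='{{.State.Health.Status}}' "$SERVICE" 2>/dev/null)
#   ACTION=$(decide_action "$STATUS" "$HEALTH")
decide_action "exited" ""           # → start
decide_action "running" "unhealthy" # → restart
```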
Adding a New Application (Workflow)
The pattern for adding any new containerized application:
- Run the container on the proxy-network (no -p port exposure)
- Create an Nginx server block for the domain
- Enable it with a symlink in sites-enabled/
- Test with sudo nginx -t and reload Nginx
- Issue SSL with sudo certbot --nginx -d newdomain.com
The entire process takes under 5 minutes per new application.
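The workflow above lends itself to a small generator script. This sketch writes the server block to the current directory (the domain, container name, and port are placeholders); moving it into place, reloading, and running Certbot remain manual, privileged steps:

```shell
#!/bin/bash
# Generate an Nginx server block for a new containerized app (names are placeholders).
DOMAIN="newdomain.com"
CONTAINER="newapp"
PORT=7000
CONF="./${CONTAINER}.conf"   # written locally; move to /etc/nginx/sites-available/ afterwards

cat > "$CONF" <<EOF
upstream ${CONTAINER}_backend {
    server ${CONTAINER}:${PORT};
}

server {
    listen 80;
    server_name ${DOMAIN};

    location / {
        proxy_pass http://${CONTAINER}_backend;
        proxy_http_version 1.1;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;
    }
}
EOF

echo "Wrote $CONF"
# Next (manual): sudo mv "$CONF" /etc/nginx/sites-available/"$CONTAINER"
#                sudo ln -s /etc/nginx/sites-available/"$CONTAINER" /etc/nginx/sites-enabled/
#                sudo nginx -t && sudo systemctl reload nginx
#                sudo certbot --nginx -d "$DOMAIN"
```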
Final Thoughts
Nginx as a reverse proxy for Docker containers is one of the cleanest multi-application architectures available. Containers stay isolated on their internal network, Nginx handles all public-facing traffic and SSL, and adding new applications takes minutes. It scales from two containers to twenty without any architectural changes.