How to Deploy Docker Containers on a VPS: A Beginner-Friendly Guide

Docker is the most popular way to package and deploy applications on a VPS in 2025. Instead of manually installing dependencies, fighting version conflicts, and writing setup scripts, you bundle your entire app — code, runtime, libraries — into a container that runs identically on any server.

This guide covers everything from installing Docker to running a multi-service app with Docker Compose. No prior Docker experience needed.

Key Concepts Before We Start

📦 Image

A read-only blueprint for a container. Like a recipe — it defines what software is installed and how it runs. Downloaded from Docker Hub or built from a Dockerfile.
🚢 Container

A running instance of an image. Isolated, lightweight, and disposable. You can run dozens of containers from the same image simultaneously.
🎼 Docker Compose

A tool for defining and running multi-container apps. One YAML file describes all your services (app, database, cache) — start everything with one command.
🐳 Why Docker on a VPS? Docker lets you deploy any app — Node.js, Python, PHP, Go — without worrying about the server’s installed software. It also makes updates trivial: swap the image version and redeploy. No manual dependency management, no “it works on my machine” problems.

STEP 1: Install Docker on Ubuntu 22.04
Install Docker from the official Docker repository — this gives you the latest version with security patches, unlike the older version in Ubuntu’s default repos.
bash
# Remove old Docker versions if any
$ apt remove docker docker-engine docker.io containerd runc -y

# Install dependencies
$ apt update
$ apt install ca-certificates curl gnupg -y

# Add Docker's official GPG key
$ install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ chmod a+r /etc/apt/keyrings/docker.gpg

# Add Docker repository
$ echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
  | tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
$ apt update
$ apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
bash — verify installation
$ docker --version
# Docker version 26.x.x

# Run the hello-world test container
$ docker run hello-world

Allow non-root user to run Docker

bash
# Add your user to the docker group (no sudo needed)
$ usermod -aG docker $USER
$ newgrp docker

# Verify — should run without sudo
$ docker ps

STEP 2: Run Your First Container

Let’s deploy a real app — Nginx — as a Docker container to see how it works:

bash
# Pull the official Nginx image from Docker Hub
$ docker pull nginx

# Run it — map port 8080 on the host to port 80 inside the container
$ docker run -d \
  --name my-nginx \
  -p 8080:80 \
  nginx

# Check it's running
$ docker ps

Visit http://your-server-ip:8080 — you’ll see the Nginx welcome page served from inside a container.

Basic container management commands

bash
$ docker stop my-nginx      # Stop the container
$ docker start my-nginx     # Start it again
$ docker restart my-nginx   # Restart
$ docker rm my-nginx        # Remove container (must be stopped)
$ docker logs my-nginx      # View container logs
$ docker exec -it my-nginx bash  # Open a shell inside the container

STEP 3: Ports, Volumes & Environment Variables
Essential concepts

These three flags are the building blocks of almost every docker run command you’ll ever write:

-p: Port Mapping (Host:Container)

bash
# -p HOST_PORT:CONTAINER_PORT
$ docker run -p 3000:3000 my-node-app

# Bind to a specific IP (more secure — only localhost can reach it)
$ docker run -p 127.0.0.1:3000:3000 my-node-app
🔒 For production, bind to 127.0.0.1 so the container port is only accessible from localhost — then use Nginx as a reverse proxy (covered in our previous guide) to expose it publicly over HTTPS.
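As a sketch, a host-level Nginx server block for that pattern might look like the following. The domain app.example.com is a placeholder, and the upstream port assumes the app was started with -p 127.0.0.1:3000:3000 as above:

```nginx
# /etc/nginx/sites-available/app.example.com (hypothetical domain)
server {
    listen 80;
    server_name app.example.com;

    location / {
        # Forward public traffic to the container bound to localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Enable the site, check the config with nginx -t, reload Nginx, and add HTTPS with Certbot as described in the reverse-proxy guide.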

-v: Volume Mounting (Persistent Data)

bash
# Mount a host directory into the container
# -v /host/path:/container/path
$ docker run -v /var/www/html:/usr/share/nginx/html nginx

# Named volume (Docker manages the storage location)
$ docker run -v my-db-data:/var/lib/mysql mysql

-e: Environment Variables

bash
# Pass configuration to containers without hardcoding secrets
$ docker run -d \
  -e DB_HOST=localhost \
  -e DB_PASSWORD=secret123 \
  -e NODE_ENV=production \
  my-app

STEP 4: Install Docker Compose
Docker Compose lets you define multiple containers in a single docker-compose.yml file and manage them as a unit. It’s essential for running real applications that need a database, cache, or background worker alongside the main app.
bash
# Docker Compose V2 is included with the Docker installation
# If you installed Docker via the official repo, it's already there:
$ docker compose version
# Docker Compose version v2.x.x

# If not installed, add the plugin manually:
$ apt install docker-compose-plugin -y
⚠️ Docker Compose V2 uses docker compose (no hyphen). The older V1 used docker-compose (with hyphen). This guide uses V2 syntax throughout.

STEP 5: Deploy a Multi-Service App with Docker Compose
Here’s a real-world example: a Node.js app + PostgreSQL database + Redis cache, all defined in one Compose file. This is the kind of stack that powers most production web applications.

yaml — docker-compose.yml
version: '3.9'   # Optional under Compose V2 (the version key is ignored)
services:

  app:
    image: node:20-alpine
    container_name: my-app
    working_dir: /app
    volumes:
      - ./app:/app          # Mount local code into container
    ports:
      - "127.0.0.1:3000:3000"  # Expose only to localhost
    environment:
      - NODE_ENV=production
      - DB_HOST=postgres      # Service name = hostname
      - DB_PASSWORD=${DB_PASSWORD}  # Read from .env file
      - REDIS_HOST=redis
    command: node server.js
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    networks:
      - app-network

  postgres:
    image: postgres:16-alpine
    container_name: my-postgres
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data  # Persist DB data
    restart: unless-stopped
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    container_name: my-redis
    restart: unless-stopped
    networks:
      - app-network

volumes:
  postgres-data:   # Named volume: data persists even when containers are removed and recreated

networks:
  app-network:
    driver: bridge

Create a .env file in the same directory to store secrets:

.env
DB_PASSWORD=your-super-secret-password
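Because this file holds a real password, it is worth restricting it so only your user can read it — a standard precaution, not something specific to Docker:

```shell
# Create the .env file and lock it down to the owner only
echo 'DB_PASSWORD=your-super-secret-password' > .env
chmod 600 .env

# Confirm the permissions: -rw------- means only the owner can read/write
ls -l .env
```

If the project lives in a Git repository, also add .env to .gitignore so the password never gets committed.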

Start, stop, and manage your stack

bash
# Start all services in the background
$ docker compose up -d

# View running services
$ docker compose ps

# View logs (all services, or one)
$ docker compose logs -f
$ docker compose logs -f app

# Stop all services
$ docker compose down

# Stop and remove volumes (⚠️ deletes DB data)
$ docker compose down -v
Services on the same Docker network can reach each other using the service name as a hostname. Your Node.js app connects to PostgreSQL at postgres:5432 — no IP addresses needed.
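One caveat: depends_on only controls start order — it does not wait for PostgreSQL to be ready to accept connections. If your app crashes on first boot because the database is still initializing, you can add a healthcheck and gate the app on it. A sketch (adjust the check command to your setup):

```yaml
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  app:
    # ... rest of the app service as above ...
    depends_on:
      postgres:
        condition: service_healthy   # Wait until the healthcheck passes
```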

STEP 6: Keep Containers Running After Reboot
Production must-do

By default, Docker containers stop when your server reboots. There are two ways to fix this:

Option A: restart policy (simplest)

Add restart: unless-stopped to each service in your Compose file (already included in the example above). This restarts containers automatically after a reboot or crash.

Option B: Enable Docker to start on boot

bash
# Enable Docker daemon to start on system boot
$ systemctl enable docker

# Verify
$ systemctl is-enabled docker
# enabled

With Docker enabled on boot and restart: unless-stopped in your Compose file, your entire stack will automatically come back online after any server restart.

REFERENCE: Essential Docker Commands Cheat Sheet
Bookmark this
Command                          What it Does
docker ps                        List running containers
docker ps -a                     List all containers (including stopped)
docker images                    List downloaded images
docker pull nginx                Download an image from Docker Hub
docker logs -f <name>            Stream container logs in real time
docker exec -it <name> bash      Open an interactive shell in a container
docker stats                     Live CPU/RAM usage per container
docker system prune              Remove stopped containers and unused images
docker compose up -d             Start all Compose services in the background
docker compose pull              Pull the latest images for all services
docker compose restart           Restart all services
docker volume ls                 List all volumes

✅ Production Docker Deployment Checklist

Docker installed from official repo (not Ubuntu’s default package)

Non-root user added to docker group

Docker daemon enabled on boot (systemctl enable docker)

App ports bound to 127.0.0.1 — not exposed directly to the internet

Nginx reverse proxy fronting all containers (HTTPS on port 443)

Secrets stored in .env file, not hardcoded in docker-compose.yml

Named volumes used for all persistent data (databases, uploads)

restart: unless-stopped on all production services

Regular volume backups configured

docker system prune scheduled weekly to reclaim disk space
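For the last item, one possible schedule is a root crontab entry (edit with crontab -e as root). The -f flag skips the interactive confirmation prompt, which is required when running unattended:

```
# Run every Sunday at 03:00; log output for later inspection
0 3 * * 0 docker system prune -f > /var/log/docker-prune.log 2>&1
```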

Frequently Asked Questions

How much RAM does Docker itself use?
Docker’s daemon (dockerd) uses about 30–80 MB of RAM on its own. The RAM consumed by your containers depends entirely on what’s running inside them. A Node.js container might use 100–200 MB; a full Postgres database might use 200–500 MB. A 2 GB VPS can comfortably run 3–5 lightweight containerized services.
Should I use Docker or install apps directly on the VPS?
Docker is better for most production deployments: it guarantees consistent behavior across environments, makes updates and rollbacks trivial, and isolates apps from each other. The main downside is a small performance overhead (usually negligible) and the need to understand Docker concepts. For simple single-app setups, direct installation is fine. For anything more complex, Docker pays for itself quickly.
How do I update a running container to a new image version?
Pull the new image, then recreate the container. With Docker Compose: docker compose pull && docker compose up -d. This pulls the latest images and recreates only the containers whose images have changed — with minimal downtime.
How do I back up Docker volumes (database data)?
Use docker run --rm -v postgres-data:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data to create a compressed archive of a named volume. For PostgreSQL specifically, use docker exec my-postgres pg_dumpall -U postgres > backup.sql for a proper SQL dump.
Can I run Docker alongside a non-Docker Nginx installation?
Yes, and this is actually a recommended pattern. Install Nginx directly on the host (not in a container) to handle SSL termination and reverse proxying on ports 80 and 443. Then run your application containers bound to localhost ports. Nginx proxies public HTTPS traffic to the containers — clean separation of concerns.

🐳 You’re Ready to Ship with Docker

Docker transforms your VPS from a bare server into a flexible deployment platform. You can now spin up any application in minutes, run multiple services side by side without conflicts, and update or roll back deployments with a single command.

Next steps: write a Dockerfile to containerize your own application, set up a CI/CD pipeline (GitHub Actions → Docker Hub → VPS deploy), and explore Docker Swarm or Portainer for a management UI if you prefer not working from the command line.
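As a starting point for that first step, here is a minimal Dockerfile sketch for a Node.js app. It assumes the entry point is server.js and a standard package.json with a lockfile — adjust names and ports to your project:

```dockerfile
# Minimal Node.js image (assumes server.js and package.json exist)
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the application code
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Build it with docker build -t my-app . and run it with docker run -d -p 127.0.0.1:3000:3000 my-app, following the same localhost-binding pattern used throughout this guide.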


Fast • Reliable • Affordable VPS - DO It Now!

Get top VPS hosting with VPS.DO’s fast, low-cost plans. Try risk-free with our 7-day no-questions-asked refund and start today!