Install Docker on Linux: A Beginner’s Fast, Step‑by‑Step Installation Guide
Ready to install Docker on Linux and start running lightweight, consistent apps on your VPS? This fast, step-by-step guide walks through prerequisites, kernel and storage tips, and post-install tweaks so you can deploy containers reliably in minutes.
Installing a container runtime on a Linux VPS is one of the first steps toward modern application deployment. This guide walks you through a fast, practical, and technically detailed installation and configuration process so you can run containers reliably on production or development systems. The instructions cover prerequisites, repository installation, storage and kernel considerations, post-installation configuration, and common troubleshooting tips to help system administrators, developers, and site owners get up and running quickly.
Why containers and what the runtime does
Containers package an application and its dependencies into a lightweight, portable unit that runs on a shared kernel. The container runtime (daemon) manages container lifecycle, image storage, networking namespaces, and resource isolation using kernel features like namespaces and cgroups. When you install the runtime on a VPS, you get:
- Fast startup: containers start in milliseconds compared to VMs.
- Consistent environments: images ensure the same dependencies across environments.
- Efficient resource usage: multiple containers share the host kernel and layers.
Understanding the runtime’s integration with the kernel and storage stack helps avoid common pitfalls (for example, kernel versions, cgroup modes, and storage drivers) that can impact stability on VPS environments.
Prerequisites and kernel considerations
Before installing, verify the environment:
- Kernel version: ideally >= 3.10; for modern features and best compatibility use a recent 4.x or 5.x kernel. Check with uname -r.
- cgroups: many distributions are moving to cgroup v2. Check which version the system uses; some runtimes and orchestration stacks expect cgroup v1. Use stat -fc %T /sys/fs/cgroup or inspect /proc/cgroups.
- Virtualization: KVM/QEMU-based VPSs are generally preferred; some container features (like nested namespaces) can be limited on container-hosted VPS platforms.
- Disk and storage: ensure you have enough disk space (images can grow). Prefer SSD-backed block storage for performance.
On some VPS providers, kernel upgrades and features are controlled by the host. If you encounter missing kernel features, check your provider’s documentation or choose a plan with full kernel control (for example, a KVM-based VPS).
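The prerequisite checks above can be run in one pass. The sketch below (the script name check-docker-prereqs.sh is illustrative) assumes a normal Linux host with /proc and /sys mounted and GNU coreutils available:

```shell
#!/usr/bin/env bash
# check-docker-prereqs.sh -- a minimal sketch of the pre-install checks.

check_prereqs() {
    # Kernel release; ideally a recent 4.x/5.x kernel.
    echo "Kernel: $(uname -r)"

    # cgroup v2 mounts a single unified hierarchy of type cgroup2fs.
    local cgfs
    cgfs=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)
    if [ "$cgfs" = "cgroup2fs" ]; then
        echo "cgroups: v2 (unified hierarchy)"
    else
        echo "cgroups: v1 or hybrid (fs type: $cgfs)"
    fi

    # Rough free-space check; images live under /var/lib/docker by default.
    echo "Free space on /var: $(df -h --output=avail /var | tail -1 | tr -d ' ')"
}

check_prereqs
```

Run it before installing; if the cgroup line reports v1 on a distribution you expected to be v2 (or vice versa), check your provider's kernel documentation first.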
Installing from the official repository (Ubuntu / Debian)
Using the vendor’s official repository ensures you get timely updates and the recommended storage drivers. The high-level steps are:
- Update the package index: apt update.
- Install prerequisites: apt install ca-certificates curl gnupg lsb-release.
- Add the official GPG key and repository: use the distribution codename in the apt source entry.
- Install the engine and CLI packages (package names depend on the vendor packaging).
- Enable and start the daemon with systemd: systemctl enable --now docker.service docker.socket.
After installation, verify with a simple container run: docker run --rm hello-world. If that succeeds, the daemon can pull images and run containers.
Key commands (Ubuntu / Debian)
Typical manual steps include adding the repository and installing packages. If you prefer a one-line script, use the official convenience script, but for production servers it’s better to add the repository explicitly so you control updates:
- Add GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- Add repo: echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
- Install: sudo apt update && sudo apt install docker-ce docker-ce-cli containerd.io
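The steps above can be combined into one reviewable script (install-docker-apt.sh is a hypothetical name). As a safety guard for this sketch, it only executes when run with APPLY=1; by default it prints a dry-run notice:

```shell
#!/usr/bin/env bash
# install-docker-apt.sh -- sketch of the Ubuntu/Debian repository install.
# Review first, then run with: sudo APPLY=1 ./install-docker-apt.sh
set -euo pipefail

install_docker() {
    apt-get update
    apt-get install -y ca-certificates curl gnupg lsb-release
    install -m 0755 -d /usr/share/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
        | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
        | tee /etc/apt/sources.list.d/docker.list >/dev/null
    apt-get update
    apt-get install -y docker-ce docker-ce-cli containerd.io
    systemctl enable --now docker.service docker.socket
}

if [ "${APPLY:-0}" = "1" ]; then
    install_docker
else
    echo "Dry run: set APPLY=1 to install (requires root)."
fi
```

The dry-run guard is a convenience for copy-paste safety, not part of any official tooling; strip it if you prefer plain commands.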
Installing on CentOS / RHEL / Fedora
For Red Hat-based systems, use the official repository and package manager. On RHEL/CentOS streams you may need to manage SELinux and firewall settings.
- Install prerequisites: yum install -y yum-utils
- Add repo: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Install: yum install docker-ce docker-ce-cli containerd.io
- Start and enable: systemctl enable --now docker
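The Red Hat steps above can likewise be combined into one reviewable sketch (install-docker-rhel.sh is a hypothetical name), again guarded so it only executes with APPLY=1:

```shell
#!/usr/bin/env bash
# install-docker-rhel.sh -- sketch of the CentOS/RHEL repository install.
# Review first, then run with: sudo APPLY=1 ./install-docker-rhel.sh
set -euo pipefail

install_docker() {
    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce docker-ce-cli containerd.io
    systemctl enable --now docker
}

if [ "${APPLY:-0}" = "1" ]; then
    install_docker
else
    echo "Dry run: set APPLY=1 to install (requires root)."
fi
```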
If SELinux is enabled (enforcing), note that the official packages are built to work with it. Do not disable SELinux as a first resort; instead, read the auditd logs and set the correct file contexts for volumes.
Storage drivers and performance tuning
Storage drivers determine how image layers and writable container layers are implemented. The most common options:
- overlay2 – recommended for modern kernels and ext4/xfs filesystems; offers the best performance and lowest overhead.
- aufs – older driver, not available in all kernels.
- devicemapper – used historically; only use if overlay2 cannot be supported and configure it in direct-lvm mode for stability.
To ensure overlay2 works, use an appropriate backing filesystem (ext4, or xfs formatted with ftype=1) and a kernel that supports overlay. On VPS systems using network-backed filesystems, overlay might not be supported; in that case, test and choose a supported driver, or move container storage to local block devices.
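These compatibility checks can be scripted before you commit to overlay2. A sketch (check-overlay.sh is an illustrative name; it inspects /proc/filesystems and the filesystem type under /var/lib):

```shell
#!/usr/bin/env bash
# check-overlay.sh -- sketch of the overlay2 compatibility checks.

check_overlay() {
    # /proc/filesystems lists overlay once the module is built in or loaded.
    if grep -qw overlay /proc/filesystems; then
        echo "overlay kernel support: yes"
    else
        echo "overlay kernel support: not listed (modprobe overlay may load it)"
    fi
    # ext4 is fine by default; xfs additionally needs ftype=1 (check with xfs_info).
    echo "backing filesystem under /var/lib: $(stat -fc %T /var/lib 2>/dev/null || echo unknown)"
}

check_overlay
```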
Tuning options
- Configure log rotation for container logs to avoid filling disk: use daemon.json to set "log-opts": {"max-size":"10m","max-file":"3"}.
- Limit container resources with --memory and --cpus to avoid noisy neighbor issues on multi-tenant VPSs.
- Use thin-provisioned block devices carefully; monitor I/O and consider adding separate volumes for heavy storage workloads.
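The daemon-level settings above live in /etc/docker/daemon.json; a minimal example combining log rotation with an explicit storage driver (the storage-driver key is optional, since overlay2 is the default on modern kernels). Restart the daemon with systemctl restart docker after editing:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2"
}
```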
Security and user configuration
By default, the runtime daemon listens on a Unix socket owned by root. To allow non-root users to run containers, add them to the docker group: sudo usermod -aG docker username. Be aware that this effectively grants root-equivalent privileges because containers can escalate; use with caution.
For improved security, consider:
- Running rootless mode so the daemon and containers run under an unprivileged user (good for shared environments).
- Enabling user namespaces to map container root to an unprivileged host range.
- Using security profiles like seccomp and AppArmor (where available) to restrict syscalls.
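User-namespace remapping from the list above is also a daemon.json setting; with the value "default", the daemon creates a dockremap user and maps container root onto that user's subordinate UID/GID ranges:

```json
{
  "userns-remap": "default"
}
```

Note that enabling this changes image and volume ownership paths under /var/lib/docker, so test it before applying on a host with existing containers.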
Networking basics and common pitfalls
The runtime creates a bridge network (docker0) by default and uses NAT for outbound connectivity. For production:
- Use macvlan for containers that need direct LAN access, or create user-defined bridge networks for service segmentation.
- Open necessary ports in the VPS firewall (ufw, firewalld, or iptables) and avoid exposing the Docker daemon API over TCP without TLS.
On cloud VPSs, be mindful of provider-level firewall rules (security groups) in addition to the host firewall.
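Opening a published port in the host firewall can be wrapped in a small guard so it fails gracefully when ufw is absent or you are not root (open-port.sh and the port 8080 are illustrative; substitute your published port):

```shell
#!/usr/bin/env bash
# open-port.sh -- sketch: open a published container port in ufw.

open_port() {
    if ! command -v ufw >/dev/null 2>&1; then
        echo "ufw not installed; use firewalld or iptables rules instead"
        return 0
    fi
    if [ "$(id -u)" -ne 0 ]; then
        echo "re-run as root to change firewall rules"
        return 0
    fi
    ufw allow 8080/tcp
    echo "port 8080/tcp opened"
}

open_port
```

Remember that on cloud VPSs the provider's security groups must allow the port as well.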
Useful post-installation commands and testing
After installation and starting the service, run these checks:
- Check daemon status: systemctl status docker
- Verify runtime info: docker info (shows storage driver, cgroup driver, kernel, and plugins)
- Run a test image: docker run --rm -it busybox sh or docker run --rm hello-world
- Pull and inspect images: docker pull nginx; docker images; docker run -d --name mynginx -p 8080:80 nginx
Use docker info to verify the chosen storage driver and cgroup driver; mismatches between Kubernetes and Docker cgroup drivers can cause scheduling problems if you later adopt orchestration.
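The driver checks can be pulled out of docker info with Go templates. A sketch (verify-docker.sh is an illustrative name) that degrades gracefully when the CLI or daemon is unavailable:

```shell
#!/usr/bin/env bash
# verify-docker.sh -- post-install driver checks with graceful fallback.

verify_docker() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker CLI not found"
        return 0
    fi
    if ! docker info >/dev/null 2>&1; then
        echo "docker daemon not reachable (is the service running?)"
        return 0
    fi
    # --format extracts single fields from `docker info`.
    echo "Storage driver: $(docker info --format '{{.Driver}}')"
    echo "Cgroup driver:  $(docker info --format '{{.CgroupDriver}}')"
    echo "Cgroup version: $(docker info --format '{{.CgroupVersion}}')"
}

verify_docker
```

If you plan to adopt Kubernetes later, record the cgroup driver value now and keep kubelet configured to match.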
Maintenance, updates, and housekeeping
Manage growth of images and stopped containers with routine cleanup:
- Remove unused containers and images: docker system prune -a (use carefully).
- Prune volumes and networks explicitly when safe: docker volume prune and docker network prune.
- Schedule regular package updates for the runtime and containerd from the repository you added.
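The cleanup routine above can be made safe to re-run from cron or a systemd timer. A sketch (docker-housekeeping.sh is an illustrative name; the 168h retention window is an assumption, tune it to your workload):

```shell
#!/usr/bin/env bash
# docker-housekeeping.sh -- sketch of routine container/image cleanup.

cleanup() {
    if ! docker info >/dev/null 2>&1; then
        echo "docker daemon not reachable; skipping cleanup"
        return 0
    fi
    # Remove stopped containers and unused networks.
    docker container prune -f
    docker network prune -f
    # Remove unused images, but only those older than a week (168h).
    docker image prune -af --filter "until=168h"
    echo "cleanup complete"
}

cleanup
```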
On VPS instances, snapshots and backups of important volumes are recommended before major upgrades. For transactional workloads, consider immutable deployments with images and orchestrators to minimize configuration drift.
Advantages compared with alternatives
The container runtime approach offers:
- Widespread ecosystem: rich tooling (images, registries, orchestration) and community support.
- Speed and portability: images run the same on your laptop, CI, and VPS.
- Flexibility: support for multiple storage and networking plugins.
Alternatives like Podman offer daemonless operation and better rootless defaults; however, the Docker ecosystem remains dominant for many CI/CD pipelines and commercial tools. Evaluate the trade-offs: if you need daemonless operation and SELinux-first designs, test Podman; if you require integrations with specific orchestration tools that expect Docker-compatible APIs, the runtime in this guide is often the simpler choice.
Selecting a VPS for container workloads
When choosing a VPS plan for container workloads, consider:
- Virtualization type: KVM/QEMU gives full kernel control and predictable isolation. Container-hosted VPSs can have kernel limitations.
- Disk type: SSD-backed block storage or NVMe for fast image pull and container I/O.
- Memory and CPU: allocate enough RAM and CPU for your containerized applications and consider CPU pinning on heavy workloads.
- Network: predictable bandwidth and public IPs if you need external access to services.
For developers and small production services, a VPS with modest CPU and SSD disks is often sufficient. For multi-service deployments or orchestration, scale to multiple VPS nodes or consider a managed Kubernetes service.
Summary and recommended next steps
Installing a container runtime on Linux is straightforward when you follow repository-based installation and validate kernel, cgroup, and storage driver compatibility. After installation, enable the service, add non-root users cautiously, and test with simple images. Tune storage, logging, and resource limits for your workload, and establish housekeeping routines to manage disk growth.
If you need a reliable VPS to host container workloads, look for KVM-based plans with SSD storage and predictable networking. For example, VPS.DO provides a range of options—if you want to try a U.S.-based VPS, check the USA VPS plans here: https://vps.do/usa/. Evaluating a small test instance is a good way to validate kernel and storage compatibility before moving to production.