Install Kubernetes Nodes on a VPS: A Practical Step-by-Step Guide

Ready to run Kubernetes on a VPS? This practical, step-by-step guide shows site admins and DevOps engineers how to prepare servers, choose a container runtime and network plugin, and provision control-plane and worker nodes for a stable, repeatable cluster.

Deploying Kubernetes clusters on virtual private servers (VPS) is a practical approach for small teams, development environments, and even production workloads when properly sized. This guide walks through the technical principles and step-by-step procedures to install Kubernetes nodes on a VPS, covering prerequisites, container runtimes, networking, security considerations, and operational tips. It is written for site administrators, DevOps engineers, and developers who want a reliable, repeatable setup on a VPS platform.

Introduction

Kubernetes provides powerful orchestration for containerized applications but requires careful preparation of the underlying nodes. When using VPS instances, you control the operating system, kernel settings, and networking — all of which affect cluster stability and performance. In the sections that follow, you will find both the underlying principles and a practical, reproducible sequence of steps to provision and join Kubernetes control-plane and worker nodes on VPS machines.

Principles: What Kubernetes Needs from a VPS

Before installing software, understand the core requirements Kubernetes imposes on a host:

  • Kernel and networking capabilities: Kubernetes uses Linux networking features — namespaces, iptables, and routing — so the VPS must expose these capabilities. Check that the hypervisor does not restrict required sysctls such as net.ipv4.ip_forward.
  • Swap disabled: kubelet expects swap to be off to make memory scheduling deterministic.
  • Container runtime: Kubernetes requires a container runtime such as containerd or CRI-O. Using containerd is common and well supported.
  • Consistent time and DNS: NTP or systemd-timesyncd for clock sync and a reliable DNS resolver are necessary for cluster components to communicate reliably.
  • Network connectivity between nodes: All nodes must be able to reach each other on required ports (API server, kubelet, etcd if external).
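These host requirements can be spot-checked before any Kubernetes packages are installed. A minimal sketch in POSIX shell; the preflight helper is hypothetical and takes its inputs as arguments so the logic can be exercised anywhere, while the comment shows how you would feed it live values on a real node:

```shell
# Hypothetical preflight helper: report whether swap is off and IP
# forwarding is on, the two host settings kubeadm checks most often.
preflight() {
  swap_lines="$1"; ip_forward="$2"
  if [ "$swap_lines" = "0" ]; then echo "swap: off"; else echo "swap: STILL ENABLED"; fi
  if [ "$ip_forward" = "1" ]; then echo "ip_forward: ok"; else echo "ip_forward: DISABLED"; fi
}

# On a real VPS you would run:
#   preflight "$(swapon --show --noheadings | wc -l)" "$(sysctl -n net.ipv4.ip_forward)"
preflight 0 1
```

Running the same checks on every node before installation catches hypervisor restrictions early, when they are cheapest to fix.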

VPS-specific considerations

When selecting VPS for Kubernetes nodes, verify:

  • Virtualization type (KVM is preferable to container-based virtualization for full kernel feature access).
  • CPU and memory allocation — control-plane nodes need more memory for etcd and the API server; plan on at least 2 vCPUs and 4GB RAM, and treat 4GB as a bare minimum rather than a comfortable target.
  • Disk I/O and size — container images and volumes can grow quickly; use SSD-backed storage.
  • Networking — public IPs or private networks for inter-node communication; low-latency, high-throughput links are beneficial for production clusters.

When to Run Kubernetes on VPS: Use Cases

VPS-hosted Kubernetes clusters are ideal for:

  • Development and CI pipelines: lightweight clusters for testing application deployments.
  • Small to medium production workloads: when you control node sizing and performance and don’t require managed Kubernetes features.
  • Edge or regional deployments: VPS providers with multiple datacenter locations allow deploying clusters closer to users.

Advantages and Trade-offs Compared to Managed Kubernetes

Understanding trade-offs helps decide whether to self-manage Kubernetes or use a managed service.

  • Advantages: Full control over OS, kernel, and runtime; ability to tune systems; often lower cost at small scale; flexible networking and storage choices.
  • Disadvantages: Operational overhead (upgrades, backups, high-availability), security patching, and monitoring must be handled by your team. Managed services reduce operational burden but limit low-level configuration.

Step-by-Step: Install Kubernetes Nodes on a VPS

The steps below assume an Ubuntu 22.04 LTS base, but the principles apply to other Linux distributions with minor adjustments.

1. Provision VPS instances

Create at least two instances for a minimal cluster: one control-plane (master) and one worker. For production, use three control-plane nodes for high availability. Ensure each VPS has a reachable IP, SSH access, and open ports as needed.

2. Prepare the OS

On each node, perform the following preparations:

  • Update packages: “sudo apt update && sudo apt upgrade -y”.
  • Disable swap: “sudo swapoff -a” and remove any swap entry from /etc/fstab to make it permanent. Kubernetes will refuse to run with swap enabled by default.
  • Load the br_netfilter kernel module and set the required sysctls. The net.bridge.* keys only exist once br_netfilter is loaded, so load it first and persist it across reboots:

    “echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    sudo modprobe br_netfilter
    sudo tee /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system”

  • Install required packages: “sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release”.
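The preparation steps above can be collected into one reviewable script. A sketch using a hypothetical run wrapper that, with the default DRY_RUN=1, only prints the privileged commands; on a real VPS you would execute it as root with DRY_RUN=0:

```shell
# Hypothetical dry-run wrapper: print privileged commands by default so
# the sequence can be reviewed before it touches a real node.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run apt-get update
run apt-get upgrade -y
# Disable swap now and comment out (rather than delete) fstab entries.
run swapoff -a
run sed -i '/ swap / s/^/#/' /etc/fstab
run sysctl --system
```

Keeping node preparation in a script makes the setup repeatable when you later add worker nodes.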

3. Install containerd

containerd is a lightweight, Kubernetes-compatible runtime. Steps:

  • Add Docker’s official GPG key and repository or use distribution packages. Example quick install (Ubuntu):

    “sudo apt install -y containerd”

  • Create a default config, switch it to the systemd cgroup driver (the generated file ships with SystemdCgroup = false, but kubelet expects the systemd driver on systemd-based distros), and restart: “sudo mkdir -p /etc/containerd && sudo containerd config default | sudo tee /etc/containerd/config.toml”, then “sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml && sudo systemctl restart containerd”.
  • Confirm containerd is running: “sudo systemctl status containerd”.
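One containerd pitfall is worth verifying explicitly: on systemd-based distributions such as Ubuntu 22.04, a config.toml left at SystemdCgroup = false commonly causes kubelet instability. A small, hypothetical checker, demonstrated here on a scratch file rather than the real /etc/containerd/config.toml:

```shell
# Hypothetical helper: report whether a containerd config enables the
# systemd cgroup driver that kubelet expects on systemd distros.
check_cgroup_driver() {
  if grep -q 'SystemdCgroup = true' "$1"; then
    echo "systemd cgroup driver: enabled"
  else
    echo "systemd cgroup driver: DISABLED (kubelet may be unstable on cgroup v2)"
  fi
}

# Demo on a scratch file; on a real node pass /etc/containerd/config.toml.
cfg=$(mktemp)
echo 'SystemdCgroup = false' > "$cfg"
check_cgroup_driver "$cfg"
```

Symptoms of a cgroup-driver mismatch (pods restarting, kubelet errors about cgroups) are confusing after the fact, so a one-line check up front is cheap insurance.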

4. Install kubeadm, kubelet, and kubectl

Add the Kubernetes apt repository and install the packages:

  • Download the package signing key (the legacy apt.kubernetes.io / packages.cloud.google.com repository has been shut down; substitute your target minor release for v1.30): “sudo mkdir -p /etc/apt/keyrings && curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg”
  • Add the repository: “echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list”
  • “sudo apt update && sudo apt install -y kubelet kubeadm kubectl && sudo apt-mark hold kubelet kubeadm kubectl”
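The three packages have documented version-skew limits, and keeping them at the same minor version stays safely inside them. A sketch of a hypothetical same_minor helper for comparing "major.minor.patch" strings; the comment shows how you might feed it live versions on a node:

```shell
# Hypothetical helper: do two semantic versions share a minor release?
same_minor() {
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ]
}

# On a real node, something like:
#   same_minor "$(kubeadm version -o short | sed 's/^v//')" \
#              "$(kubelet --version | awk '{print $2}' | sed 's/^v//')"
same_minor 1.30.2 1.30.4 && echo "minor versions match"
same_minor 1.30.2 1.29.7 || echo "minor versions differ"
```

The apt-mark hold in the step above exists for the same reason: an unattended upgrade of only one of the three packages is a classic way to drift out of the supported skew.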

5. Initialize the control plane

On the designated control-plane node, choose a pod network CIDR (for example, 10.244.0.0/16 for Flannel or 192.168.0.0/16 depending on CNI requirements). Initialize with kubeadm:

  • “sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<CONTROL_PLANE_IP_OR_DNS>” (--control-plane-endpoint is optional for a single control-plane node, but set it at init time if you may add more control-plane nodes later)
  • After successful initialization, set up kubeconfig for the admin user:

    “mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config”

  • Save the kubeadm join command printed at the end — it contains the token and discovery token CA cert hash required to join worker nodes.
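Note that bootstrap tokens expire (24 hours by default), so a saved join command can go stale; if a later join fails with an authentication error, mint a fresh command on the control plane with “kubeadm token create --print-join-command”. A hypothetical format check for a saved token, based on the documented abcdef.0123456789abcdef token shape:

```shell
# Hypothetical helper: check the kubeadm bootstrap token shape,
# 6 lowercase alphanumerics, a dot, then 16 more.
valid_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

valid_token "abcdef.0123456789abcdef" && echo "token format ok"
valid_token "not-a-token" || echo "token format invalid"
```

A format check will not tell you whether the token has expired, only whether what you pasted is plausibly a token; expiry still requires regenerating on the control plane.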

6. Install a Pod Network (CNI)

Kubernetes requires a CNI plugin to enable pod-to-pod networking. Choose one compatible with your pod CIDR. Example with Flannel:

  • “kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml”
  • For Calico, which provides network policy, use Calico manifests and set IP pool accordingly.
  • Verify CNI pods are running: “kubectl get pods -n kube-system”.
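The cluster is only usable once the kube-system pods settle. The sketch below counts pods that are not yet Running from `kubectl get pods -n kube-system --no-headers` style output; it is demonstrated here on canned sample lines, since anything else assumes a live cluster:

```shell
# Count pods whose STATUS column (field 3) is neither Running nor
# Completed; feed it `kubectl get pods -n kube-system --no-headers`.
not_running() {
  awk '$3 != "Running" && $3 != "Completed" { n++ } END { print n+0 }'
}

# Demo on canned output: one pod still initializing.
printf '%s\n' \
  'coredns-5d78c9869d-abcde 1/1 Running 0 5m' \
  'kube-flannel-ds-xyz12 0/1 Init:0/2 0 40s' | not_running
```

A loop around this count (sleep, re-check, time out) is a simple readiness gate for provisioning scripts.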

7. Join worker nodes

On each worker VPS, run the kubeadm join command that was output during kubeadm init. Example format:

  • “sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>”
  • After joining, verify from the control plane: “kubectl get nodes” — new worker nodes should show Ready status once the kubelet and CNI are functional.

8. Post-install hardening and tuning

Consider the following operational best practices:

  • Firewall rules: Use provider firewall or host firewall to restrict access to control plane ports (6443), etcd ports (if external), and only allow SSH from trusted IPs.
  • TLS and authentication: Rotate certificate and token lifetimes as necessary; integrate with an identity provider for RBAC if required.
  • Resource limits and node taints: kubeadm taints control-plane nodes by default so ordinary workloads are not scheduled there; to re-apply the taint manually, run “kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane=:NoSchedule”.
  • Monitoring and logging: Deploy metrics-server, Prometheus, and a centralized logging solution (ELK/EFK) to monitor node health and application logs.
  • Backups: etcd snapshots should be automated for clusters with external etcd or stacked etcd on control-plane nodes.
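For the backup bullet, retention matters as much as taking the snapshot. A sketch assuming timestamped snapshot filenames that sort lexically; prune_snapshots is a hypothetical helper, while the etcdctl command in the comment is the standard snapshot mechanism:

```shell
# Snapshots are typically taken on a control-plane node with:
#   ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db
# Hypothetical retention helper: keep only the newest N snapshot files.
prune_snapshots() {
  dir="$1"; keep="$2"
  ls -1 "$dir" | sort -r | tail -n +"$((keep + 1))" | while read -r f; do
    rm -- "$dir/$f"
  done
}

# Demo on a scratch directory with three dated snapshots, keeping two.
demo=$(mktemp -d)
for day in 2024-05-01 2024-05-02 2024-05-03; do
  : > "$demo/etcd-$day.db"
done
prune_snapshots "$demo" 2
ls -1 "$demo"
```

Wiring the snapshot command plus the prune into a cron job or systemd timer keeps disk usage bounded; copy snapshots off the node as well, since a backup on the failed disk is no backup.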

Choosing the Right VPS Configuration

Select VPS plans based on workload and cluster role:

  • Control-plane nodes: Prioritize CPU and memory (2-4 vCPUs, 8GB+ RAM recommended for production control planes) and stable disk I/O for etcd.
  • Worker nodes: Size according to the aggregate resource requests of your pods; GPUs or enhanced networking may be necessary for ML or high-throughput workloads.
  • Network: If possible, choose VPS instances that support private networking between nodes in the same region to reduce latency and avoid public exposure.
  • Scalability: Verify how quickly you can provision additional VPS instances via API or control panel — essential for autoscaling and rolling updates.

Operational Tips and Troubleshooting

Some common issues and quick diagnostics:

  • Node stuck in NotReady: check kubelet logs (“journalctl -u kubelet”), container runtime status, and CNI pod logs in kube-system namespace.
  • Pods crashlooping: inspect pod describe and logs (“kubectl describe pod” and “kubectl logs”). Verify image pull credentials and node resource availability.
  • Networking issues between pods: ensure CNI is installed correctly and that required sysctls are set on the host. Verify iptables rules and that container runtime network stack is healthy.
  • High etcd latency: monitor disk IOPS and latency; consider provisioning faster disks or moving etcd to dedicated nodes.
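Several of these diagnostics start with the same move: filtering kubectl output down to the problem pods before describing them one by one. A hypothetical helper, demonstrated on canned `kubectl get pods -A --no-headers` style lines:

```shell
# Print crash-looping pods as namespace/name; with -A the STATUS column
# is field 4 (NAMESPACE NAME READY STATUS RESTARTS AGE).
crashlooping() {
  awk '$4 == "CrashLoopBackOff" { print $1 "/" $2 }'
}

# Demo on canned output: one crash-looping pod, one healthy.
printf '%s\n' \
  'default web-7f9c6 0/1 CrashLoopBackOff 7 10m' \
  'kube-system coredns-abc 1/1 Running 0 2h' | crashlooping
```

Each namespace/name the filter emits can then go straight into “kubectl describe pod” and “kubectl logs” for the actual diagnosis.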

Summary

Installing Kubernetes nodes on VPS instances is a cost-effective and flexible approach that provides full control over cluster behavior and resource choices. The process involves preparing the operating system, installing a container runtime like containerd, deploying kubeadm components, initializing a control plane, adding worker nodes with “kubeadm join,” and deploying a CNI plugin for pod networking. While self-hosting requires more operational effort than managed solutions, it offers advantages in customizability and potential cost savings for many use cases.

For teams looking to deploy Kubernetes on reliable VPS infrastructure, consider providers that offer SSD-backed storage, private networking, and flexible plans that match control-plane and worker node sizing requirements. If you’d like to experiment quickly, check out USA VPS plans to provision instances suitable for small Kubernetes clusters — they provide a good balance of performance and price for development and light production use.
