K3s is a lightweight Kubernetes distribution ideal for development use. It’s now part of the Cloud Native Computing Foundation (CNCF) but was originally developed by Rancher.
K3s ships as a single binary with a file size under 50 MB. Despite its small size, K3s includes everything you need to run a production-ready Kubernetes cluster. The project targets resource-constrained hardware where reliability and ease of maintenance are key concerns. While K3s is now commonly found at the edge on IoT devices, these qualities also make it a good contender for local use by developers.
Getting Started With K3s
Running the K3s binary will start a Kubernetes cluster on the host machine. The main K3s process starts and manages all the Kubernetes components, including the control plane’s API server, a Kubelet worker instance, and the containerd container runtime.
In practice you’ll usually want K3s to start automatically as a service. It’s recommended that you use the official installation script to quickly get K3s running on your system. This downloads the binary, moves it into your path, and registers a systemd or OpenRC service as appropriate for your system. K3s will be configured to automatically restart after its process crashes or your host reboots.
$ curl -sfL https://get.k3s.io | sh -
Confirm the installation succeeded by checking the status of the k3s service:
$ sudo service k3s status
You’re ready to start using your cluster if active (running) is displayed in green.
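If your system uses systemd, you can query the service unit directly instead; the installation script registers it as k3s:
$ sudo systemctl status k3s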
Interacting With Your Cluster
K3s bundles Kubectl if you install it using the provided script. It’s nested under the k3s command:
$ k3s kubectl get pods
No resources found in default namespace.
You might receive an error that looks like this:
$ k3s kubectl get pods
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied
You can fix this by adjusting the file permissions of the referenced path:
$ sudo chmod 644 /etc/rancher/k3s/k3s.yaml
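Alternatively, you can set the permissions at install time. Arguments placed after sh -s - are passed through to the k3s server command, so the --write-kubeconfig-mode flag mentioned in the warning can be supplied when you run the installation script:
$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644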
Now you should be able to run Kubectl commands without using sudo.
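For example, listing your cluster’s nodes should now succeed as a regular user:
$ k3s kubectl get nodes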
You can keep using a standalone Kubectl installation if you don’t want to rely on K3s’ integrated version. Use the KUBECONFIG environment variable or --kubeconfig flag to reference your K3s configuration file when running the bare kubectl command:
$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
$ kubectl get pods
No resources found in default namespace.
An Example Workload
You can test your cluster by adding a simple deployment:
$ k3s kubectl create deployment nginx --image=nginx:latest
deployment.apps/nginx created
$ k3s kubectl expose deployment nginx --type=LoadBalancer --port=80
service/nginx exposed
Use Kubectl to discover the IP address of the service that’s been created:
$ k3s kubectl get services
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.43.0.1     <none>        443/TCP        35m
nginx        LoadBalancer   10.43.49.20   <pending>     80:30968/TCP   17s
In this example, the NGINX service is accessible at 10.43.49.20. Visit this address in your web browser to see the default NGINX landing page.
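You can also test the service from a terminal on the host, assuming curl is available:
$ curl http://10.43.49.20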
Setting Kubernetes Options
You can set custom arguments for individual Kubernetes components when you run K3s. Values should be supplied as command-line flags to the K3s binary. Environment variables are also supported, but the mapping from flag to variable name is not always consistent.
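For instance, the --node-name flag has a documented K3S_NODE_NAME environment variable equivalent, so these two invocations set the same option:
$ k3s server --node-name first-worker
$ K3S_NODE_NAME=first-worker k3s server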
Here are some commonly used flags for configuring your installation:
--write-kubeconfig-mode: The file permissions to apply to the generated kubeconfig file, such as 644.
--node-name: A custom name to assign to the node.
--bind-address: The IP address the K3s server binds to; defaults to 0.0.0.0.
--data-dir: The directory where K3s stores its cluster state.
--docker: Use Docker instead of the bundled containerd as the container runtime.
--token: The shared secret other nodes use to join the cluster.
Many other options are available to customize the operation of K3s and your Kubernetes cluster. These include facilities for disabling bundled components such as the Traefik Ingress controller (--disable traefik) so you can replace them with alternative implementations.
Besides flags and variables, K3s also supports a YAML config file that’s much more maintainable. Deposit this at /etc/rancher/k3s/config.yaml to have K3s automatically use it each time it starts. The field names should be the CLI arguments stripped of their -- prefix.
node-name: first-worker
bind-address: 1.2.3.4
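The config file is only read when K3s starts, so restart the service after editing it:
$ sudo service k3s restart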
Multi-Node Clusters
K3s has full support for multi-node clusters. You can add nodes to your cluster by setting the K3S_URL and K3S_TOKEN environment variables before you run the installation script.
$ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.1:6443 K3S_TOKEN=token sh -
This script will install K3s and configure it as a worker node that connects to the IP address 192.168.0.1. To find your token, copy the value of the /var/lib/rancher/k3s/server/node-token file from the machine which is running your K3s server.
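The token file is typically only readable by root, so use sudo to print it on the server machine:
$ sudo cat /var/lib/rancher/k3s/server/node-token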
Using Images In Private Registries
K3s has good integrated support for images in private registries. You can provide a special config file to inject registry credentials into your cluster. These credentials are read when the K3s server starts and automatically shared with your worker nodes.
Create an /etc/rancher/k3s/registries.yaml file with the following content:
mirrors:
  example-registry.com:
    endpoint:
      - "https://example-registry.com:5000"
This will let your cluster pull images such as example-registry.com/example-image:latest from the server at example-registry.com:5000. You can specify multiple URLs under the endpoint field; they’ll be used as fallbacks in the written order until a successful pull occurs.
Supply user credentials for your registries using the following syntax:
mirrors:
  example-registry.com:
    endpoint:
      - "https://example-registry.com:5000"
configs:
  "example-registry.com:5000":
    auth:
      username: <username>
      password: <password>
Credentials are defined on a per-endpoint basis. Registries defined with multiple endpoints need individual entries in the configs field for each one.
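As a sketch, a registry mirrored across two endpoints (the second endpoint here, mirror.example-registry.com, is a hypothetical fallback) would be configured like this, with one configs entry per endpoint:
mirrors:
  example-registry.com:
    endpoint:
      - "https://example-registry.com:5000"
      - "https://mirror.example-registry.com:5000"   # hypothetical fallback endpoint
configs:
  "example-registry.com:5000":
    auth:
      username: <username>
      password: <password>
  "mirror.example-registry.com:5000":
    auth:
      username: <username>
      password: <password>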
Endpoints that use SSL need to be assigned a TLS configuration too:
configs: "example-registry.com:5000": auth: username: <username> password: <password tls: cert_file: /tls/cert key_file: /tls/key ca_file: /tls/ca
Set the cert_file, key_file, and ca_file fields to reference the correct certificate files for your registry.
Upgrading Your Cluster
You can upgrade to new K3s releases by running the latest version of the installation script. This will automatically detect your existing cluster and migrate it to the new version.
$ curl -sfL https://get.k3s.io | sh -
If you customized your cluster by setting installer environment variables, repeat them when you run the upgrade command:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_BIN_DIR=/usr/bin sh -
Multi-node clusters are upgraded using the same procedure. Upgrade each worker node individually, once the server is running the new release.
You can install a specific Kubernetes version by setting the INSTALL_K3S_VERSION variable before you run the script:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.23.0 sh -
The INSTALL_K3S_CHANNEL variable can select unstable versions and pre-release builds:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -
K3s will default to running the newest stable Kubernetes release when these variables aren’t set.
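The default behavior is equivalent to requesting the stable channel explicitly:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -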
Uninstalling K3s
As K3s is packaged as a self-contained binary, it’s easy to clean up if you want to stop using it. The install process provides an uninstall script that will remove system services, delete the binary, and clear all the data created by your cluster.
$ /usr/local/bin/k3s-uninstall.sh
You should use the script at /usr/local/bin/k3s-agent-uninstall.sh instead when you’re decommissioning a K3s worker node.
Conclusion
K3s is a single-binary Kubernetes distribution that’s light on system resources and easy to maintain. This doesn’t come at the expense of capabilities: K3s is billed as production-ready and has full support for Kubernetes API objects, persistent storage, and load-balanced networking.
K3s is a good alternative to other developer-oriented Kubernetes flavors such as Minikube and MicroK8s. You don’t need to run virtual machines, install other software, or perform any advanced configuration to set up your cluster. It’s particularly well-suited when you’re already running K3s in production, letting you iron out disparities between your environments.