Rancher Kubernetes: A Quick Installation Guide for RKE2

TL;DR: RKE2, also known as RKE Government, is Rancher's next-generation Kubernetes distribution. In this article we'll learn how to set up an RKE2 cluster and how to visualize and manage workloads using the Kubernetes dashboard by Devtron.


By Bhushan Nemade

Rancher is an open-source platform designed to address the complexities of managing multiple Kubernetes clusters. Along with a platform for managing Kubernetes, Rancher also offers Rancher Kubernetes Engine (RKE2), a security-focused Kubernetes distribution. Kubernetes distributions are wrappers around the core functionality of upstream Kubernetes, aka vanilla Kubernetes, that aim to provide additional functionality. The first Kubernetes distribution from Rancher Labs was Rancher Kubernetes Engine (RKE). Later, a security-focused and lightweight successor to RKE was rolled out, which is currently known as RKE2. In this blog, we will go through a step-by-step guide for RKE2 installation.

Rancher vs. Kubernetes

Kubernetes

Kubernetes is an open-source container orchestration platform that helps deploy, scale, and manage containerized applications and services. Kubernetes comes with a set of features like load balancing, storage management, configuration handling, and self-healing. These features help organizations achieve large-scale deployments and high availability for applications and services across multiple regions.

Rancher

Rancher Kubernetes Engine (RKE/RKE2) is a container orchestration platform built on top of Kubernetes. RKE inherits the core functionality of Kubernetes and provides add-ons that enhance security, speed up RKE installation, and simplify Kubernetes version upgrades.

Rancher also provides a dashboard to manage multiple Kubernetes clusters, helping teams effectively manage their Kubernetes infrastructure and complex tasks like access management through its UI.

What is Rancher 

Rancher provides a web-based Kubernetes dashboard that simplifies the management of multiple Kubernetes clusters spanning multiple regions. Rancher offers an intuitive interface from which teams can manage their Kubernetes infrastructure and cloud-native applications. The Rancher dashboard eliminates the complex and tedious process of managing Kubernetes through a command-line interface.

Rancher Kubernetes Engine (RKE2) is a Kubernetes distribution developed by Rancher Labs. RKE2 is a lightweight yet powerful Kubernetes distribution known for its security and for simplifying operations such as setting up a production-grade Kubernetes cluster and performing upgrades. RKE2 eliminates the tedious and complex process of provisioning self-managed Kubernetes clusters; with a couple of commands we get a production-grade Kubernetes cluster that ships with essential components like kubectl.

Why RKE2

The very first point in “Why RKE2” is the security aspect: RKE2 is well known for the security it provides for its services. To ensure the security of the Kubernetes cluster, RKE2 uses hardened images of its components, where each image is scanned for vulnerabilities and built on top of a minimal base image. There is a lot more RKE2 does for security; we will discuss it later in this section.

Some of the other reasons why I would prefer an RKE2 cluster over a vanilla Kubernetes cluster on my production servers are:

Simplified Installation

Provisioning RKE2 is much simpler than provisioning vanilla Kubernetes; I was able to provision my RKE2 cluster with just a single binary. Provisioning a vanilla Kubernetes cluster using kubeadm takes considerably more effort.

Ease of Upgrades

Every DevOps engineer and developer knows the pain of Kubernetes version upgrades. RKE2 provides two ways of upgrading RKE2 clusters.

Manual Upgrades

To upgrade an RKE2 cluster manually, we have three options: re-run the RKE2 installation script pinned to the desired version, manually install the binary of the desired version, or use rpm upgrades in the case of an rpm-based installation. Refer to the documentation for manual cluster upgrades; a minimal sketch of the script-based approach follows below.
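As a rough sketch of the script-based approach (the version string below is only a placeholder; pick the release you actually want to upgrade to), a manual upgrade of a server node could look like this:

    # Re-run the installer pinned to the target release (placeholder version)
    curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.29.4+rke2r1" sh -

    # Restart the service so the node starts the new version
    systemctl restart rke2-server.service

    # On agent nodes, restart rke2-agent.service instead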

Automated Upgrades

RKE2 also supports automated cluster upgrades, which are handled by Rancher's system-upgrade-controller. Refer to the documentation for automated cluster upgrades; a sketch of an upgrade Plan is shown below.
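For illustration only, assuming the system-upgrade-controller is already deployed in the cluster, a Plan for upgrading server nodes could look roughly like this (the label selector, namespace, and version follow the example in the RKE2 documentation and should be adapted to your setup):

    apiVersion: upgrade.cattle.io/v1
    kind: Plan
    metadata:
      name: server-plan
      namespace: system-upgrade
    spec:
      concurrency: 1
      cordon: true
      nodeSelector:
        matchExpressions:
          - key: node-role.kubernetes.io/control-plane
            operator: In
            values:
              - "true"
      serviceAccountName: system-upgrade
      upgrade:
        image: rancher/rke2-upgrade
      version: v1.29.4+rke2r1   # placeholder target version

A similar Plan with an agent node selector handles the worker nodes.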

Production Ready

RKE2 comes with multiple preinstalled components that make it a production-ready cluster within a few minutes, and within those same few minutes you can connect multiple nodes. Essential components like kubectl, canal, coredns, ingress-nginx, and metrics-server come preinstalled with the RKE2 cluster. Once the RKE2 cluster is up, we can use these components for production operations.

Security

As mentioned above, RKE2 is well known for its security; we have already discussed how RKE2 component images are hardened and repeatedly scanned for vulnerabilities. For production environments, RKE2 can be configured to pass the CIS security benchmarks. To fully secure an RKE2 production cluster, some manual intervention is still needed, such as hardening the host operating system and defining network policies.

Moreover, to strengthen security, RKE2 can be installed on SELinux-enabled systems (SELinux is a security module for the Linux kernel). To keep our secrets safe, we get the option to encrypt secrets at rest in RKE2. For user access management to the RKE2 cluster, it provides easy-to-configure token management and certificate management. A small sketch of the relevant configuration follows below.
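As a minimal sketch (assuming a recent RKE2 release; older releases use versioned profile names such as cis-1.23, and the CIS profile enforces host prerequisites described in the hardening guide), the CIS profile is switched on in the server's config file, and the built-in secrets-encrypt subcommand reports the state of secrets encryption:

    # /etc/rancher/rke2/config.yaml on the server node
    profile: "cis"

    # Once the server is running, inspect secrets encryption (run as root)
    rke2 secrets-encrypt status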

Above we have covered the major points that make RKE2 stand out from vanilla Kubernetes and make it a production-ready Kubernetes distribution. The offerings that motivate me to use RKE2 for my production Kubernetes clusters are, first and foremost, the security aspect, followed by quick setup and upgrades and the capability to be production-ready within minutes.

Quick Guide for RKE2 Installation

That’s enough theory; let’s fire up our lab and get hands-on. In this section, I will walk you through the whole process of RKE2 installation and getting it ready to run applications.

Step 1: Setting up the environment

To start with an RKE2 installation, we require VMs running a Linux operating system (Windows is supported for agent nodes only). For this tutorial, we will be using two instances with Ubuntu 24.04: one will act as the server (control plane) and the other as an agent (worker node).

[Image: Setting up Environment]

Step 2: RKE2 Server Node Installation

Before executing any command to provision the RKE2 server or agent node, you must have root access to the machine.

  • A server node is the control plane of our RKE2 cluster; it runs components like the api-server, metrics-server, ingress-nginx, and coredns.
  • To fire up the server node we need to execute the following commands:
    • Run the installation script for the RKE2 server.
      curl -sfL https://get.rke2.io | sh -
    • Enable the RKE2 server service.
      systemctl enable rke2-server.service
    • Start the RKE2 server service.
      systemctl start rke2-server.service
    • If you are a geek, this one is for you: observe the logs.
      journalctl -u rke2-server -f
[Image: RKE2 Server Installation Logs]
  • With that, our RKE2 server is ready. To get our compass for navigation, i.e. kubectl, navigate to /var/lib/rancher/rke2/bin/ on the server node. A pre-installed kubectl lives there, but it is not on the PATH yet, and kubectl also needs to be told where the kubeconfig is by exporting KUBECONFIG=/etc/rancher/rke2/rke2.yaml. Once both are set (the exact commands are shown after this list), kubectl get pods will show all pods and their state.
[Image: RKE2 Pods]
  • The kubeconfig file for RKE2 can be found at /etc/rancher/rke2/rke2.yaml.
  • We will need the node token to connect the agent (worker) node with the RKE2 server node.
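A minimal sketch of wiring up kubectl on the server node, using the paths mentioned above (run as root):

    # Make the bundled kubectl available and point it at the RKE2 kubeconfig
    export PATH=$PATH:/var/lib/rancher/rke2/bin
    export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

    # Verify that the control plane is up
    kubectl get nodes
    kubectl get pods --all-namespaces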

Step 3: Setting up Agent Node for RKE2 Cluster

  • For setting up an agent node, we will be using a VM with a configuration similar to the server node's.
  • To prepare the agent node, we need to run the following set of commands:
    • Run the installation script for the RKE2 agent.
      curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
    • Enable the RKE2 agent service.
      systemctl enable rke2-agent.service
    • Now, to connect an agent node with the server, a config.yaml file is required on the agent node. This file includes the server address and the secret token used to set up the connection.
      mkdir -p /etc/rancher/rke2/
      vim /etc/rancher/rke2/config.yaml
    • The content that we need to put in config.yaml is:
       server: https://<server>:9345
       token: <token from server node>
    • At <server> provide the IP address of your RKE2 server, and for <token from server node> provide the node token, which can be found at /var/lib/rancher/rke2/server/node-token on the server node (see the commands after this step).
    • Save the config.yaml file and start the agent service.
      systemctl start rke2-agent.service
[Image: RKE2 Agent Logs]
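For reference, the token lives at the path mentioned above, and the agent can be watched the same way as the server (both commands run as root; the first on the server node, the second on the agent node):

    # On the server node: print the node token to paste into the agent's config.yaml
    cat /var/lib/rancher/rke2/server/node-token

    # On the agent node: follow the agent service logs
    journalctl -u rke2-agent -f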

Step 4: Test the RKE2 Cluster

  • Let’s get back to our server node and take a look at our cluster. Execute kubectl get nodes to check whether our agent node is ready.
[Image: RKE2 Node Added]
  • You can also perform a quick health check of all pods: just execute kubectl get pods --all-namespaces and all pods will be listed with their current state.
[Image: RKE2 Cluster Health-check]

That’s it; with just a few commands we were able to get the RKE2 cluster up and running. To make this a production-grade, high-availability (HA) cluster, we need some more configuration: additional server nodes have to be added, each with a config.yaml pointing at the existing server. To set up HA you can refer to the documentation; a rough sketch is shown below.
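For illustration, on each additional server node the config.yaml could look roughly like this (the registration address, token, and SAN entry are placeholders; see the HA documentation for the full procedure):

    # /etc/rancher/rke2/config.yaml on the second and third server nodes
    server: https://<first-server-or-fixed-registration-address>:9345
    token: <token from the first server node>
    tls-san:
      - <fixed-registration-address>   # optional extra SAN for the cluster certificates

After writing the file, enable and start rke2-server.service on each additional server node, just as in Step 2.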

RKE2, being lightweight and easy to set up, is well suited to bare-metal machines. Unlike managed Kubernetes distributions such as EKS, AKS, and GKE, RKE2 does not come with a cloud load balancer, which is one of the essential requirements when we talk about HA. To fill this gap we have MetalLB, an open-source, easy-to-set-up load balancer designed specifically for bare-metal clusters. To set up MetalLB with an RKE2 cluster, refer to its documentation or take a look at its GitHub repository; a minimal sketch follows below.
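As a hedged sketch of a basic Layer 2 MetalLB setup (the manifest URL pins a specific MetalLB release and the address range is a placeholder for free IPs on your network; check the MetalLB documentation for the current version and options):

    # Install MetalLB (replace the version with the current release)
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

Then define an address pool and advertise it on the local network by applying a manifest like the following:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250   # placeholder range
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default-pool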

Pros and Cons of RKE2

Pros

  • Enhanced Security
    RKE2 is a security-focused Kubernetes distribution; it comes with features like hardened component images and compliance with the CIS security benchmarks, which makes it a strong choice for organizations with high security requirements.
  • Ease of Upgrades
    RKE2 offers automated upgrade processes, which simplify cluster maintenance and updates.
  • Lightweight
    RKE2's lightweight design makes it efficient and potentially more cost-effective to run on bare-metal and edge servers. It requires fewer resources compared to some other Kubernetes distributions, making it suitable for various deployment scenarios.

Cons

  • Potential Vendor Lock-in
    Adopting RKE2 might make it challenging to switch to other Kubernetes distributions in the future.
  • Limited Ecosystem
    RKE2 has fewer third-party tools and integrations available compared to cloud-based managed Kubernetes distributions.
  • Learning Curve for Teams
    Teams unfamiliar with Rancher and RKE2 might need time to adapt to its specific features and workflows, which could lead to initial productivity slowdowns.

Some other renowned managed Kubernetes distributions are EKS, AKS, and GKE, and among self-managed Kubernetes distributions there are MicroK8s, k3d, and K3s. Each offers unique benefits such as simplified installation, enhanced security features, integration with specific cloud ecosystems, or optimizations for particular hardware or environments. In upcoming blogs, we will cover each of them, analyzing their features, pros, and cons compared to vanilla Kubernetes.

Complexities of Kubernetes

The real game starts now. Once RKE2 is running at the production level, there will be multiple such clusters with multiple nodes under each cluster, and each node will run multiple applications and services, all spread across regions around the world. While RKE2 eliminates some of the complexities at the cluster level, i.e. provisioning and version upgrades, the real challenge is managing a fleet of these clusters in production.

The complexities of Kubernetes are well known, and they keep increasing with the scale of the system. A single misconfiguration can have a huge impact, such as downtime of some services or disruption of all of them. Managing HA Kubernetes clusters using only command-line tools like kubectl is a tedious and complicated process; these tools surely come with great power but act as double-edged swords for organizations. If we list the pains of managing Kubernetes with command-line tools alone, it looks like this:

  • Limited Visibility Across Clusters
  • Complex User Access Management
  • Configuration Management Difficulties
  • Limited Collaboration 
  • Logging and Troubleshooting Complexities
  • Scalability Challenges

RKE2 with Kubernetes Dashboard by Devtron

The complexities discussed above can be eliminated by using a Kubernetes dashboard designed to give visibility across multiple clusters along with operational efficiency. Using a dashboard to manage multiple Kubernetes clusters allows teams to collaborate and navigate quickly through the web of Kubernetes.

There are several Kubernetes dashboards available for managing Kubernetes at scale, among them Lens, Rancher, Headlamp, and Devtron. Each comes with its own set of features and capabilities aimed at simplifying the management of multiple Kubernetes clusters and the workloads deployed onto them.

However, among them, Devtron stands out by offering some powerful functionalities. Devtron provides granular visibility across multiple clusters with fine-grained access control for users. Devtron also lets developers generate access tokens and time-bound permissions. Moreover, the Devtron Kubernetes dashboard includes application management features that allow you to deploy and manage the lifecycle of your Helm releases. For ease of troubleshooting, Devtron streams the logs of Kubernetes objects and provides an integrated terminal, enabling users to execute commands within pods.

Let’s take a look at some of the major capabilities of Devtron and see how we can manage our RKE2 cluster, and the workloads deployed onto it, using the Kubernetes dashboard by Devtron. A rough sketch of installing the dashboard is shown below.
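As a rough sketch based on Devtron's Helm installation (the chart name, namespace, and service name follow Devtron's documentation; double-check the docs for current values and installation options), getting the dashboard onto the RKE2 cluster could look like this:

    # Add the Devtron Helm repository and install the dashboard
    helm repo add devtron https://helm.devtron.ai
    helm repo update
    helm install devtron devtron/devtron-operator \
      --create-namespace --namespace devtroncd

    # Find the service endpoint once the installation finishes
    kubectl get svc -n devtroncd devtron-service
    # The initial admin password is stored in a secret in the devtroncd namespace (see the Devtron docs)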

Visibility Across Clusters

Devtron’s Resource Browser gives us granular visibility into our RKE2 cluster, where we can visualize each node, namespace, workload, and every other resource of the cluster.

The Kubernetes dashboard by Devtron enables us to act quickly by providing capabilities like dedicated terminal support for troubleshooting across nodes and pods, and the ability to Cordon, Drain, Edit taints, Edit node config, and Delete nodes. Similar capabilities are available for managing Kubernetes workloads, i.e. Pods, Deployments, Jobs, etc.

[Image: Resource Browser]

User Access Management

Devtron enables us to manage a robust RBAC configuration by allowing granular access control for Kubernetes resources. We can create permission groups with predefined access levels and easily assign them to new users. Devtron also supports SSO integration with several different providers, streamlining access management and eliminating the need for separate dashboard credentials.

[Image: SSO and RBAC of Devtron]

Application Live Status

The Devtron dashboard shows the live status of our applications and provides a logical separation of the Kubernetes resources of each deployed application, which makes applications easier to manage. In case an application needs troubleshooting, Devtron provides support for launching a terminal and checking logs, events, and manifests.

[Image: Helm Application Details]

Configuration Management for Applications

The major challenge while managing Kubernetes using CLI tools is the lack of visibility into, and the ability to compare, configuration differences between versions. Devtron allows us to compare the configuration of previous deployments with newer ones. We also get an audit trail of deployments through the deployment history.

[Image: Configuration Diff]

That’s it. We finally have a secure, production-grade Kubernetes cluster using RKE2 and a robust Kubernetes dashboard by Devtron to manage our cluster and workloads. Now we can quickly deploy a fleet of RKE2 clusters on-premises and manage them using Devtron’s Kubernetes dashboard. RKE2 provides easy installation, version upgrades, and a secure Kubernetes environment, while Devtron’s Kubernetes dashboard tackles Kubernetes complexities by offering visibility across multiple clusters, fine-grained access control, application management, configuration diffs, and, last but not least, troubleshooting capabilities.
