Mistakes to Avoid when Configuring a Kubernetes Cluster


By Anushka Arora

Kubernetes is the open-source container-orchestration platform. As organizations accelerate their use of containers and Kubernetes and move application development and deployment to cloud platforms, preventing avoidable misconfigurations in their environments becomes increasingly crucial. We have hand-picked the seven most common mistakes you can make when configuring a Kubernetes cluster, based on our own experience on our Kubernetes journey. So without further ado, here are the hard-earned lessons about mistakes to avoid when you create a Kubernetes cluster.

1.) Not Leveraging AWS EC2 Spot Instances / GCP Preemptible VMs wherever possible

EC2 Spot Instances: With EC2 Spot Instances, you pay the Spot price that is in effect for the period your instances are running. Spot prices are set by Amazon EC2 and adjust gradually based on long-term trends in supply and demand for Spot capacity. Spot Instances are available at up to a 90% discount compared to On-Demand prices.

GCP Preemptible VMs: Preemptible instances are highly affordable, short-lived compute instances suitable for time-flexible workloads. They offer the same machine types and options as regular compute instances, last for up to 24 hours, and are available in all projects whose location is set to a Google Cloud region. Pricing is fixed, so you always get low cost and financial predictability without gambling on variable market pricing. Preemptible instances are up to 80% cheaper than regular instances.

Thus, whenever possible, use EC2 Spot Instances or GCP Preemptible VMs to reduce the cost of your Kubernetes cluster.
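As a rough illustration, here is a minimal eksctl node-group sketch that requests Spot capacity. The cluster name, region, instance types, and sizes are placeholder assumptions, not recommendations.

```yaml
# Hypothetical eksctl config: a managed node group backed by EC2 Spot Instances.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder cluster name
  region: us-east-1         # placeholder region
managedNodeGroups:
  - name: spot-workers
    spot: true                                            # request Spot capacity instead of On-Demand
    instanceTypes: ["m5.large", "m5a.large", "m4.large"]  # multiple types improve Spot availability
    minSize: 2
    maxSize: 6
    desiredCapacity: 3
```

On GKE, the equivalent lever is creating a node pool with preemptible VMs and letting interruption-tolerant workloads run there.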

2.) Not selecting the right Instance type/size

The other important factor is to make sure you select the right instance type and size while you are configuring a Kubernetes Cluster.

Why does the type of cloud instance matter? Since they all fall under the AWS umbrella, aren’t they effectively the same? Not quite. Some offer substantially more memory, CPU-optimized performance, or GPU acceleration, while others provide a more generalized balance of resources. So it is really important to choose the right instance type based on your specific needs.

Each individual node needs to be powerful enough to run cluster-supporting workloads as well as a reasonable amount of your own. However, running only a few big nodes means that whenever a node fails, a much larger share of your available capacity disappears at once, and the knock-on effect on the rest of the cluster is much bigger. A good rule of thumb is to size nodes so that losing one does not take a significant portion of your capacity offline.
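If you mix instance families in one cluster, you can steer workloads to the right hardware with node labels and a nodeSelector. The workload name, image, and instance type below are hypothetical; most managed Kubernetes services attach the standard node.kubernetes.io/instance-type label automatically.

```yaml
# Hypothetical Deployment snippet: pin a memory-hungry workload to memory-optimized nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-cache            # placeholder workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: analytics-cache
  template:
    metadata:
      labels:
        app: analytics-cache
    spec:
      nodeSelector:
        node.kubernetes.io/instance-type: r5.xlarge   # schedule only onto this instance type
      containers:
        - name: cache
          image: redis:7           # placeholder image
```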

3.) Not Securely configuring the Kubernetes API server

The Kubernetes API server handles all the REST API calls between external users and Kubernetes components. It services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.

Make sure these settings are configured securely when setting up the API server of your cluster, because if you don’t properly control access to the Kubernetes API, you are leaving yourself wide open to attack. One of the most common and risky mistakes is not requiring authentication for access to the API server, since it is the main administrative entry point to your cluster. Configuring your cluster with an authentication token that provides access to the Kubernetes API by default is a high risk: if that token has cluster-admin rights, an attacker who compromises a single container can escalate privileges and take over the entire cluster.
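One concrete mitigation is to bind workloads and users to narrowly scoped RBAC roles instead of cluster-admin. The namespace, role name, and service account below are hypothetical placeholders; this is a sketch of the pattern, not a complete policy.

```yaml
# Hypothetical least-privilege RBAC: read-only access to Pods in a single namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a                # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: ci-runner                # placeholder service account
    namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```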

4.) Not taking a holistic approach to container security

Many people assume that containers are inherently more secure because they are ephemeral. However, the ease of spinning up new containers automatically can backfire if your automated configuration includes security vulnerabilities.

It is advisable to follow a ten-layer approach to container security. This covers both the container stack layers (such as the container host and registries) and container lifecycle issues (such as API management).

Focusing too narrowly on a single area, such as Kubernetes and orchestration, is likely to increase risk elsewhere. Even if you have secured your cluster following best practices, that does not mean the applications you run on it are also secure. They may still be vulnerable because of flaws in the code or badly configured privileges, such as container images set to run as root.
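As an illustration of one of those layers, the sketch below hardens a single container with a restrictive security context; the Pod name and image are hypothetical.

```yaml
# Hypothetical Pod spec: refuse to run as root and drop extra privileges.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app               # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                   # reject images that try to start as UID 0
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```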

5.) Improperly configuring (or ignoring) native Kubernetes features

It’s easy to misconfigure settings, for example by defining role-based access control (RBAC) rules that allow too much or even too little access; this creates potential security holes, or deployment issues when applications try to communicate. The intent should be to limit what users, including administrators, can do on the cluster.

In general, networking in Kubernetes comes with a significant learning curve, which in turn makes it fertile terrain for security mistakes; experts advise taking a zero-trust approach. A potentially risky default configuration is deploying workloads into the default namespace: those workloads are not isolated from each other, even though namespaces combined with network policies allow exactly that isolation, so a single compromised workload can have a much larger blast radius.
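A common zero-trust starting point is a default-deny NetworkPolicy in each application namespace, with explicit allow rules layered on top. The namespace name below is a placeholder, and a network plugin that enforces NetworkPolicy (such as Calico or Cilium) is required for this to take effect.

```yaml
# Hypothetical default-deny policy: block all ingress and egress for Pods in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a                # placeholder namespace
spec:
  podSelector: {}                  # selects every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```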

6.) Not Selecting wisely between KOPS and Amazon EKS

It is really crucial to choose wisely between KOPS and Amazon EKS when configuring a Kubernetes cluster.

AWS is the most widely used cloud provider, and EKS is its managed Kubernetes offering. EKS doesn’t give you control over the entire environment: AWS manages the control plane, and you manage only the worker nodes. Setting up the cluster might seem like a difficult task in the beginning, but once that tedious job is done, maintaining the cluster becomes really easy.

If you decide to use KOPS to set up the Kubernetes cluster, getting started might seem really easy, but maintaining the cluster is the difficult part. Both have their own advantages and disadvantages, so it is really important to know the needs of your organization and choose between Amazon EKS and KOPS accordingly. Check out this brilliant post that compares AWS EKS vs KOPS.

7.) Not using the Resources judiciously

You should always specify resource requests (CPU, memory) and limits. If you don’t, Kubernetes will pack your Pods tightly onto a handful of nodes: a single Pod can consume all the CPU or memory available on a node, the cluster won’t scale itself up as needed, and neighbouring Pods end up starved of CPU or hitting out-of-memory errors.

Make sure to use resource requests: they let the scheduler know how much CPU and memory you expect your application to consume. When assigning Pods to nodes, Kubernetes budgets them so that all of their requests are met by the node’s resources.

Moreover, setting resource requests is also important because the Kubernetes Horizontal Pod Autoscaler calculates utilization relative to them.

So, go ahead and define requests and limits for each of your containers. If you aren’t sure, take an educated guess and err on the higher side. And whether you are certain or not, monitor the actual resource usage of your Pods and containers using your cloud provider’s monitoring tools, and adjust the values over time.
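For reference, here is a minimal sketch of what requests and limits look like on a container; the workload name, image, and values are illustrative guesses, not sizing advice.

```yaml
# Hypothetical Deployment snippet: explicit CPU/memory requests and limits per container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend               # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25        # placeholder image
          resources:
            requests:
              cpu: "250m"          # what the scheduler budgets for this container
              memory: "256Mi"
            limits:
              cpu: "500m"          # hard ceilings enforced at runtime
              memory: "512Mi"
```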
