Mistakes to Avoid when Deploying on a Kubernetes Cluster


By Anushka Arora

Kubernetes is one of the highest-velocity projects in the history of open source. In this post, we explore five common mistakes to avoid when working with Kubernetes, along with useful techniques for making your deployments highly available.

1.) Moving Kubernetes into Production Too Quickly

There are significant differences between running Kubernetes in a dev/test environment and running it in production. Plan properly to minimize issues during the move; otherwise, you may be in for a hard time.

The biggest mistakes made while moving Kubernetes into production are rooted in a lethal combination of overconfidence, ignorance, and pressing deadlines. Don't rush Kubernetes into production; instead, be prepared with the right policies, processes, and test coverage. Otherwise, the move can cost your organization dearly.

2.) Assuming You're Secure by Default with Kubernetes

This is one of the most common misunderstandings people have when deploying Kubernetes into production. It is true that the Kubernetes community has shown a strong commitment to security, and the orchestrator itself offers many security-oriented features.

However, when you deploy your architecture in this fast-paced environment, you need to ensure that those security features are properly configured. For example, the default network-policy setting leaves deployments open to all traffic, so every resource can talk to every other resource. This open setup vastly increases the risk of attackers moving through your Kubernetes cluster.
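A common starting point is a deny-all policy per namespace, on top of which you explicitly allow only the traffic your services need. The sketch below is a minimal illustration rather than a drop-in configuration: the name default-deny-all is a placeholder, and it assumes your cluster's network plugin actually enforces NetworkPolicy objects.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Applied to a namespace, the empty podSelector matches every pod, and the policy blocks all ingress and egress until you add explicit allow rules.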

3.) Not Configuring Pod Disruption Budgets in Production

Another common mistake is not protecting your applications with a PodDisruptionBudget (PDB). First, decide which application you want to protect, then consider how it reacts to disruption; that is, how many instances can be down at the same time for a short period due to a voluntary disruption. After that, you can configure the pod disruption budget using YAML.

An example of how you can specify a pod disruption budget (PDB) using maxUnavailable:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: animal

The use of maxUnavailable is recommended as it automatically responds to changes in the number of replicas of the corresponding controller.
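For comparison, the same budget can also be expressed with minAvailable, which pins a minimum number (or percentage) of pods that must stay up. The sketch below is only illustrative, reusing the hypothetical name and labels from the example above; with an absolute minAvailable value, you have to remember to adjust it whenever you scale the controller, which is why maxUnavailable is usually preferred.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: animal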

4.) Not Doing Proper Monitoring

Without monitoring, you cannot see how your resources are being used, and overuse of resources can easily go unnoticed. Implementing an effective resource-monitoring system usually takes time because it gets deprioritized in favor of the everyday tasks DevOps teams face.

Monitoring systems are critical to making good use of Kubernetes. A lack of monitoring leads to resource exhaustion and, as application developers climb the learning curve, organizations end up with massive clusters of reserved but idle resources, wasting capacity and driving up cost. Your organization should therefore have a monitoring system that gives your teams a crystal-clear view of how they are using the Kubernetes cluster.

5.) Not Adding Default Memory and CPU Limits to Namespaces

Even before you start running your own applications, you should know how much of a node's resources Kubernetes itself consumes; on small VMs, kube-system alone can eat roughly 70% of a node, leaving too little room for your applications.

This happens because the various system components of Kubernetes have their own resource requirements, so you may not have enough headroom left when you deploy your applications to a production cluster. It is therefore advisable to use nodes with at least two CPUs. You can tune the settings later to work on smaller nodes, but avoid doing so at the beginning.

Suppose, for example, that someone writes an application that opens a connection to a database every second but never closes it, causing a memory leak in one of your applications. If it is deployed to your production Kubernetes cluster with no limit set, it can crash a node. Setting a limit is as simple as creating a YAML manifest for a LimitRange and applying it to the namespace:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
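The LimitRange above only sets memory defaults. Since this mistake also covers CPU, here is a hedged sketch of a companion LimitRange that adds default CPU limits and requests; the name and values are placeholders for illustration, not recommendations.

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 500m
    defaultRequest:
      cpu: 250m
    type: Container

A LimitRange only affects the namespace it is created in, so you would apply each manifest with kubectl apply -f <file> -n <namespace> for every namespace you want to protect.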
