Autoscaling based on ALB metrics

KEDA is used to fetch metrics from AWS CloudWatch, so the application can be scaled based on ALB metrics. This helps software run efficiently and smoothly.


By Kamal Acharya

Autoscaling is one of the key benefits of Kubernetes. It helps optimize resource utilization, thus reducing the cost of cloud infrastructure. When demand drops, the autoscaling mechanism automatically removes resources to avoid overspending; the number of nodes or pods increases or decreases as demand for the service rises and falls.

KEDA (Kubernetes-based Event Driven Autoscaler) can scale Kubernetes workloads based on the number of events that need to be processed. It is a lightweight, single-purpose component that can be added to any Kubernetes cluster.

In this blog, we will look into the following things:

1- What is an Application Load Balancer

2- What are the ALB Metrics

3- How to autoscale based on ALB Metrics using KEDA

What is an Application Load Balancer?

An Application Load Balancer (ALB) distributes incoming traffic among multiple applications, which we call servers or instances. An ALB is typically used to route HTTP and HTTPS requests to specific targets, such as Amazon EC2 instances, containers, and IP addresses.

What are the ALB Metrics?

An Application Load Balancer publishes data points to CloudWatch, which lets you retrieve statistics about those data points, known as metrics. These performance and usage metrics are known as ALB metrics.

Some common ALB metrics include:

  1. Request Count: This metric tells us about the total number of requests received by ALB.
  2. HTTP Code Count: This metric tracks the HTTP response codes returned by the ALB, such as 2xx, 3xx, 4xx, and 5xx.
  3. Target Response Time: This metric measures the time taken by the target instances to respond to requests forwarded by the ALB.
  4. Active Connection Count: This metric tracks the number of active connections between the ALB and the target instances.
  5. Target Connection Error Count: This metric counts the number of errors that occur when the ALB tries to establish connections with target instances.
  6. Target Response Error Count: This metric counts the number of errors that occur when the target instances fail to respond to requests forwarded by the ALB.
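All of these metrics are published in CloudWatch under the `AWS/ApplicationELB` namespace, keyed by a `LoadBalancer` dimension. As a rough sketch (the dimension value below is a placeholder; the real one can be copied from the AWS console), the CloudWatch coordinates for the Request Count metric look like this:

```yaml
# Sketch: CloudWatch coordinates of an ALB metric
# (dimensionValue is a placeholder -- copy yours from the AWS console)
namespace: AWS/ApplicationELB
metricName: RequestCount
dimensionName: LoadBalancer
dimensionValue: app/<CustomName>/<alb-id>
metricStat: Sum          # RequestCount is a count, so Sum is the usual statistic
```

These same fields reappear later when we configure the KEDA trigger, which queries CloudWatch using exactly this namespace/metric/dimension triple.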

How to autoscale based on ALB Metrics using KEDA

Now it's time to dive deep into the execution!

To set up the autoscaling, we will use KEDA to scale our workloads based on ALB metrics. To execute all the tasks, we will use Devtron, which has a native integration of KEDA (event-driven autoscaler).

Step-1: Install ALB Controller using Helm Chart

The CRDs can be installed either from Helm charts or by using kubectl. To deploy the controller through the chart, navigate to the Chart Store and search for aws-load-balancer.
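If you prefer the Helm CLI over the Chart Store, the controller chart needs only a couple of values. A minimal values sketch (the cluster name is a placeholder for your own EKS cluster):

```yaml
# Minimal values sketch for the aws-load-balancer-controller chart
clusterName: <YourEKSClusterName>       # required: the EKS cluster the controller manages
serviceAccount:
  create: true
  name: aws-load-balancer-controller    # annotate with an IAM role if you use IRSA
```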

ALB chart

Configure the YAML file and choose the cluster where you want to deploy it. To check that your chart is successfully deployed, search for your app under Helm Apps, or navigate to the Resource Browser to check the controller pod.

Pod of ALB Controller

Step-2: Install KEDA controller from Chart Store

KEDA chart

You have to enable ingress for the application you want to autoscale. To do that, configure the base deployment template of the application and enable the ingress accordingly:

ingress:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: <CustomName>
    alb.ingress.kubernetes.io/healthcheck-port: "80"
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: subnet-id, subnet-id
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: <GroupName>
  className: alb    # Required
  enabled: true
  hosts:
    - host: <YourHostName>
      pathType: Prefix
      paths:
        - /
Ingress configuration

Step-3: Add your AWS credentials through a secret

This is required to fetch load balancer metrics from AWS CloudWatch.

Secret
Note: Make sure you have given the user permission to create a load balancer and to autoscale it.
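A minimal sketch of such a Secret (all names and values below are placeholders; the keys will be referenced by KEDA's TriggerAuthentication in the next step):

```yaml
# Sketch: AWS credentials Secret for KEDA's CloudWatch scaler
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials            # placeholder name, referenced by the KEDA object
  namespace: <AppNamespace>
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <YourAccessKeyID>
  AWS_SECRET_ACCESS_KEY: <YourSecretAccessKey>
```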

Step-4: Create KEDA Scaled Object

Configure KEDA autoscaling in the base deployment template of the application.

KEDA configuration
Note: Make sure you use the correct secret name in the triggerAuthentication of the KEDA object!

You can view your KEDA objects under Custom Resources on the App Details page.
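For reference, a ScaledObject using KEDA's aws-cloudwatch scaler looks roughly like the sketch below. The secret name, deployment name, region, and metric threshold are placeholders or assumptions; adjust them to your setup:

```yaml
# Sketch: TriggerAuthentication + ScaledObject for ALB RequestCount
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: aws-cloudwatch-auth
spec:
  secretTargetRef:
    - parameter: awsAccessKeyID
      name: aws-credentials          # the Secret holding your AWS credentials
      key: AWS_ACCESS_KEY_ID
    - parameter: awsSecretAccessKey
      name: aws-credentials
      key: AWS_SECRET_ACCESS_KEY
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: alb-request-scaler
spec:
  scaleTargetRef:
    name: <YourDeploymentName>       # the Deployment to autoscale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: aws-cloudwatch
      metadata:
        namespace: AWS/ApplicationELB
        metricName: RequestCount
        dimensionName: LoadBalancer
        dimensionValue: app/<CustomName>/<alb-id>   # from the AWS console
        metricStat: Sum
        targetMetricValue: "100"     # assumed threshold: scale out above ~100 requests per window
        minMetricValue: "0"
        awsRegion: us-east-1         # placeholder region
      authenticationRef:
        name: aws-cloudwatch-auth
```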

Step-5: Test your HPA by increasing requests to your application

You will be able to view the number of replicas increasing in your HPA as requests rise. Run this command to continuously send requests to your load balancer:

while true
do
  curl <hostname>
done
Autoscaling resource

Conclusion

To run software applications efficiently and smoothly, they must automatically scale up and down according to traffic. Cloud providers offer load balancers to expose applications, and ALB is one of them. To autoscale based on ALB metrics, KEDA is used: it fetches metrics from AWS CloudWatch and scales the application accordingly.

Handling Kubernetes resources through the command line requires lots of experience, and debugging and troubleshooting take effort. Devtron provides a Kubernetes Dashboard that drives these operations without commands and offers full-stack observability of resources for easy debugging.
