Autoscaling Using KEDA Based On Prometheus Metrics


By Shubham Kumar

This blog will discuss how we can autoscale deployments based on Prometheus metrics using KEDA.

What is Prometheus?

Prometheus is an open-source tool used for metrics-based monitoring and alerting. It is a very powerful tool for collecting and querying metric data. It collects data from application services and hosts and stores them in a time-series database. It offers a simple yet powerful data model and a query language (PromQL), and can provide detailed and actionable metrics to analyze an application's performance.
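For example, a PromQL query like the one below (the metric name and label here are illustrative, not specific to this post) returns the per-second rate of HTTP requests averaged over the last two minutes; KEDA runs similar queries against Prometheus in the example later in this post.

sum(rate(http_requests_total{namespace="demo3"}[2m]))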

What is KEDA?

KEDA is a Kubernetes-based Event Driven Autoscaler. It can be installed into any Kubernetes cluster and works alongside standard Kubernetes components such as the Horizontal Pod Autoscaler (HPA). When KEDA is installed, two containers run: keda-operator and keda-operator-metrics-apiserver. The keda-operator scales Deployments, StatefulSets, or Rollouts up from zero when events arrive and back down to zero when there are none. The keda-operator-metrics-apiserver acts as a Kubernetes metrics server, exposing rich event data such as queue length or stream lag to the HPA to drive scale-out.

Installing KEDA (Using Helm)

1. Add helm repo

helm repo add kedacore https://kedacore.github.io/charts

2. Update helm repo

helm repo update

3. Install the KEDA Helm chart

Using Helm 2:

helm install kedacore/keda --namespace keda --version 1.4.2 --name keda

Using Helm 3:

kubectl create namespace keda
helm install keda kedacore/keda --version 1.4.2 --namespace keda
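Once the chart is installed, you can verify that the two KEDA components described above are running in the keda namespace (pod names and ages below are illustrative; the random suffixes will differ in your cluster):

kubectl get pods --namespace keda
# NAME                                               READY   STATUS    RESTARTS   AGE
# keda-operator-7879dcd589-mnpxd                     1/1     Running   0          1m
# keda-operator-metrics-apiserver-5b5f4d5c7c-kqzdn   1/1     Running   0          1m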

Example: a ScaledObject that scales an Argo Rollout in the demo3 namespace based on a Prometheus query.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: demo3
spec:
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: keda-test-demo3
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://<prometheus-host>:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="300-0", container="envoy", namespace="demo3", response_code="200"}
        threshold: "50"
  idleReplicaCount: 0
  minReplicaCount: 1
  maxReplicaCount: 10
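Save the manifest (for example as scaledobject.yaml, a placeholder filename) and apply it. KEDA then creates and manages an HPA for the target behind the scenes, which you can inspect alongside the ScaledObject:

kubectl apply -f scaledobject.yaml
# Check the ScaledObject and the HPA that KEDA created in the demo3 namespace
kubectl get scaledobject -n demo3
kubectl get hpa -n demo3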

In the .spec.triggers section, we provide the information that KEDA uses to trigger autoscaling. Here are some of the parameters that can be used for autoscaling; a minimal sketch using them follows the list.

.spec.triggers.type: Type of the trigger used for scaling (prometheus in this example).
.spec.triggers.metadata: Additional information about the metric used for scaling.
.spec.triggers.metadata.serverAddress: URL of the Prometheus server.
.spec.triggers.metadata.metricName: Name of the Prometheus metric to use for autoscaling.
.spec.triggers.metadata.query: PromQL query to run; its result drives the autoscaling.
.spec.triggers.metadata.threshold: Metric value at which to start scaling the target.
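As a reference point, here is a minimal sketch of a ScaledObject that applies the same trigger parameters to a plain Deployment instead of an Argo Rollout. The nginx-demo Deployment name and the query are placeholder assumptions, not part of the original setup; when kind and apiVersion are omitted from scaleTargetRef, KEDA targets a Deployment by default.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scaledobject
spec:
  scaleTargetRef:
    name: nginx-demo        # kind defaults to Deployment, apiVersion to apps/v1
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://<prometheus-host>:9090
        metricName: http_request_total
        query: sum(rate(http_requests_total{job="nginx-demo"}[2m]))
        threshold: "50"
  minReplicaCount: 1
  maxReplicaCount: 10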
