
Autoscaling Using KEDA Based On Prometheus Metrics

In this blog, we discuss how to autoscale Kubernetes deployments based on Prometheus metrics using KEDA.

What is Prometheus?

Prometheus is an open-source, metrics-based monitoring and alerting tool. It collects metrics from application services and hosts and stores them in a time-series database. It offers a simple yet powerful data model and a query language, PromQL, which provides detailed and actionable metrics for analysing application performance.
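For example, a PromQL query like the following (the metric and label names here are purely illustrative) returns the per-second rate of HTTP requests over the last five minutes:

```promql
# Per-second request rate over the last 5 minutes,
# summed across all instances of a hypothetical service.
sum(rate(http_requests_total{job="my-service"}[5m]))
```

KEDA uses exactly this kind of query result, compared against a threshold, to decide when to scale.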

What is KEDA?

KEDA (Kubernetes-based Event Driven Autoscaler) can be installed into any Kubernetes cluster and works alongside standard Kubernetes components such as the Horizontal Pod Autoscaler (HPA). When KEDA is installed, two containers run: keda-operator and keda-operator-metrics-apiserver. The role of keda-operator is to scale Deployments, StatefulSets, or Rollouts from zero when events arrive and back to zero when there are none. The keda-operator-metrics-apiserver acts as a Kubernetes metrics server, exposing rich event data such as queue length or stream lag to the HPA to drive scale-out.

Installing KEDA (Using Helm)

1. Add the Helm repo

helm repo add kedacore https://kedacore.github.io/charts

2. Update the Helm repo

helm repo update

3. Install the KEDA Helm chart

Using Helm 2:

helm install kedacore/keda --namespace keda --version 1.4.2 --name keda

Using Helm 3:

kubectl create namespace keda
helm install keda kedacore/keda --version 1.4.2 --namespace keda
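After installation, you can confirm that the two KEDA components described earlier are running (pod names may vary slightly between chart versions):

```shell
# List the KEDA pods; you should see keda-operator and
# keda-operator-metrics-apiserver in a Running state.
kubectl get pods -n keda
```

If both pods are up, KEDA is ready to serve metrics to the HPA.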


Next, we create a ScaledObject that tells KEDA which workload to scale and which Prometheus query to watch:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: demo3
spec:
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: keda-test-demo3
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://<prometheus-host>:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="300-0", container="envoy", namespace="demo3", response_code="200"}
        threshold: "50"
  idleReplicaCount: 0
  minReplicaCount: 1
  maxReplicaCount: 10
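Assuming the manifest above is saved as scaledobject.yaml (a filename chosen here for illustration), it can be applied and the autoscaling resources KEDA creates inspected with:

```shell
# Apply the ScaledObject in the demo3 namespace.
kubectl apply -f scaledobject.yaml -n demo3

# Check the ScaledObject and the HPA that KEDA creates from it.
kubectl get scaledobject -n demo3
kubectl get hpa -n demo3
```

Once applied, KEDA continuously evaluates the PromQL query and adjusts replicas between the configured minimum and maximum.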

In the .spec.triggers section, we provide the information that KEDA uses to trigger autoscaling. Here are some parameters that can be used:

Parameters and descriptions:

- .spec.triggers.type: Type of the trigger used for scaling (prometheus in this example).
- .spec.triggers.metadata: Additional information about the metric used for scaling.
- .spec.triggers.metadata.serverAddress: URL of the Prometheus server.
- .spec.triggers.metadata.metricName: Name of the Prometheus metric to be used for autoscaling.
- .spec.triggers.metadata.query: PromQL query whose result drives the autoscaling decision.
- .spec.triggers.metadata.threshold: Metric value at which scaling starts.