Running Low CPU Utilisation Jobs with KEDA

April 8, 2022
Francois du Toit

At Kohort, we publish messages to queues that are consumed by Kubernetes (k8s) pods. As these messages are consumed, the average CPU usage of the pods rises, which triggers the Horizontal Pod Autoscaler (HPA) to create more pods. Simply put, if the average CPU usage increases, the number of pods increases, and vice versa.

[Image] The k8s HPA monitors the average CPU usage in the cluster to determine whether more pods should be created.
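For context, a minimal sketch of this kind of CPU-based autoscaling configuration (the deployment name and thresholds below are illustrative, not our production values):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-consumer-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-consumer          # hypothetical consumer deployment
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU exceeds 70%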

During a recent architecture change we offloaded a substantial amount of the computational work in some of our longer-running processes from the pods to our data warehouse. This decrease in computational effort on the pods had an unforeseen consequence: k8s would detect the drop in average CPU utilisation and prematurely remove these seemingly idle pods, even though their tasks were not yet finished, resulting in a lot of re-processing.

[Image] Of the three pod states illustrated (based on CPU usage and task status), only the pod with low CPU usage and completed tasks is viable for pruning. However, when CPU usage is low, k8s will remove the pod even if its tasks have not yet completed.

Solution

After investigating multiple strategies we came to two conclusions. Firstly, we have to scale our resources on metrics other than CPU usage. Secondly, some of our processes are better suited to k8s jobs: a k8s job has the advantage that it does not have to be monitored externally to determine when the work is done; it simply exits when it has completed its workload.
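For illustration, a minimal k8s Job of this kind might look like the following (the name and image are placeholders); the job runs the workload once and completes when the container exits successfully:

apiVersion: batch/v1
kind: Job
metadata:
  name: process-batch                              # hypothetical name
spec:
  backoffLimit: 2                                  # retry a failed pod at most twice
  template:
    spec:
      containers:
      - name: worker
        image: registry.example.com/worker:latest  # placeholder image
      restartPolicy: Never                         # the container runs to completion; no restarts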

We discovered that KEDA (Kubernetes Event-driven Autoscaling) meets both requirements. Not only does KEDA allow us to scale resources based on a wide range of scalers beyond CPU, it also allows us to start either k8s jobs or pods.

[Image] KEDA offers several scalers that can be used to scale either deployments or jobs.

Implementation

Setting up KEDA proved to be a very simple process. A basic Helm chart installation got all the required KEDA components and plugins ready for action. After that we configured a ScaledJob (for our solution we are using the KEDA Amazon SQS scaler) and updated our code so that it works within the context of a k8s job instead of a k8s deployment. Finally, we set the queueLength to 1, which deploys a single k8s job per message in the queue, giving us a one-to-one mapping between messages and jobs; a sketch follows below.
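A minimal sketch of such a ScaledJob, assuming KEDA is already installed in the cluster (the name, image, queue URL, and region below are placeholders, not our actual configuration):

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: message-processor                                # hypothetical name
spec:
  pollingInterval: 30                                    # check the queue every 30 seconds
  maxReplicaCount: 100                                   # cap on concurrent jobs
  jobTargetRef:
    template:
      spec:
        containers:
        - name: processor
          image: registry.example.com/processor:latest   # placeholder image
        restartPolicy: Never
  triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.eu-west-1.amazonaws.com/000000000000/work-queue   # placeholder queue
      queueLength: "1"                                   # one k8s job per message
      awsRegion: eu-west-1

With queueLength set to "1", KEDA targets one job per visible message, which is what gives us the one-to-one mapping described above.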

Known Issues

KEDA has been serving us well for the last couple of months with very few problems. On occasion KEDA seems to “lock up” after a ScaledJob is updated; simply re-running the deployment CI/CD job resolves the issue. Unfortunately, we have not yet been able to identify the root cause.

--

In summary, we are extremely happy with KEDA. It made the transition to using k8s jobs much easier than we expected.
