Tags: kubernetes, scaling, kubernetes-jobs

Is it possible to have a worker pool for Kubernetes Jobs to avoid pod creation time?


As of now, I'm spinning up an individual K8s Job for each processing task. Some of these tasks require significant CPU/memory, but others are fairly simple and could easily be handled with in-memory processing. A simple task that takes a few milliseconds in-memory is much slower by comparison when run as a K8s Job, because of the pod creation time.

I'm wondering if it is possible to have something like a worker pool dedicated to these less-intensive tasks, so they don't incur the overhead of the K8s Job pod creation time. For example, if I could have 5 pods already created and idling, waiting for tasks, they could quickly pick up incoming requests for processing (without waiting for a pod to spin up). If these pods could not keep up with the number of incoming tasks, ideally they would autoscale to accommodate the extra processing. I couldn't find clear documentation for what I'm trying to do, so any help would be appreciated. Thanks!


Solution

  • There is no such thing built in; the smallest unit in Kubernetes is a Pod, which is either started directly (create pod) or controlled by another resource, e.g. a ReplicaSet, Job, or CronJob. (A sketch of creating a Job programmatically appears at the end of this answer.)

    The pod creation time should be fairly small if the image is already present on the worker node. Are your workers terminating after each job because of an autoscaler? I am not sure what your use case is exactly, or what startup time you would consider 'small enough'. It also depends heavily on whether the different jobs need different environments to run in.

    You could deploy a queue service (e.g. RabbitMQ), create tasks by publishing messages to that queue, and deploy long-running workers that consume from it. Frameworks like dramatiq make this quite easy and take care of all the queue handling (see the worker sketch at the end of this answer). There are also solutions that scale a Kubernetes Deployment based on a custom metric, e.g. https://github.com/kedacore/keda; this would cover autoscaling if jobs pile up in the queue.

    If you don't want to code it yourself, you could look into an open-source automation server, e.g. Jenkins. Usually you would have Jenkins start pods in your cluster on demand, but you could also add some static worker nodes that execute jobs right away. (Autoscaling might be an issue in that approach, but it's definitely possible.)
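
For reference, here is a minimal sketch of the per-task approach described in the question: creating a Job programmatically with the official kubernetes Python client. The namespace, image, and Job name are illustrative placeholders, not taken from the question.

```python
# One Kubernetes Job (and thus one pod) per task, created via the
# official Python client. Names and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="example-task"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(name="task", image="example/task:latest"),
                ],
            )
        )
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

Every pod this creates has to be scheduled and started, which is exactly the overhead the question wants to avoid for small tasks.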
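
And here is a minimal sketch of the queue-based worker pool using dramatiq, as suggested above. The broker URL and task body are assumptions for illustration; adjust them to your cluster.

```python
# Long-running workers consume tasks from RabbitMQ instead of
# spinning up a new Job (and pod) per task.
import dramatiq
from dramatiq.brokers.rabbitmq import RabbitmqBroker

# Hypothetical in-cluster RabbitMQ service URL.
broker = RabbitmqBroker(url="amqp://guest:guest@rabbitmq.default.svc:5672")
dramatiq.set_broker(broker)

@dramatiq.actor
def process_task(payload):
    # Placeholder for the actual lightweight, in-memory processing.
    print(f"processing {payload}")

# Producers enqueue work instead of creating a Job:
#   process_task.send("some-task-payload")
```

The workers would run as a long-lived Deployment (started with something like `dramatiq my_module --processes 1 --threads 5`), so idle pods are already up and pick tasks off the queue immediately; KEDA could then scale that Deployment based on queue length.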