Tags: kubernetes, airflow, kubernetes-helm

Allocate memory requests and limits for the pods that Airflow creates to run tasks in Kubernetes


I have deployed Airflow with the Helm stable/airflow chart: https://github.com/helm/charts/tree/master/stable/airflow

I have deployed it with the Celery Executor, and I have run a task that needs to read a big table. I am getting this error in the pod that runs this task:

The node was low on resource: memory. Container base was using 3642536Ki, which exceeds its request of 0.
I understand that I have exceeded the memory of the node that is running this pod.

I want to modify the memory and CPU limits for each pod Airflow uses. I can see that I can limit the resources of the workers.

My question is:

How can I specify the memory requests and limits for the pods that are going to be created by the workers? I can only see how to set the resources of the workers themselves, not of the pods that the workers create.


Solution

  • I think you may be conflating the Celery Executor with the Kubernetes Executor. The Kubernetes Executor creates a new pod for every task instance, whereas with the Celery Executor you get a preset number of worker pods that run your tasks. In that case you set the number of workers in advance in the chart's values, and setting the workers' resource values should give those pods the resources you require (see the sketch below). The chart's README also has a good example of how to leverage dynamic scaling, if that's what you're after.

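    If you stay on the Celery Executor, a minimal sketch of the relevant values override might look like the following. The `workers.replicas` and `workers.resources` key names are my reading of the stable/airflow chart's values.yaml, so confirm them against the chart version you actually deployed:

    ```yaml
    # values.yaml -- sketch of worker sizing for the stable/airflow chart
    workers:
      replicas: 2            # preset number of Celery worker pods
      resources:
        requests:
          memory: "4Gi"      # scheduler only places a worker on a node with 4Gi free
          cpu: "1"
        limits:
          memory: "8Gi"      # worker is OOM-killed above 8Gi rather than starving the node
          cpu: "2"
    ```

    Apply it with something like `helm upgrade <release-name> stable/airflow -f values.yaml` (the release name is a placeholder). Note that with the Celery Executor your tasks run inside these worker pods, so the big table read has to fit within the worker's limits; separate per-task pods only exist under the Kubernetes Executor.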