Tags: python, airflow, amazon-ecs

airflow local_task_job.py - Task exited with return code -9


I am trying to run Apache Airflow v1.10.5 in ECS, using my fork of the apache/airflow image. I am using environment variables to pass the executor, Postgres, and Redis settings to the webserver.

AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://airflow_user:airflow_password@postgres:5432/airflow_db"
AIRFLOW__CELERY__RESULT_BACKEND="db+postgresql://airflow_user:airflow_password@postgres:5432/airflow_db"
AIRFLOW__CELERY__BROKER_URL="redis://redis_queue:6379/1"
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
AIRFLOW__CORE__LOAD_EXAMPLES=False

My tasks randomly fail with the following error:

[2020-01-12 20:06:28,308] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO db.IntegerSplitter: Split size: 134574; Num splits: 5 from: 2 to: 672873
[2020-01-12 20:06:28,449] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO mapreduce.JobSubmitter: number of splits:5
[2020-01-12 20:06:28,459] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
[2020-01-12 20:06:28,964] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1578859373494_0012
[2020-01-12 20:06:29,337] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO impl.YarnClientImpl: Submitted application application_1578859373494_0012
[2020-01-12 20:06:29,371] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO mapreduce.Job: The url to track the job: http://ip-XX-XX-XX-XX.ap-southeast-1.compute.internal:20888/proxy/application_1578859373494_0012/
[2020-01-12 20:06:29,371] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO mapreduce.Job: Running job: job_1578859373494_0012
[2020-01-12 20:06:47,489] {ssh_utils.py:130} WARNING - 20/01/13 01:36:47 INFO mapreduce.Job: Job job_1578859373494_0012 running in uber mode : false
[2020-01-12 20:06:47,490] {ssh_utils.py:130} WARNING - 20/01/13 01:36:47 INFO mapreduce.Job:  map 0% reduce 0%
[2020-01-12 20:06:54,777] {logging_mixin.py:95} INFO - [2020-01-12 20:06:54,777] {local_task_job.py:105} INFO - Task exited with return code -9
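A negative return code from Airflow's `local_task_job.py` means the task process was killed by a signal of that number, so -9 corresponds to SIGKILL. A minimal sketch demonstrating this convention (the `sleep` command stands in for any task process; runs on Linux/macOS):

```python
import signal
import subprocess

# Start a long-running dummy process and kill it with SIGKILL, the same
# signal the kernel OOM killer sends. subprocess reports signal deaths
# as a negative return code: -9 == -signal.SIGKILL.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()
print(proc.returncode)  # -9
```

This matches the log line above: "Task exited with return code -9" indicates the task was SIGKILLed rather than failing on its own.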

But when I check the corresponding application in the EMR UI, it shows as having run successfully.


My ECS config is as follows:

airflow-worker

Hard/Soft memory limits -> 2560/1024

No of workers -> 3

airflow-webserver

Hard/Soft memory limits -> 3072/1024

airflow-scheduler

Hard/Soft memory limits -> 2048/1024

run_duration -> 86400

What is causing this error?


Solution

  • The error was caused by the worker container hitting its hard memory limit, which made the kernel OOM-kill tasks at random (hence the -9, i.e. SIGKILL, return code). I fixed this by increasing the memory limits, sizing them against the memory utilization graph of the old Airflow deployment that ran with the LocalExecutor.

    This is the old deployment's memory utilization graph with respect to the soft memory limit:


    After changing the worker memory config to 2560/5120, this is the memory utilization graph with respect to the soft memory limit:

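For reference, in an ECS task definition the hard limit corresponds to the container's `memory` parameter and the soft limit to `memoryReservation` (both in MiB). A minimal sketch of the updated worker container definition, assuming the 2560/5120 figures above mean soft 2560 / hard 5120 (container name and all other fields are illustrative):

```json
{
  "containerDefinitions": [
    {
      "name": "airflow-worker",
      "memory": 5120,
      "memoryReservation": 2560
    }
  ]
}
```

ECS requires `memory` to be greater than or equal to `memoryReservation`; exceeding `memory` gets the container killed, while `memoryReservation` only guides placement.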