Tags: apache-spark, kubernetes, airflow, spark-operator

How to send args to SparkKubernetesOperator in Airflow


I have an Airflow DAG running on Kubernetes with Spark. How can I send AWS credentials to a Spark application using the SparkKubernetesOperator?

In my DAG file I get the credentials from Airflow connections, for example:

from airflow.hooks.base import BaseHook
aws_conn = BaseHook.get_connection('aws_conn')
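
The access key and secret are typically available on that connection object (assuming the connection stores the key ID in login and the secret in password, which is the usual convention for AWS connections):

aws_access_key_id = aws_conn.login          # access key ID (assumed stored as login)
aws_secret_access_key = aws_conn.password   # secret access key (assumed stored as password)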

How can I pass this aws_conn to the Spark application through the operator?

transformation = SparkKubernetesOperator(
    task_id='spark_transform_frete_new',
    namespace='airflow',
    application_file='spark/spark_transform_frete_new.yaml',
    kubernetes_conn_id='kubernetes_default',
    do_xcom_push=True,
)

The YAML file:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: "dag-example-spark-{{ macros.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") }}-{{ task_instance.try_number }}"
  namespace: airflow
spec:
  timeToLiveSeconds: 30
  volumes:
    - name: ivy
      persistentVolumeClaim:
        claimName: dags-volume-pvc
    - name: logs
      persistentVolumeClaim:
        claimName: logs-volume-pvc
  sparkConf:
    spark.jars.packages: "org.apache.hadoop:hadoop-aws:3.2.0,org.apache.spark:spark-avro_2.12:3.0.1"
    spark.driver.extraJavaOptions: "-Divy.cache.dir=/tmp -Divy.home=/tmp"
    "spark.kubernetes.local.dirs.tmpfs": "true"
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "/logs/spark/"
  hadoopConf:
    fs.s3a.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
  type: Python
  pythonVersion: "3"
  mode: cluster
  image: "myimagespark/spark-dev"
  imagePullPolicy: Always
  mainApplicationFile: local:///dags/dag_example_python_spark/src/spark/spark_transform_frete_new.py 
  sparkVersion: "3.1.1"
  restartPolicy:
    type: Never
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "4g"
    labels:
      version: 3.1.1
    serviceAccount: spark
    volumeMounts:
      - name: ivy
        mountPath: /dags
      - name: logs
        mountPath: /logs/spark/
  executor:
    cores: 2
    instances: 2
    memory: "3g"
    labels:
      version: 3.1.1
    volumeMounts:
      - name: ivy
        mountPath: /dags
      - name: logs
        mountPath: /logs/spark/

Solution

  • You can send arguments through the Jinja-templated application file (application_file is a templated field of SparkKubernetesOperator), like this:

    mainApplicationFile: local:///dags/dag_example_python_spark/src/spark/spark_transform_frete_new.py
    arguments: {{ params.ARGUMENTS }}
    

    And from the DAG you can feed ARGUMENTS into params however you want.
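
    For example, a minimal DAG-side sketch, assuming the YAML template above contains arguments: {{ params.ARGUMENTS }}; the flag names --aws-access-key and --aws-secret-key are placeholders and should match whatever your Spark script actually parses, and the import path may differ depending on your Airflow/provider version:

    from airflow.hooks.base import BaseHook
    from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator

    aws_conn = BaseHook.get_connection('aws_conn')

    transformation = SparkKubernetesOperator(
        task_id='spark_transform_frete_new',
        namespace='airflow',
        application_file='spark/spark_transform_frete_new.yaml',
        kubernetes_conn_id='kubernetes_default',
        do_xcom_push=True,
        # params is available in the Jinja context when application_file is rendered,
        # so {{ params.ARGUMENTS }} in the YAML is replaced with this list
        params={
            'ARGUMENTS': ['--aws-access-key', aws_conn.login,
                          '--aws-secret-key', aws_conn.password],
        },
    )

    A Python list rendered by Jinja appears as its repr, which YAML reads as a flow sequence; if quoting becomes an issue, pass each value as its own param and list them individually under arguments in the YAML.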