Tags: python, windows, airflow, gspread

ModuleNotFoundError: No module named 'gspread' in Airflow


I am running the default Airflow image, version 2.5.1, on Docker, and I am trying to create a DAG that sends data to a Google Sheet. I already have the credentials and have tested that they work. I've created an env; my OS is Windows and my Python version is 3.10.2. This is my code:

    from airflow import DAG
    from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
    from df2gspread import df2gspread 
    import gspread
    from datetime import datetime

    default_args = {
        'owner': 'airflow',
        'start_date': datetime(2023, 1, 1)
    }

    with DAG(
        dag_id="test",
        start_date=datetime.now(),
        schedule_interval="@daily",
    ) as dag:
        
        @dag.task
        def test_dag():

            # Create a hook object
            # When using the google_cloud_default we can use 
            # hook = GoogleBaseHook()
            # Or for a delegate, use: GoogleBaseHook(delegate_to='foo@bar.com')
            hook = GoogleBaseHook(gcp_conn_id='google_conn_id') 

            # Get the credentials
            credentials = hook.get_credentials()
            print(credentials)

            # Optional, set the delegate email if needed later. 
            # You need a domain wide delegate service account to use this.
            #credentials = credentials.with_subject('foo@bar.com')

            # Use the credentials to authenticate the gspread client
            gc = gspread.Client(auth=credentials)

            # Create a spreadsheet
            gc.create('example') 
            gc.list_spreadsheet_files()
        
        
        test_dag()

And this is my docker-compose.yaml:

    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing,
    # software distributed under the License is distributed on an
    # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    # KIND, either express or implied.  See the License for the
    # specific language governing permissions and limitations
    # under the License.
    #
    
    # Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL.
    #
    # WARNING: This configuration is for local development. Do not use it in a production deployment.
    #
    # This configuration supports basic configuration using environment variables or an .env file
    # The following variables are supported:
    #
    # AIRFLOW_IMAGE_NAME           - Docker image name used to run Airflow.
    #                                Default: apache/airflow:2.5.1
    # AIRFLOW_UID                  - User ID in Airflow containers
    #                                Default: 50000
    # AIRFLOW_PROJ_DIR             - Base path to which all the files will be volumed.
    #                                Default: .
    # Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode
    #
    # _AIRFLOW_WWW_USER_USERNAME   - Username for the administrator account (if requested).
    #                                Default: airflow
    # _AIRFLOW_WWW_USER_PASSWORD   - Password for the administrator account (if requested).
    #                                Default: airflow
    # _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
    #                                Default: ''
    #
    # Feel free to modify this file to suit your needs.
    ---
    version: '3'
    x-airflow-common:
      &airflow-common
      # In order to add custom dependencies or upgrade provider packages you can use your extended image.
      # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
      # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
      image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.5.1}
      # build: .
      environment:
        &airflow-common-env
        AIRFLOW__CORE__EXECUTOR: CeleryExecutor
        AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
        # For backward compatibility, with Airflow <2.3
        AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
        AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
        AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
        AIRFLOW__CORE__FERNET_KEY: ''
        AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
        AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
        AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session'
        _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
      volumes:
        - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
        - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
        - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
      user: "${AIRFLOW_UID:-50000}:0"
      depends_on:
        &airflow-common-depends-on
        redis:
          condition: service_healthy
        postgres:
          condition: service_healthy
    
    services:
      postgres:
        image: postgres:13
        environment:
          POSTGRES_USER: airflow
          POSTGRES_PASSWORD: airflow
          POSTGRES_DB: airflow
        volumes:
          - postgres-db-volume:/var/lib/postgresql/data
        healthcheck:
          test: ["CMD", "pg_isready", "-U", "airflow"]
          interval: 5s
          retries: 5
        restart: always
    
      redis:
        image: redis:latest
        expose:
          - 6379
        healthcheck:
          test: ["CMD", "redis-cli", "ping"]
          interval: 5s
          timeout: 30s
          retries: 50
        restart: always
    
      airflow-webserver:
        <<: *airflow-common
        command: webserver
        ports:
          - 8080:8080
        healthcheck:
          test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
          interval: 10s
          timeout: 10s
          retries: 5
        restart: always
        depends_on:
          <<: *airflow-common-depends-on
          airflow-init:
            condition: service_completed_successfully
    
      airflow-scheduler:
        <<: *airflow-common
        command: scheduler
        healthcheck:
          test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
          interval: 10s
          timeout: 10s
          retries: 5
        restart: always
        depends_on:
          <<: *airflow-common-depends-on
          airflow-init:
            condition: service_completed_successfully
    
      airflow-worker:
        <<: *airflow-common
        command: celery worker
        healthcheck:
          test:
            - "CMD-SHELL"
            - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
          interval: 10s
          timeout: 10s
          retries: 5
        environment:
          <<: *airflow-common-env
          # Required to handle warm shutdown of the celery workers properly
          # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
          DUMB_INIT_SETSID: "0"
        restart: always
        depends_on:
          <<: *airflow-common-depends-on
          airflow-init:
            condition: service_completed_successfully
    
      airflow-triggerer:
        <<: *airflow-common
        command: triggerer
        healthcheck:
          test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
          interval: 10s
          timeout: 10s
          retries: 5
        restart: always
        depends_on:
          <<: *airflow-common-depends-on
          airflow-init:
            condition: service_completed_successfully
    
      airflow-init:
        <<: *airflow-common
        entrypoint: /bin/bash
        # yamllint disable rule:line-length
        command:
          - -c
          - |
            function ver() {
              printf "%04d%04d%04d%04d" $${1//./ }
            }
            airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
            airflow_version_comparable=$$(ver $${airflow_version})
            min_airflow_version=2.2.0
            min_airflow_version_comparable=$$(ver $${min_airflow_version})
            if (( airflow_version_comparable < min_airflow_version_comparable )); then
              echo
              echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
              echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
              echo
              exit 1
            fi
            if [[ -z "${AIRFLOW_UID}" ]]; then
              echo
              echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
              echo "If you are on Linux, you SHOULD follow the instructions below to set "
              echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
              echo "For other operating systems you can get rid of the warning with manually created .env file:"
              echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
              echo
            fi
            one_meg=1048576
            mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
            cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
            disk_available=$$(df / | tail -1 | awk '{print $$4}')
            warning_resources="false"
            if (( mem_available < 4000 )) ; then
              echo
              echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
              echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
              echo
              warning_resources="true"
            fi
            if (( cpus_available < 2 )); then
              echo
              echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
              echo "At least 2 CPUs recommended. You have $${cpus_available}"
              echo
              warning_resources="true"
            fi
            if (( disk_available < one_meg * 10 )); then
              echo
              echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
              echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
              echo
              warning_resources="true"
            fi
            if [[ $${warning_resources} == "true" ]]; then
              echo
              echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
              echo "Please follow the instructions to increase amount of resources available:"
              echo "   https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
              echo
            fi
            mkdir -p /sources/logs /sources/dags /sources/plugins
            chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
            exec /entrypoint airflow version
        # yamllint enable rule:line-length
        environment:
          <<: *airflow-common-env
          _AIRFLOW_DB_UPGRADE: 'true'
          _AIRFLOW_WWW_USER_CREATE: 'true'
          _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
          _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
          _PIP_ADDITIONAL_REQUIREMENTS: ''
        user: "0:0"
        volumes:
          - ${AIRFLOW_PROJ_DIR:-.}:/sources
    
      airflow-cli:
        <<: *airflow-common
        profiles:
          - debug
        environment:
          <<: *airflow-common-env
          CONNECTION_CHECK_MAX_COUNT: "0"
        # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
        command:
          - bash
          - -c
          - airflow
    
      # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
      # or by explicitly targeted on the command line e.g. docker-compose up flower.
      # See: https://docs.docker.com/compose/profiles/
      flower:
        <<: *airflow-common
        command: celery flower
        profiles:
          - flower
        ports:
          - 5555:5555
        healthcheck:
          test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
          interval: 10s
          timeout: 10s
          retries: 5
        restart: always
        depends_on:
          <<: *airflow-common-depends-on
          airflow-init:
            condition: service_completed_successfully
    
    volumes:
      postgres-db-volume:

When I start Airflow on localhost, I keep getting an import error on my DAG: ModuleNotFoundError: No module named 'gspread'.

I've read several questions about this issue here on Stack Overflow, and I've already tried creating an env and installing the library with pip install gspread and pip3 install gspread, but none of that has worked.


Solution

  • Your issue is that the Airflow Docker image does not contain the package gspread.

    When the containers start, Airflow tries to load your DAG code; at that point your code tries to import gspread, but since the package is not installed inside the Docker image, the import fails.

    In a nutshell: with the stock Airflow Docker image you can only use Airflow itself and the packages it ships with, and gspread is not one of them.
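
    You can confirm this by checking for the package inside one of the running containers, for example (airflow-scheduler is one of the service names from your docker-compose file; any Airflow service will do):

    docker-compose exec airflow-scheduler pip show gspread

    If the package is missing, pip will report that it was not found.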

    What you need to do is create a file named Dockerfile in the directory where you placed your docker-compose.yaml.

    Use the Airflow image as the base image (with the instruction FROM apache/airflow:2.5.1).

    Then install gspread with the instruction RUN pip install gspread.
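
    A minimal Dockerfile along those lines could look like this (df2gspread is included only because your DAG imports it; drop it if you don't need it):

    FROM apache/airflow:2.5.1

    # Extra Python packages that the DAG imports but the stock image does not ship
    RUN pip install --no-cache-dir gspread df2gspread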

    Then save the file and update your docker-compose.yaml so it uses your custom image instead of the original one.

    For that, simply edit the docker-compose file and comment out the line below:

    image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.5.1}
    

    Then uncomment the line below:

    build: .
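
    After saving both files, rebuild the image and restart the stack so that every service picks up your custom image, for example:

    docker-compose build
    docker-compose up -d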
    

    If you run into any issues, come back here and we'll adjust as needed.