Tags: python, google-cloud-dataflow, apache-beam

Why is Apache Beam `DoFn.setup()` called more than once after worker startup?


I am currently experimenting with a streaming Dataflow pipeline (in Python). I read a stream of data which I'd like to write to a Postgres Cloud SQL instance. To do so, I am looking for a proper place to create the database connection. Since I am writing the data with a ParDo, I thought DoFn.setup() would be a good place.

According to multiple resources, this should be a good place, since setup() is supposed to be called only once, when the worker starts.

I ran some tests, but it seems that setup() is called far more often than just on initialization of the worker. It seems to run just as often as start_bundle() (which runs once per bundle, i.e., every so many elements).

I created a simple pipeline that reads some messages from PubSub, extracts an object's filename and outputs the filename. Besides that, it logs the times that setup() and start_bundle() are being called:

import argparse
import logging
from datetime import datetime

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

setup_counter = 0
bundle_counter = 0

class GetFileName(beam.DoFn):
    """
    Generate file path from PubSub message attributes
    """
    def _now(self):
        return datetime.now().strftime("%Y/%m/%d %H:%M:%S")

    def setup(self):
        global setup_counter

        moment = self._now()
        logging.info("setup() called %s" % moment)

        setup_counter += 1
        logging.info(f"setup_counter = {setup_counter}")

    def start_bundle(self):
        global bundle_counter

        moment = self._now()
        logging.info("Bundle started %s" % moment)

        bundle_counter += 1
        logging.info(f"Bundle_counter = {bundle_counter}")

    def process(self, element):
        attr = dict(element.attributes)

        objectid = attr["objectId"]

        # not sure if this is the prettiest way to create this uri, but works for the poc
        path = f'{objectid}'

        yield path


def run(input_subscription, pipeline_args=None):

    pipeline_options = PipelineOptions(
        pipeline_args, streaming=True
    )

    with beam.Pipeline(options=pipeline_options) as pipeline:

        files = (pipeline
                 | "Read from PubSub" >> beam.io.ReadFromPubSub(subscription=input_subscription,
                                                                with_attributes=True)
                 | "Get filepath" >> beam.ParDo(GetFileName())
                )

        files | "Print results" >> beam.Map(logging.info)


if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input_subscription",
        dest="input_subscription",
        required=True,
        help="The Cloud Pub/Sub subscription to read from."
    )

    known_args, pipeline_args = parser.parse_known_args()

    run(
        known_args.input_subscription,
        pipeline_args
    )

Based on this, I would expect setup() to be logged only once (after starting the pipeline) and start_bundle() an arbitrary number of times when running this job on the DirectRunner.

However, setup() seems to be called just as often as start_bundle().

Looking at the logs:

python main.py \
>     --runner DirectRunner \
>     --input_subscription <my_subscription> \
>     --direct_num_workers 1 \
>     --streaming true
...
INFO:root:setup() called 2022/11/16 15:11:13
INFO:root:setup_counter = 1
INFO:root:Bundle started 2022/11/16 15:11:13
INFO:root:Bundle_counter = 1
INFO:root:avro/20221116135543584-hlgeinp.avro
INFO:root:avro/20221116135543600-hlsusop.avro
INFO:root:avro/20221116135543592-hlmvtgp.avro
INFO:root:avro/20221116135543597-hlsuppp.avro
INFO:root:avro/20221116135553122-boevtdp.avro
INFO:root:avro/20221116135553126-bomipep.avro
INFO:root:avro/20221116135553127-hlsuppp.avro
INFO:root:avro/20221116135155024-boripep.avro
INFO:root:avro/20221116135155020-bolohdp.avro
INFO:root:avro/20221116135155029-hlmvaep.avro
...
INFO:root:setup() called 2022/11/16 15:11:16
INFO:root:setup_counter = 2
INFO:root:Bundle started 2022/11/16 15:11:16
INFO:root:Bundle_counter = 2
INFO:root:high-volume/20221112234700584-hlprenp.avro
INFO:root:high-volume/20221113011240903-hlprenp.avro
INFO:root:high-volume/20221113010654305-hlprenp.avro
INFO:root:high-volume/20221113010822785-hlprenp.avro
INFO:root:high-volume/20221113010927402-hlprenp.avro
INFO:root:high-volume/20221113011248805-hlprenp.avro
INFO:root:high-volume/20221112234730001-hlprenp.avro
INFO:root:high-volume/20221112234738994-hlprenp.avro
INFO:root:high-volume/20221113010956395-hlprenp.avro
INFO:root:high-volume/20221113011648293-hlprenp.avro
...
INFO:root:setup() called 2022/11/16 15:11:18
INFO:root:setup_counter = 3
INFO:root:Bundle started 2022/11/16 15:11:18
INFO:root:Bundle_counter = 3
INFO:root:high-volume/20221113012008604-hlprenp.avro
INFO:root:high-volume/20221113011337394-hlprenp.avro
INFO:root:high-volume/20221113011307598-hlprenp.avro
INFO:root:high-volume/20221113011345403-hlprenp.avro
INFO:root:high-volume/20221113012000982-hlprenp.avro
INFO:root:high-volume/20221113011712190-hlprenp.avro
INFO:root:high-volume/20221113011640005-hlprenp.avro
INFO:root:high-volume/20221113012751380-hlprenp.avro
INFO:root:high-volume/20221113011914286-hlprenp.avro
INFO:root:high-volume/20221113012439206-hlprenp.avro

Can someone clarify this behavior? I am wondering whether my understanding of setup()'s functionality is incorrect, or whether this can be explained in another way. Based on this test, setup() does not seem like a great place to set up a DB connection.


Solution

  • According to the Beam documentation, the setup method can be invoked more than once:

    DoFn.setup(): Called whenever the DoFn instance is deserialized on the worker.
    This means it can be called more than once per worker because multiple instances
    of a given DoFn subclass may be created (e.g., due to parallelization, or due to
    garbage collection after a period of disuse). This is a good place to connect to
    database instances, open network connections or other resources.
    

    Even so, setup() remains the best place to instantiate a database connection or connection pool.

    Correspondingly, teardown() is the best place to close those connections, once per DoFn instance.

    DoFn.teardown(): Called once (as a best effort) per DoFn instance when the DoFn
    instance is shutting down. This is a good place to close database instances,
    close network connections or other resources.

    Note that teardown is called as a best effort and is not guaranteed. For example,
    if the worker crashes, teardown might not be called.
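
    To make this concrete, here is a minimal sketch of the setup()/teardown() pattern, assuming psycopg2 as the driver; the DSN and the filenames table are placeholders for illustration, not part of the original pipeline:

    import apache_beam as beam
    import psycopg2  # assumed Postgres driver; any DB-API client follows the same pattern


    class WriteToPostgres(beam.DoFn):
        """Sketch: open a connection once per DoFn instance, close it on teardown."""

        def __init__(self, dsn):
            self.dsn = dsn  # placeholder, e.g. "host=... dbname=... user=... password=..."
            self.connection = None

        def setup(self):
            # Called whenever the DoFn instance is deserialized on the worker:
            # possibly more than once per worker, but far less often than
            # start_bundle() or process().
            self.connection = psycopg2.connect(self.dsn)

        def process(self, element):
            # 'filenames' is a hypothetical table, used only for illustration.
            with self.connection.cursor() as cursor:
                cursor.execute("INSERT INTO filenames (path) VALUES (%s)", (element,))
            self.connection.commit()
            yield element

        def teardown(self):
            # Best effort only: this may never run if the worker crashes, so use
            # it for graceful cleanup, not for correctness-critical work.
            if self.connection is not None:
                self.connection.close()

    If you additionally want the multiple DoFn instances within one worker process to share a single client, Beam provides the apache_beam.utils.shared.Shared helper for that purpose, but a per-instance connection as above is usually good enough.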