Tags: streaming, artificial-intelligence, google-cloud-dataflow, apache-beam, setup.py

How to deploy a Google dataflow worker with a file loaded into memory?


I am trying to deploy a Google Dataflow streaming job for use in my machine learning streaming pipeline, but cannot seem to deploy the worker with a file already loaded into memory. Currently, I have set up the job to pull a pickle file from a GCS bucket, load it into memory, and use it for model prediction. But this happens on every cycle of the job, i.e. the pipeline pulls from GCS every time a new object enters it, which makes the current execution of the pipeline much slower than it needs to be.

What I really need is a way to allocate a variable within the worker nodes when each worker is set up, and then use that variable within the pipeline without having to reload it on every execution of the pipeline.

Is there a way to do this step before the job is deployed, something like

with open('model.pkl', 'rb') as file:
    pickle_model = pickle.load(file)

But within my setup.py file?

##### based on - https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/complete/juliaset/setup.py


"""Setup.py module for the workflow's worker utilities.
All the workflow related code is gathered in a package that will be built as a
source distribution, staged in the staging area for the workflow being run and
then installed in the workers when they start running.
This behavior is triggered by specifying the --setup_file command line option
when running the workflow for remote execution.
"""

# pytype: skip-file

from __future__ import absolute_import
from __future__ import print_function

import subprocess
from distutils.command.build import build as _build  # type: ignore

import setuptools


# This class handles the pip install mechanism.
class build(_build):  # pylint: disable=invalid-name
    """A build command class that will be invoked during package install.
    The package built using the current setup.py will be staged and later
    installed in the worker using `pip install package'. This class will be
    instantiated during install for this specific scenario and will trigger
    running the custom commands specified.
    """
    sub_commands = _build.sub_commands + [('CustomCommands', None)]



CUSTOM_COMMANDS = [
    ['pip', 'install', 'scikit-learn==0.23.1'],
    ['pip', 'install', 'google-cloud-storage'],
    ['pip', 'install', 'mlxtend'],
]



class CustomCommands(setuptools.Command):
    """A setuptools Command class able to run arbitrary commands."""
    def initialize_options(self):
        pass
    
    def finalize_options(self):
        pass

    def RunCustomCommand(self, command_list):
        print('Running command: %s' % command_list)
        p = subprocess.Popen(
            command_list,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT)
        # Can use communicate(input='y\n'.encode()) if the command run requires
        # some confirmation.
        stdout_data, _ = p.communicate()
        print('Command output: %s' % stdout_data)
        if p.returncode != 0:
            raise RuntimeError(
                'Command %s failed: exit code: %s' % (command_list, p.returncode))

    def run(self):
        for command in CUSTOM_COMMANDS:
            self.RunCustomCommand(command)


REQUIRED_PACKAGES = [
    'google-cloud-storage',
    'mlxtend',
    'scikit-learn==0.23.1',
]

setuptools.setup(
    name='ML pipeline',
    version='0.0.1',
    description='ML set workflow package.',
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    cmdclass={
        'build': build,
        'CustomCommands': CustomCommands,
    })
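As the docstring above notes, this package is only built and staged to the workers when the pipeline is launched with the --setup_file option. A minimal sketch of setting that option programmatically (the runner, project, region, and bucket values below are placeholders, not values from the question):

from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project',                  # placeholder
    region='us-central1',                  # placeholder
    temp_location='gs://my-bucket/temp',   # placeholder
    streaming=True,
)
# Point the workers at this setup.py so the package gets built and installed.
options.view_as(SetupOptions).setup_file = './setup.py'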

Snippet of current ML load mechanism:

class MlModel(beam.DoFn):
    def __init__(self):
        self._model = None
        from google.cloud import storage
        import pandas as pd
        import pickle as pkl
        self._storage = storage
        self._pkl = pkl
        self._pd = pd
        
    def process(self,element):
        if self._model is None:
            bucket = self._storage.Client().get_bucket(myBucket)
            blob = bucket.get_blob(myBlob)
            self._model = self._pkl.loads(blob.download_as_string())

        new_df = self._pd.read_json(element, orient='records').iloc[:, 3:-1] 
        predict = self._model.predict(new_df)
        df = self._pd.DataFrame(data=predict, columns=["A", "B"])
        A = df.iloc[0]['A']
        B = df.iloc[0]['B']
        d = {'A':A, 'B':B}
        return [d] 

Solution

  • You can use the setup() method of your MlModel DoFn to load your model once and then use it in your process() method. setup() is called when the DoFn instance is initialized on the worker, rather than for every element; see the sketch below.

    I wrote a similar answer here

    HTH
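A minimal sketch of what that looks like for the DoFn from the question. The bucket_name and blob_name constructor arguments are placeholders standing in for myBucket and myBlob, and the in-method imports mirror the style of the original snippet:

import apache_beam as beam


class MlModel(beam.DoFn):
    def __init__(self, bucket_name, blob_name):
        self._bucket_name = bucket_name
        self._blob_name = blob_name
        self._model = None

    def setup(self):
        # Called once when this DoFn instance is initialized on the worker,
        # so the GCS download and unpickling no longer run per element.
        import pickle as pkl
        from google.cloud import storage
        bucket = storage.Client().get_bucket(self._bucket_name)
        blob = bucket.get_blob(self._blob_name)
        self._model = pkl.loads(blob.download_as_string())

    def process(self, element):
        import pandas as pd
        new_df = pd.read_json(element, orient='records').iloc[:, 3:-1]
        predict = self._model.predict(new_df)
        df = pd.DataFrame(data=predict, columns=["A", "B"])
        yield {'A': df.iloc[0]['A'], 'B': df.iloc[0]['B']}

The pipeline would then construct the DoFn as MlModel(myBucket, myBlob), and each element only runs the prediction, not the model download.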