Tags: python, firebase, google-cloud-functions, google-cloud-storage, gcloud

gcloud mistakes event trigger for storage trigger


There's a Cloud Function in Python that processes some data when a file is uploaded to a Firebase Storage bucket:

from firebase_functions import storage_fn

# timeout_sec, memory and cpu are configuration values defined elsewhere
@storage_fn.on_object_finalized(bucket="my-bucket", timeout_sec=timeout_sec, memory=memory, cpu=cpu, region="us-central1")
def validate_file_upload(event: storage_fn.CloudEvent[storage_fn.StorageObjectData]):
    process(event)

When it's deployed via the Firebase CLI, the function works properly:

firebase deploy --only functions:validate_file_upload

However, the same function can also be deployed via gcloud:

gcloud functions deploy validate_file_upload \
    --gen2 \
    --region=us-central1 \
    ....... \
    --entry-point=validate_file_upload \
    --trigger-event-filters="type=google.cloud.storage.object.v1.finalized" \
    --trigger-event-filters="bucket=my-bucket"

When the function deployed this way is triggered, it fails with:

TypeError: validate_file_upload() takes 1 positional argument but 2 were given

The reason is that when the function is deployed via Firebase, GCP sends a single Eventarc CloudEvent object as the argument to the function, but when it's deployed via gcloud it sends two, (data, context), which naturally causes the exception.
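
For comparison, here's a minimal sketch of the two calling conventions (the handler names and bodies are illustrative, not from the question):

# Legacy background-function style: two positional arguments.
def handler_background(data, context):
    ...  # data holds the event payload, context the event metadata

# CloudEvent style: a single event argument, as the documentation describes.
def handler_cloudevent(cloud_event):
    ...  # cloud_event.data holds the Cloud Storage object metadata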

Even the documentation states there should be only one argument:

A Cloud Storage trigger is implemented as a CloudEvent function, in which the Cloud Storage event data is passed to your function in the CloudEvents format

https://cloud.google.com/functions/docs/calling/storage

How can I make sure that the gcloud deployment uses the correct function signature?


Solution

  • The answer is to stack two decorators:

    import functions_framework
    from cloudevents.http import CloudEvent

    from firebase_functions import storage_fn

    # The inner decorator registers the Firebase trigger; the outer one
    # registers the handler with the Functions Framework, so the single
    # CloudEvent argument is accepted either way.
    @functions_framework.cloud_event
    @storage_fn.on_object_finalized(bucket="my-bucket", timeout_sec=timeout_sec, memory=memory, cpu=cpu, region="us-central1")
    def validate_file_upload(event: CloudEvent):
        process(event)
    
    

    This way it'll work whether it's deployed via gcloud or the Firebase CLI.
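
    As an illustration only (process and the field access below are assumptions, not part of the question), the handler might read the object metadata like this; note that the payload shape differs slightly between the two SDKs:

    def process(event) -> None:
        # firebase_functions wraps the payload in a StorageObjectData with
        # attributes, while a raw cloudevents CloudEvent carries a plain dict.
        data = event.data
        bucket = data["bucket"] if isinstance(data, dict) else data.bucket
        name = data["name"] if isinstance(data, dict) else data.name
        print(f"Validating gs://{bucket}/{name}")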

    Also, I've noticed that if the function was originally deployed with an HTTP trigger, as opposed to a bucket trigger, it stays marked as HTTP in GCP until you delete it and deploy it anew. A simple redeployment leaves the function marked as HTTP (although in the Triggers tab it'll show an Eventarc bucket trigger as expected).
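
    If you hit that, deleting the function clears the stale trigger type (the name and region below just mirror the example above):

    gcloud functions delete validate_file_upload --region=us-central1 --gen2

    and then run the deploy command from the question again.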

    [Screenshot: Cloud Functions overview showing Bucket and HTTP trigger types]