Tags: google-cloud-platform, google-bigquery, google-cloud-dataflow, google-cloud-dataproc

GCP Dataflow, Dataproc, BigQuery


I am selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. I want to minimize service costs. I also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should I do?

A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.

B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.

C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.

D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.


Solution

  • C!

    Use Cloud Dataflow to read from Pub/Sub, transform the JSON messages, and write the resulting rows into BigQuery. You can monitor the ETL pipeline directly from the Dataflow console and use Stackdriver (Cloud Monitoring) on top of it; Stackdriver can also trigger alerts and downstream actions, for example on the job's system lag. A minimal pipeline sketch follows this answer.

    Use the default autoscaling setting for worker instances to minimize manual intervention. Once this solution is set up correctly, it needs essentially no ongoing work; the monitoring sketch below the pipeline code shows one way to keep an eye on system lag.
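
A minimal Apache Beam (Python SDK) sketch of option C, assuming placeholder project, region, subscription, bucket, and table names and a trivial two-field schema; the real transform would depend on the actual message format:

```python
# Streaming Dataflow job: read JSON from Pub/Sub, transform, write to BigQuery.
# All resource names below are placeholders, not values from the question.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_message(message: bytes) -> dict:
    """Decode a Pub/Sub payload and keep only the fields the table expects."""
    record = json.loads(message.decode("utf-8"))
    return {"event_id": record["event_id"], "payload": record.get("payload", "")}


def run():
    options = PipelineOptions(
        streaming=True,            # Pub/Sub sources require a streaming pipeline
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/my-sub")
            | "ParseJson" >> beam.Map(parse_message)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:my_dataset.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

With this setup, Dataflow's service-managed autoscaling adjusts the worker count to the incoming volume, which is what makes option C hands-off compared to resizing a Dataproc cluster yourself.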
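
For the Stackdriver side of the answer, a hypothetical sketch using the Cloud Monitoring client library to read the job's `system_lag` metric (the same metric filter an alerting policy would use); the project ID and the 10-minute window are placeholders:

```python
# Poll the Dataflow system lag metric for all jobs in a project.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "dataflow.googleapis.com/job/system_lag"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    job = series.resource.labels["job_name"]
    point = series.points[0].value  # points are returned newest-first
    lag_seconds = point.int64_value or point.double_value
    print(f"{job}: system lag {lag_seconds}s")
```

The same `metric.type` filter can back a Cloud Monitoring alerting policy, so lag spikes notify you instead of requiring you to watch the job.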