In BigQuery, I have an export of Search Console data landing in a table daily. The export is done by Google and can happen at any time of day. I then run a query to enrich and format the data into another table for use in Data Studio.
I'd like to know the best way to trigger the query every time this table is updated.
Scheduled queries do work, but there's no fallback if the daily export doesn't land in time. Ideally, when the data lands in the raw dataset, that would trigger the enrichment job to run.
There are a few options for this use case.
Depending on how complex the enrichment process is, a Materialized View could be suitable for you. Unlike a regular view, its results are precomputed and automatically kept up to date as the source table data changes (a standard view is really just a saved query masquerading as a table). This ensures you have up-to-date data available without scheduling anything yourself.
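As a minimal sketch (the project, dataset, table, and column names below are placeholders for your own), the view is created once with a DDL statement, here issued through the `google-cloud-bigquery` client:

```python
from google.cloud import bigquery

# Placeholder project; substitute your own.
client = bigquery.Client(project="my-project")

# Placeholder dataset/table/column names. BigQuery maintains the
# materialized view automatically as the raw export table changes.
ddl = """
CREATE MATERIALIZED VIEW IF NOT EXISTS `my-project.reporting.search_console_enriched` AS
SELECT
  data_date,
  query,
  SUM(clicks) AS clicks,
  SUM(impressions) AS impressions
FROM `my-project.raw.search_console_export`
GROUP BY data_date, query
"""

# DDL runs as an ordinary query job; .result() waits for completion.
client.query(ddl).result()
```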
Materialized Views are not always suitable, though, since BigQuery restricts the SQL they can contain. It's a common use case to need to run a series of transformations on source data to prepare (model) it for use in BI tools; DBT is a popular tool for capturing the dependencies between these transformations and running them in order.
This isn't a complete solution in and of itself, because DBT won't watch and wait for source data to 'land'. However, DBT runs can be triggered via an API. One pattern I've seen is to have the task that updates the source data finish by triggering the DBT run (orchestrators such as Apache Airflow make this sort of thing straightforward), as in the sketch below.
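If you use dbt Cloud, its v2 API exposes a "trigger job run" endpoint. A minimal sketch, where the account ID, job ID, and token are placeholders for your own:

```python
import requests

# Placeholders: your dbt Cloud account id, the job to run, and an API token.
ACCOUNT_ID = 12345
JOB_ID = 67890
TOKEN = "dbt-cloud-api-token"

# Queue a run of the job via dbt Cloud's v2 API.
response = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"cause": "Search Console export landed"},  # 'cause' is a required field
)
response.raise_for_status()
print("Queued run:", response.json()["data"]["id"])
```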
If you query the INFORMATION_SCHEMA, you can find out when a given table was last updated. Such a query should be very cheap to run regularly. You could imagine a small Python script running on a short schedule that checks this and, whenever it detects an update since the last run, executes your enrichment query.
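A minimal sketch of that polling approach, using the `INFORMATION_SCHEMA.PARTITIONS` view (which carries a `LAST_MODIFIED_TIME` column); the project, dataset, and table names are placeholders, and the enrichment statement stands in for your existing query:

```python
import time
from google.cloud import bigquery

# Placeholder names; adjust to your own project, dataset, and table.
PROJECT, DATASET, TABLE = "my-project", "raw", "search_console_export"

client = bigquery.Client(project=PROJECT)

# MAX over the partitions' LAST_MODIFIED_TIME gives the table's most recent write.
CHECK_SQL = f"""
SELECT MAX(last_modified_time) AS last_modified
FROM `{PROJECT}.{DATASET}.INFORMATION_SCHEMA.PARTITIONS`
WHERE table_name = '{TABLE}'
"""

# Placeholder enrichment statement; in practice this is your existing query.
ENRICH_SQL = f"""
CREATE OR REPLACE TABLE `{PROJECT}.reporting.search_console_enriched` AS
SELECT * FROM `{PROJECT}.{DATASET}.{TABLE}`
"""

def last_modified():
    row = list(client.query(CHECK_SQL).result())[0]
    return row.last_modified

seen = last_modified()
while True:
    time.sleep(300)  # poll every five minutes
    latest = last_modified()
    if latest is not None and latest != seen:
        client.query(ENRICH_SQL).result()  # data landed: run the enrichment
        seen = latest
```

In practice you'd run the check from cron or Cloud Scheduler rather than a sleep loop, but the shape is the same: compare the last-modified timestamp against the one you saw previously, and run the enrichment only when it moves.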