Tags: python, google-cloud-dataflow, apache-beam

What windowing constraints are needed when combining two streams in Apache Beam [Dataflow]?


I have an ETL flow where I need to combine two Pub/Sub messages on a key and write the result into BigQuery. One of the message types is the parent; I am working on payment processing, so this is an order or a payment, for example. The other is the child: an update to that payment ("Authorized", "Paid", etc.).

I would like to use Dataflow to combine on the key and write to BigQuery, where these updates are appended to the original transaction as repeated elements. The schema in BigQuery looks something like this:

name | description | type | mode
id | UUID for payment transaction | String | Single
amount | transaction amount | Integer | Single
event | the transaction event (see below) | Record | Repeated

...

and within the event record, the fields look something like:

name | description | type | mode
event_id | UUID for this event | String | Single
transaction_id | UUID tying back to the payment transaction (above) | String | Single
event_type | an enum specifying if it is an Authorization, etc. | Integer | Single

...

In other words, each event-type Pub/Sub message will be matched with the appropriate transaction-type Pub/Sub message.
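
For concreteness, that schema could be written in the dictionary form that beam.io.WriteToBigQuery accepts for its schema parameter (a sketch only; "Single" is rendered as BigQuery's NULLABLE, and just the fields shown above are included):

    # Sketch of the schema above in the dict form accepted by WriteToBigQuery.
    table_schema = {
        'fields': [
            {'name': 'id', 'type': 'STRING', 'mode': 'NULLABLE'},
            {'name': 'amount', 'type': 'INTEGER', 'mode': 'NULLABLE'},
            {'name': 'event', 'type': 'RECORD', 'mode': 'REPEATED', 'fields': [
                {'name': 'event_id', 'type': 'STRING', 'mode': 'NULLABLE'},
                {'name': 'transaction_id', 'type': 'STRING', 'mode': 'NULLABLE'},
                {'name': 'event_type', 'type': 'INTEGER', 'mode': 'NULLABLE'},
            ]},
        ]
    }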

I am planning on using Dataflow's CoGroupByKey. AFAICT, there is no specification of what sort of windowing, if any, CoGroupByKey requires, and in that case I don't understand how it works. I presume something like one of the following options must apply:

  1. CoGroupByKey will leave each element in memory indefinitely until the other element is found. For instance, if there is an id on the transaction of value 1234987, it will remain "in waiting" until a transaction_id of 1234987 is found. Once it is found, the CoGroupByKey is performed, whatever subsequent pipeline actions follow are completed, and the messages with that ID can then be purged from memory.
  2. CoGroupByKey will not work on streaming data unless there is windowing in place. Similar to the above, the element would remain in waiting until the same id and transaction_id are matched. However, it would purge the id or transaction_id once the window (and whatever associated allowed lateness) has expired.
    • This is clearly not needed for non-streaming data, as the CoGroupByKey example is not windowed.
  3. There is some other alternative. Perhaps some method on the PCollection that I am unaware of that allows for some sort of purge.

Am I right? Do I need some sort of limitation? What is that limitation, or what should it be?

I simply need to know how I can create a pipeline combining these two streams in a way that will not crash my system once it is in production. This is difficult to test for if the memory problem will only creep up at massive scale.

(I use the Python SDK, but coded solutions in any language are appreciated; it's easy enough to translate from one to another.)


Solution

  • You are correct, and it is #2: CoGroupByKey will not work on unbounded data unless there is some windowing or triggering in place.

    There are a couple of reasons, one of which you already identified:

    • The window allows Dataflow to clean up state, rather than holding it indefinitely.
    • The window allows output to be produced. Without the windowing, we have to wait forever before outputting a grouping, because more items might come in on that key.

    Quite often, you may want Session windows, because this will allow you to join together two elements where you care only about the difference between their timestamps.
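
    A session-windowed CoGroupByKey in the Python SDK might look roughly like the sketch below; the topic names, the 10-minute gap, and the parse/build helpers are placeholders, not a drop-in implementation:

        import apache_beam as beam
        from apache_beam.transforms import window

        # Sketch only: pipeline_options, the topics, and the helpers
        # (parse_transaction, parse_event, build_row, table_spec, table_schema)
        # are placeholders illustrating the shape of the pipeline.
        with beam.Pipeline(options=pipeline_options) as p:
            transactions = (
                p
                | 'ReadTransactions' >> beam.io.ReadFromPubSub(topic=transaction_topic)
                | 'ParseTransaction' >> beam.Map(parse_transaction)  # -> (id, transaction dict)
                | 'WindowTransactions' >> beam.WindowInto(window.Sessions(gap_size=10 * 60))
            )

            events = (
                p
                | 'ReadEvents' >> beam.io.ReadFromPubSub(topic=event_topic)
                | 'ParseEvent' >> beam.Map(parse_event)  # -> (transaction_id, event dict)
                | 'WindowEvents' >> beam.WindowInto(window.Sessions(gap_size=10 * 60))
            )

            _ = (
                {'transaction': transactions, 'events': events}
                | 'JoinOnKey' >> beam.CoGroupByKey()
                | 'BuildRow' >> beam.Map(build_row)  # fold the matched events into the transaction row
                | 'WriteToBQ' >> beam.io.WriteToBigQuery(table_spec, schema=table_schema)
            )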

    In other cases, you may need to do your join in the global window by merging the PCollections and using a stateful ParDo. For brevity, and to get this answer out, I will not go into the details of that approach here.
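
    A rough sketch of that stateful-ParDo variant in the Python SDK is below; the tagging scheme, the 24-hour expiry timer, and the emit-whenever-both-sides-are-present behaviour are assumptions for illustration, not a definitive implementation:

        import time

        import apache_beam as beam
        from apache_beam.coders import PickleCoder
        from apache_beam.transforms.timeutil import TimeDomain
        from apache_beam.transforms.userstate import BagStateSpec, TimerSpec, on_timer


        class JoinOnTransactionId(beam.DoFn):
            """Joins transactions and events per key in the global window.

            Upstream, both streams are keyed by the transaction id and tagged, e.g.
              transactions | beam.Map(lambda t: (t['id'], ('transaction', t)))
              events       | beam.Map(lambda e: (e['transaction_id'], ('event', e)))
            and then flattened into one PCollection before this DoFn.
            """

            TRANSACTION = BagStateSpec('transaction', PickleCoder())
            EVENTS = BagStateSpec('events', PickleCoder())
            EXPIRY = TimerSpec('expiry', TimeDomain.REAL_TIME)

            def process(self,
                        element,
                        transaction_state=beam.DoFn.StateParam(TRANSACTION),
                        events_state=beam.DoFn.StateParam(EVENTS),
                        expiry=beam.DoFn.TimerParam(EXPIRY)):
                key, (tag, value) = element
                if tag == 'transaction':
                    transaction_state.add(value)
                else:
                    events_state.add(value)

                # Bound how long state is held: drop the key a day after its last element.
                expiry.set(time.time() + 24 * 60 * 60)

                transactions = list(transaction_state.read())
                events = list(events_state.read())
                # Emit a (re-)joined row whenever both sides are present; a real
                # pipeline might instead wait for a terminal event before emitting.
                if transactions and events:
                    yield key, {'transaction': transactions[0], 'events': events}

            @on_timer(EXPIRY)
            def expire(self,
                       transaction_state=beam.DoFn.StateParam(TRANSACTION),
                       events_state=beam.DoFn.StateParam(EVENTS)):
                # Clean up state so unmatched keys do not accumulate forever.
                transaction_state.clear()
                events_state.clear()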