Tags: azure, azure-functions, azure-cosmosdb, azure-table-storage, azure-stream-analytics

Event Processing (17 Million Events/day) using Azure Stream Analytics HoppingWindow Function - 24H Window, 1 Minute Hops


We have a business problem that needs solving and would like some guidance from the community on the combination of Azure products we could use to solve it.

The Problem:

I work for a business that produces online games. We would like to display the number of users playing a specific game over a 24-hour window, with the value updating every minute. Essentially, this is the output that the HoppingWindow(Duration(hour, 24), Hop(minute, 1)) function in Azure Stream Analytics provides.

Currently, the volume is around 17 million events a day, and the Stream Analytics job seems to be struggling with the load. We have tried the following so far:

Tests Done:

17 Million Events -> Event Hub (32 Partitions) -> ASA (42 Streaming Units) -> Table Storage

Failed: the Stream Analytics job never outputs on large timeframes (we stopped the test at 8 hours)

17 Million Events -> Event Hub (32 Partitions) -> FUNC -> Table Storage (With Proper Partition/Row Key)

Failed: Table Storage does not support a distinct count

17 Million Events -> Event Hub -> FUNC -> Cosmos DB

Tentative: Cosmos DB doesn't support distinct count, not natively anyway. There seem to be some workarounds floating around, but we're not sure that's the way to go.

Are there any known designs geared towards processing 17 million events a minute, every minute?

Edit: as per the comments, here's the code.

SELECT
    GameId,
    COUNT(DISTINCT UserId) AS ActiveCount,
    DateAdd(hour, -24, System.Timestamp()) AS StartWindowUtc,
    System.Timestamp() AS EndWindowUtc
INTO
    [out]
FROM
    [in] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY
    HoppingWindow(Duration(hour, 24), Hop(minute, 1)),
    GameId

The expected output is shown below. Note that in reality there will be 1,440 records per GameId, one for each minute:

[Screenshot of the expected output omitted]
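
For illustration only (the values below are made up), the output rows would have this shape, with EndWindowUtc advancing one minute per row and StartWindowUtc trailing it by 24 hours:

    GameId  | ActiveCount | StartWindowUtc       | EndWindowUtc
    game-01 | 152301      | 2019-01-01T00:00:00Z | 2019-01-02T00:00:00Z
    game-01 | 152377      | 2019-01-01T00:01:00Z | 2019-01-02T00:01:00Z
    game-01 | 152410      | 2019-01-01T00:02:00Z | 2019-01-02T00:02:00Z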

To be clear, the problem is generating the expected output on the larger timeframes: at 24 hours, the job either produces no output at all or takes 8+ hours to do so. The smaller window sizes work, for example changing the above code to use HoppingWindow(Duration(minute, 10), Hop(minute, 5)), as shown below.
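
For reference, a sketch of that smaller-window variant, with the DateAdd offset adjusted to match the 10-minute window (everything else is unchanged):

SELECT
    GameId,
    COUNT(DISTINCT UserId) AS ActiveCount,
    DateAdd(minute, -10, System.Timestamp()) AS StartWindowUtc,
    System.Timestamp() AS EndWindowUtc
INTO
    [out]
FROM
    [in] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY
    HoppingWindow(Duration(minute, 10), Hop(minute, 5)),
    GameId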

The tests that followed assumed that ASA was not the answer to the problem, so we tried different approaches, which seems to have caused a bit of confusion; sorry about that.


Solution

  • At the moment, ASA scales up vertically on a single node from 1 to 6 SU, then horizontally with multiple 6 SU nodes above that threshold.

    Obviously, to be able to scale horizontally, a job needs to be parallelizable, which means the stream will be distributed across nodes according to the partition scheme.

    Now if the input stream, the query, and the output destination are aligned in partitions, then the job is called embarrassingly parallel, and that's where you'll be able to reach maximum scale. Each pipeline, from entry to output, will be independent, and each node will only have to maintain in memory the data pertaining to its own state (those 24 hours of data). That's what we're looking for here: minimizing the local data store at each node (see the sketch at the end of this answer).

    With EH supporting 32 partitions on most SKUs, the maximum scale publicly available on ASA is 192 SU (6 × 32). If partitions are balanced, that means each node will have the least amount of data to maintain in its own state store: with 17 million events a day spread evenly across 32 partitions, each node only needs to hold about 531,000 events (17M ÷ 32) for its 24-hour window.

    Then we need to minimize the payload itself (the size of an individual message), but from the look of the query that's already the case.

    Could you try scaling up to 192 SU and see what happens?

    We are also working on multiple other features that could help in this scenario. Let me know if that could be of interest to you.
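
    For illustration, here's a sketch of what a partition-aligned version of the original query could look like. It assumes events are published to Event Hubs with GameId as the partition key, so that all events for a given game land in the same partition; otherwise the per-partition distinct counts would be partial and could not simply be merged. The exact PARTITION BY placement may need adjusting (on compatibility level 1.2 and above the clause is implicit), and the output has to be configured with a matching partition key:

        SELECT
            GameId,
            COUNT(DISTINCT UserId) AS ActiveCount,
            DateAdd(hour, -24, System.Timestamp()) AS StartWindowUtc,
            System.Timestamp() AS EndWindowUtc
        INTO
            [out]
        FROM
            [in] PARTITION BY PartitionId
            TIMESTAMP BY EventEnqueuedUtcTime
        GROUP BY
            PartitionId,
            HoppingWindow(Duration(hour, 24), Hop(minute, 1)),
            GameId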