Tags: apache-kafka, rabbitmq, webhooks, event-dispatching

Reliable Webhook dispatching system


I am having a hard time figuring out a reliable and scalable solution for a webhook dispatch system.

The current system uses RabbitMQ with a queue for webhooks (let's call it events), which are consumed and dispatched. This system worked for some time, but now there are a few problems:

  • If a single system user generates too many events, they fill up the queue, and other users stop receiving webhooks for a long time
  • If I split the events across multiple queues (by URL hash), the first problem becomes less likely, but it still happens from time to time when a very busy user hashes to the same queue as others
  • If I try to give each URL its own queue, the challenge is dynamically creating and assigning consumers to those queues. As far as the RabbitMQ documentation goes, the API is very limited in filtering for non-empty queues or for queues that have no consumers assigned.
  • As for Kafka, from everything I have read about it, the situation would be the same within the scope of a single partition.

So, the question is: is there a better way/system for this purpose? Maybe I am missing a very simple solution that would keep one user from interfering with another?

Thanks in advance!


Solution

  • So, I am not sure if this is the correct way to solve this problem, but this is what I came up with.

    Prerequisite: RabbitMQ with the message-deduplication plugin

    So my solution involves:

    • g:events queue - let's call it the parent queue. It contains the names of all child queues that need to be processed. It could probably be replaced by some other mechanism (a Redis sorted set, for example), but then you would have to implement the ack logic yourself.
    • g:events:<url> - these are the child queues. Each one contains only the events that need to be sent out to that URL.

    When posting a webhook payload to RabbitMQ, you post the actual data to the child queue, and then additionally post the name of the child queue to the parent queue. The deduplication plugin won't allow the same child queue name to be posted twice, which means only a single consumer can claim that child queue for processing.
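    To make this concrete, here is a minimal publisher sketch in Python with pika. It is only an illustration of the idea: the x-message-deduplication queue argument and the x-deduplication-header message header are taken from the message-deduplication plugin's documentation, but check them against the plugin version you run; the queue names follow the g:events convention from this answer.

    ```python
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Parent queue with per-message deduplication enabled (requires the
    # message-deduplication plugin; the argument name is assumed from its docs).
    channel.queue_declare(
        queue="g:events",
        durable=True,
        arguments={"x-message-deduplication": True},
    )

    def publish_webhook(url: str, payload: bytes) -> None:
        child_queue = f"g:events:{url}"

        # The child queue holds the actual webhook payloads for one URL.
        channel.queue_declare(queue=child_queue, durable=True)
        channel.basic_publish(
            exchange="",
            routing_key=child_queue,
            body=payload,
            properties=pika.BasicProperties(delivery_mode=2),  # persistent
        )

        # Announce the child queue on the parent queue. The deduplication
        # header keeps the same child queue name from sitting in the parent
        # queue twice, so only one consumer can claim it at a time.
        channel.basic_publish(
            exchange="",
            routing_key="g:events",
            body=child_queue.encode(),
            properties=pika.BasicProperties(
                delivery_mode=2,
                headers={"x-deduplication-header": child_queue},
            ),
        )
    ```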

    All your consumers consume the parent queue; after receiving a message, each one starts consuming the child queue named in the message. Once the child queue is empty, you acknowledge the parent message and move on.
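    A consumer sketch under the same assumptions: each worker claims one announcement from the parent queue, drains the named child queue with basic_get, and only acks the parent message once the child queue is empty. The requests-based deliver helper is just a stand-in for whatever HTTP delivery code you already have.

    ```python
    import pika
    import requests  # illustrative choice of HTTP client

    def deliver(child_queue: str, payload: bytes) -> None:
        # Child queue names follow the "g:events:<url>" convention above.
        url = child_queue[len("g:events:"):]
        requests.post(url, data=payload, timeout=10)

    def drain_child_queue(channel, child_queue: str) -> None:
        # Pull one message at a time until the child queue is empty.
        while True:
            method, _properties, body = channel.basic_get(queue=child_queue)
            if method is None:
                return  # empty: every pending event for this URL is sent
            deliver(child_queue, body)
            channel.basic_ack(method.delivery_tag)

    def on_parent_message(channel, method, _properties, body):
        child_queue = body.decode()
        drain_child_queue(channel, child_queue)
        # Ack the parent message only after the child queue is empty, so the
        # announcement is redelivered if this worker dies mid-drain.
        channel.basic_ack(method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1)  # one child queue per worker at a time
    channel.basic_consume(queue="g:events", on_message_callback=on_parent_message)
    channel.start_consuming()
    ```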

    This method allows very fine-grained control over which child queues get processed. If some child queue is taking too long, just ack the parent message and republish the same data to the end of the parent queue.
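    One possible shape for that fairness rule, assuming a fixed time budget per child queue (the 5-second figure is made up): this variant of the callback from the previous sketch stops draining once the budget runs out, acks the parent message, and re-announces the same child queue at the back of the parent queue. It reuses the deliver helper from above.

    ```python
    import time

    import pika

    TIME_BUDGET_SECONDS = 5.0  # illustrative per-URL budget, tune as needed

    def drain_with_budget(channel, child_queue: str) -> bool:
        """Drain for at most TIME_BUDGET_SECONDS; True means the queue emptied."""
        deadline = time.monotonic() + TIME_BUDGET_SECONDS
        while time.monotonic() < deadline:
            method, _properties, body = channel.basic_get(queue=child_queue)
            if method is None:
                return True
            deliver(child_queue, body)  # delivery helper from the previous sketch
            channel.basic_ack(method.delivery_tag)
        return False  # budget spent, messages still remain

    def on_parent_message(channel, method, _properties, body):
        child_queue = body.decode()
        emptied = drain_with_budget(channel, child_queue)
        # Ack first, then re-announce, matching the order described above;
        # the dedup header still prevents duplicate announcements.
        channel.basic_ack(method.delivery_tag)
        if not emptied:
            channel.basic_publish(
                exchange="",
                routing_key="g:events",
                body=body,
                properties=pika.BasicProperties(
                    delivery_mode=2,
                    headers={"x-deduplication-header": child_queue},
                ),
            )
    ```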

    I understand that this is probably not the most efficient approach (there's also a bit of overhead from constantly posting to the parent queue), but it is what it is.