I'm developing a system based on Azure Service Bus: an API does quick fire-and-forget sends, and background services asynchronously process a large volume of messages from a topic. In the context of this question, the topic has a single subscription, which is why it could just as well have been a queue. For other reasons, I would like to keep it as a topic.
I recently migrated the code from a .NET Framework app using the WindowsAzure.ServiceBus package to a .NET Core app using the Microsoft.Azure.ServiceBus package. To process a large number of messages, I'm using the MessageReceiver class like this:
var connString = "...";
var subscriptionPath = EntityNameHelper.FormatSubscriptionPath("topic", "subscription");
var messageReceiver = new MessageReceiver(connString, subscriptionPath);
while (...)
{
var messages = await messageReceiver.ReceiveAsync(10, TimeSpan.FromSeconds(5));
...
}
For simplicity, I have hidden a range of details, such as the fact that my app starts 5 threads and processes messages in each thread using the same messageReceiver instance.
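To make that more concrete, here is a minimal sketch of that kind of layout, not the actual production code: the topic and subscription names, the infinite loop condition, and the ProcessAsync handler are all placeholders.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class ReceiveLoopSketch
{
    // Placeholder for the real message handling logic.
    static Task ProcessAsync(Message message) => Task.CompletedTask;

    static async Task RunAsync(string connString)
    {
        var subscriptionPath = EntityNameHelper.FormatSubscriptionPath("topic", "subscription");
        var messageReceiver = new MessageReceiver(connString, subscriptionPath);

        // Five concurrent receive loops sharing the same MessageReceiver instance.
        var loops = Enumerable.Range(0, 5).Select(_ => Task.Run(async () =>
        {
            while (true)
            {
                // Ask for up to 10 messages, waiting at most 5 seconds.
                var messages = await messageReceiver.ReceiveAsync(10, TimeSpan.FromSeconds(5));
                if (messages == null) continue; // nothing arrived within the timeout

                foreach (var message in messages)
                {
                    await ProcessAsync(message);
                    await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
                }
            }
        })).ToArray();

        await Task.WhenAll(loops);
    }
}

All five loops call ReceiveAsync on the same receiver, mirroring the setup described above.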
I normally have more than one instance of this app running to spread the work across both threads and processes, and I believe we have finally arrived at the core of my question. After migrating to .NET Core and the new NuGet package, I noticed that only one of the apps processes messages at a time. When I open two console windows and launch a process in each, the app in window 1 starts processing while the app in window 2 doesn't process anything. After a number of seconds, the app in window 1 stops processing and the app in window 2 starts processing. After a while, it switches back. There's no real pattern to the switching, but all of my messages are successfully processed.
Is there some kind of limitation in MessageReceiver that caps the total number of threads processing messages from the same subscription, or something like that?
I'm not aware of any limitation in MessageReceiver on the number of threads. The new library, though, was optimized to take advantage of concurrency without extra threads (asynchronous code), so technically you could run with a single thread and still have multiple concurrent receiving tasks. Another approach is to use the message handler provided by QueueClient and SubscriptionClient, which makes it easy to specify the concurrency for processing multiple messages, but it delivers one message per concurrent callback (no batching).
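For illustration, here is a minimal sketch of the handler-based approach, assuming the same topic and subscription names as in the question; ProcessAsync and the exception logging are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class HandlerSketch
{
    // Placeholder for the real message handling logic.
    static Task ProcessAsync(Message message) => Task.CompletedTask;

    static void Run(string connString)
    {
        var subscriptionClient = new SubscriptionClient(connString, "topic", "subscription");

        var options = new MessageHandlerOptions(args =>
        {
            Console.WriteLine(args.Exception); // placeholder error handling
            return Task.CompletedTask;
        })
        {
            MaxConcurrentCalls = 5, // one message per concurrent callback
            AutoComplete = false    // complete explicitly after processing
        };

        subscriptionClient.RegisterMessageHandler(async (message, cancellationToken) =>
        {
            await ProcessAsync(message);
            await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        }, options);
    }
}

MaxConcurrentCalls controls how many callbacks run in parallel, and with AutoComplete set to false each message is completed explicitly after processing.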
The broker hands as many messages as it can in a single call to the first competing consumer. If there aren't enough messages, all of them go to a single consumer (or the first few); there is no round-robin or fair distribution. So the behavior you're seeing is expected.
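As a hedged illustration only (the batch size and names below are assumptions, not a library recommendation): because a single ReceiveAsync call hands at most maxMessageCount messages to one consumer, requesting smaller batches with prefetch left at zero limits how much of a small backlog any one competing consumer can claim per call.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class SmallBatchSketch
{
    static async Task RunAsync(string connString)
    {
        var subscriptionPath = EntityNameHelper.FormatSubscriptionPath("topic", "subscription");

        // prefetchCount: 0 (the default) means this receiver only takes what it asks for.
        var messageReceiver = new MessageReceiver(connString, subscriptionPath, prefetchCount: 0);

        // Requesting a small batch limits how many of the available messages
        // this single call can claim for this consumer.
        var messages = await messageReceiver.ReceiveAsync(2, TimeSpan.FromSeconds(5));
    }
}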