Tags: c#, asp.net, rabbitmq, microservices, message-queue

How does the Pub/Sub pattern decouple microservices when there are IntegrationEvents?


I have been looking for an asynchronous communication pattern between microservices, one that ensures the microservices stay decoupled. Then I came across the eShopOnContainers project from Microsoft, which explains how to implement a Pub/Sub pattern. It reads:

The integration events can be defined at the application level of each microservice, so they are decoupled from other microservices, ... What is not recommended is sharing a common integration events library across multiple microservices; ... [REF]

That is a bit confusing when you look at how the integration events are implemented and how services subscribe to or publish them. For instance, the ProductPriceChangedIntegrationEvent integration event is implemented as follows in the Catalog API:

namespace Microsoft.eShopOnContainers.Services.Catalog.API.IntegrationEvents.Events
{
    public class ProductPriceChangedIntegrationEvent : IntegrationEvent
    {        
        public int ProductId { get; private set; }

        public decimal NewPrice { get; private set; }

        public decimal OldPrice { get; private set; }

        public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice, decimal oldPrice)
        {
            ProductId = productId;
            NewPrice = newPrice;
            OldPrice = oldPrice;
        }
    }
}

If the product price is changed, the Catalog microservice publishes the ProductPriceChangedIntegrationEvent event as follows:

var priceChangedEvent = new ProductPriceChangedIntegrationEvent(catalogItem.Id, productToUpdate.Price, oldPrice);
await _catalogIntegrationEventService.SaveEventAndCatalogContextChangesAsync(priceChangedEvent);
await _catalogIntegrationEventService.PublishThroughEventBusAsync(priceChangedEvent);

[REF]

It became more interesting when I checked how other microservices subscribe to this event while remaining "decoupled". It turns out that a service that subscribes to this event implements an exact copy of the integration event and subscribes to that!

For instance, the Basket microservice has an implementation of ProductPriceChangedIntegrationEvent as follows:

namespace Microsoft.eShopOnContainers.Services.Basket.API.IntegrationEvents.Events
{
    public class ProductPriceChangedIntegrationEvent : IntegrationEvent
    {        
        public int ProductId { get; private set; }

        public decimal NewPrice { get; private set; }

        public decimal OldPrice { get; private set; }

        public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice, decimal oldPrice)
        {
            ProductId = productId;
            NewPrice = newPrice;
            OldPrice = oldPrice;
        }
    }
}

[REF]

and it subscribes to the ProductPriceChangedIntegrationEvent event as follows:

private void ConfigureEventBus(IApplicationBuilder app)
{
    var eventBus = app.ApplicationServices.GetRequiredService<IEventBus>();

    eventBus.Subscribe<ProductPriceChangedIntegrationEvent, ProductPriceChangedIntegrationEventHandler>();
    eventBus.Subscribe<OrderStartedIntegrationEvent, OrderStartedIntegrationEventHandler>();
}

[REF]

It is interesting to note that the ProductPriceChangedIntegrationEvent in the Subscribe call refers to the implementation in Microsoft.eShopOnContainers.Services.Basket.API.IntegrationEvents.Events, not the one in Microsoft.eShopOnContainers.Services.Catalog.API.IntegrationEvents.Events.

Questions:

Does it mean every microservice has to have a "cloned" implementation of every integration event it wants to subscribe to?

  • if so, when anything changes on the publisher's side, do all the subscribing microservices need to update their integration event accordingly?
  • how is that "decoupled" when they are so dependent on each other's implementation? (ignoring backward-compatible changes for the sake of clarity)

Solution

  • When I first started working with microservices and, in fact, when I first started going through the eShopOnContainers sample project, I had similar concerns. While it's a great sample, the challenge is that it's essentially one big project built on (mostly) one technology platform, so it already has that smell of tight coupling to it even before you start digging in.

    Imagine, in the case you mention, that the Catalog API was written in - say - Python (or anything non-C#) and located in a different repository. There may (or may not) still be a ProductPriceChangedIntegrationEvent class - but the microservice could still easily generate and publish the same integration event to the message bus. A subscribing application would not be able to tell how the event was generated or what technology was used to do so.

    That's the loose-coupling part in action. Even though the subscribing service depends on a consistent shape/payload in the integration message, it does not know or care about how that message came to be.
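
    To make that concrete, here is a minimal, self-contained sketch (the Catalog.Sketch / Basket.Sketch namespaces are invented, and System.Text.Json stands in for whatever serializer the real event bus uses) of what actually travels over the bus: plain text. The publisher serializes its own event class, the subscriber deserializes the same JSON into its own copy, and the only thing the two sides share is the payload shape and the event name used for routing.

using System;
using System.Text.Json;

namespace Catalog.Sketch
{
    // Publisher's local definition of the event.
    public record ProductPriceChangedIntegrationEvent(int ProductId, decimal NewPrice, decimal OldPrice);
}

namespace Basket.Sketch
{
    // Subscriber's local definition - same shape, different assembly and namespace.
    public record ProductPriceChangedIntegrationEvent(int ProductId, decimal NewPrice, decimal OldPrice);
}

public static class WireFormatDemo
{
    public static void Main()
    {
        var published = new Catalog.Sketch.ProductPriceChangedIntegrationEvent(1, 9.99m, 12.99m);

        // The broker only ever sees this string (plus a routing key such as the event type name).
        string payload = JsonSerializer.Serialize(published);
        // -> {"ProductId":1,"NewPrice":9.99,"OldPrice":12.99}

        // The subscriber deserializes into *its* class. It never references the Catalog assembly;
        // only the JSON shape - the contract - has to line up.
        var received = JsonSerializer.Deserialize<Basket.Sketch.ProductPriceChangedIntegrationEvent>(payload);

        Console.WriteLine($"Product {received!.ProductId}: {received.OldPrice} -> {received.NewPrice}");
    }
}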

    This flexibility is a big part of the benefit to microservices. Want to change part of or the whole technology stack for the Catalog API? Go for it. So long as it honors the existing contracts, the system will never even know a change was made.

    So the way any microservice communicates with the outside world represents its contract. In our application, this encompasses not only the information contained in the Events it publishes, but also the API endpoints it exposes. The good news - at least in my experience thus far - is that because the domain of each microservice is both small and well-defined, it's been quite easy NOT to have to come back and change contracts after launching. The few times we have, the changes were additions of fields/endpoints and were 100% backwards compatible.

    Does it mean every microservice has to have a "cloned" implementation of every integration event it wants to subscribe to?

    Each service only needs to be able to successfully extract, from the contracted message/payload, the specific information it needs. This may be only a small part of the total message payload, or all of it.
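
    For example (a hypothetical sketch, not code from the repo): if a subscriber only cares about the product id and the new price, it can declare a slimmer class and deserialize the same payload into it - typical serializers such as System.Text.Json or Json.NET simply ignore the properties it does not declare.

using System;
using System.Text.Json;

// Hypothetical subscriber-side view of the event: only the fields this service uses.
public record ProductPriceChanged(int ProductId, decimal NewPrice);

public static class PartialPayloadDemo
{
    public static void Main()
    {
        // The full payload as published (the extra fields here are illustrative).
        const string payload =
            @"{""ProductId"":1,""NewPrice"":9.99,""OldPrice"":12.99,""CreationDate"":""2024-01-01T00:00:00Z""}";

        // OldPrice and CreationDate are silently dropped; the subscriber extracts
        // only the part of the contracted message it actually needs.
        var view = JsonSerializer.Deserialize<ProductPriceChanged>(payload);

        Console.WriteLine($"Product {view!.ProductId} now costs {view.NewPrice}");
    }
}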

    In fact, this is the same situation as with any external API an application already uses. You count on the API provider to honor the contract(s) they've published and to warn you well ahead of time of any breaking changes. Each microservice must be treated the same way.

    There will come a time when that's not the case and we have to break a contract, which is why we've built in API Versioning using custom Headers. If/when we do break a contract, we'll be able to gracefully deprecate the old version as we transition to the new one.
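
    As a rough illustration (the X-Api-Version header name and the endpoint are invented for this sketch, not our actual scheme), header-based versioning in ASP.NET Core can be as simple as branching on a custom request header, so existing callers keep getting the old contract while new callers opt in to the new one:

// Hypothetical minimal ASP.NET Core program (Program.cs) showing contract versioning
// driven by a custom request header.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/api/catalog/items/{id:int}/price", (HttpContext http, int id) =>
{
    // Callers that do not send the header keep the original (v1) response shape.
    var version = http.Request.Headers.TryGetValue("X-Api-Version", out var v) ? v.ToString() : "1";

    return version == "2"
        ? Results.Ok(new { productId = id, price = 9.99m, currency = "USD" }) // new contract
        : Results.Ok(new { productId = id, price = 9.99m });                  // legacy contract
});

app.Run();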

    Knowing that eventually microservices may change contracts is also why we're diligent about using the Facade Pattern whenever we consume services. We also ensure that any time we convert from a message or API response payload (we use JSON, but there are others) to a concrete class, we can tolerate EXTRA fields - I believe this is in the JSON style guide, but it's good practice regardless of model abstraction technology.
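
    As a sketch of what that looks like in practice (ICatalogFacade, the endpoint URL and the DTO are hypothetical names, not eShopOnContainers code): the rest of the consuming service depends only on a small local interface, and the one class behind it owns both the HTTP details and the tolerant deserialization.

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical facade over the Catalog API: the rest of this service never touches
// Catalog URLs or payloads directly, so a contract change is absorbed in one place.
public interface ICatalogFacade
{
    Task<decimal?> GetCurrentPriceAsync(int productId);
}

public class HttpCatalogFacade : ICatalogFacade
{
    private readonly HttpClient _http;

    // Assumes the HttpClient is configured with the Catalog service's base address.
    public HttpCatalogFacade(HttpClient http) => _http = http;

    public async Task<decimal?> GetCurrentPriceAsync(int productId)
    {
        // The DTO declares only what we use; extra fields in the JSON response are
        // ignored by the deserializer, so additive upstream changes never break us.
        var item = await _http.GetFromJsonAsync<CatalogItemDto>($"api/v1/catalog/items/{productId}");
        return item?.Price;
    }

    public sealed record CatalogItemDto(int Id, decimal Price);
}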

    It does seem scary and fragile at first, but if everyone follows the rules regarding honoring contracts, planning for eventual change, and essentially being conformant to the overall application guidelines, it is quite robust. I highly recommend starting with an explicit Style Guide before you start building services. Things like:

    • API Endpoint Naming and Versioning
    • Message Routing / Structure
    • Date/Time Formats and Localization

    Those can all be handled independently, but as the number of services grows, the people working on the consuming applications will be happier (and more productive) without having to jump through different hoops for every service.
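
    For instance (a hypothetical sketch, just to show the flavour of such a guide), even something as small as one shared set of serializer settings and a routing-key convention removes a whole class of "every service does it slightly differently" friction:

using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical convention class a style guide might mandate: every service publishes
// and reads JSON with the same settings, so casing and null handling never vary.
public static class WireConventions
{
    public static readonly JsonSerializerOptions Json = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,          // "productId", not "ProductId"
        PropertyNameCaseInsensitive = true,                         // readers stay lenient
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
    };

    // e.g. routing keys / topic names: lower-case, dot-separated "<service>.<event>".
    public static string RoutingKeyFor(string service, string eventName) =>
        $"{service.ToLowerInvariant()}.{eventName.ToLowerInvariant()}";
}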