Tags: design-patterns, microservices, domain-driven-design

Decomposed versus “monolithic” microservices


Microservices are normally designed according to domains. These domains can become quite large, even for smaller companies. But what about very big companies with many smaller and bigger differences between their service offerings, e.g., due to legal requirements, local characteristics, size, etc.?

I see two alternatives:

  • decompose the domain into many services, with more general services complemented by more specialized ones, or
  • build one big service per domain that covers both the general and the specialized requirements.

My questions are:

  • If the first alternative is preferable:
    • in an event-based environment (e.g., Kafka), should these services integrate with each other asynchronously (Kafka) or synchronously (e.g., REST) for queries, commands, and updates?
    • with a lot of subdomains and sub-subdomains, a lot of process boundaries have to be crossed. Wouldn't this create excessive load or performance overhead?
    • it seems difficult to decide which logic belongs to the more general services and which to the more specialized ones. Which service owns which data? Wouldn't this require a lot of coordination? Are there best practices available?
  • If the second alternative is preferable: this sounds like a monolith dressed in the robes of a microservice. But what counts as big? And despite its size, wouldn't a big service be much easier to implement across teams than decomposed services (e.g., teams implement the general and the special requirements)?
  • Are there additional / better alternatives available?
  • It would be great if you could refer to a blog post or some literature with details regarding the questions above.

Solution

  • It depends™.

    To give an example of a monolith: Facebook uses a monolith, and if I recall correctly they had to build a deployment system that used torrents to decentralize the download, as the artifact was over 1 GB. That is tens (or 100+?) of teams working on the same application. A common reason to use MS is to give teams full control of their apps and cut the amount of cross-team communication required. This is a fallacy, as a well-architected solution (whether monolith, MS, or big services) will share the same characteristics: failure partitioning/isolation, domain boundaries, low cognitive load, etc. I think it's important to mention this, as many people struggle to build well-architected monoliths, then try the same with MS and end up with a distributed big ball of mud.

    This is an opinion, but a well-designed monolith is (usually) better than a microservice architecture (fewer moving parts, lower latency, considerably easier to refactor, etc.). It's easier to start with a well-designed monolith and move to microservices than the other way around, mainly because there's a big chance one is going to mess up the initial microservice boundaries. Mary Poppendieck suggests the same in some of her presentations (maybe in The Future of Software Engineering?).

    It's important to say that even if one builds a monolith, it doesn't mean that everything should be synchronous; it's fine to use queues/topics or other integration approaches to communicate between the domains inside the monolith. Most MS platforms I've seen are actually distributed monoliths: all services have to be deployed at the same time because they are not backwards/forwards compatible. This is the worst of both worlds: a more complex architecture for little benefit.
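
    As a minimal sketch of that kind of in-monolith integration (all names here are hypothetical illustrations, such as InProcessBus, the "orders" topic, and the billing handler; none come from the question), two domain modules can exchange events through an in-process bus without a network hop:

    ```java
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    // A tiny in-process "topic" so domain modules inside one monolith can
    // talk asynchronously without a network hop between them.
    final class InProcessBus {
        private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        void subscribe(String topic, Consumer<Object> handler) {
            subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        void publish(String topic, Object event) {
            // Handlers run on a worker pool, so a slow subscriber
            // (e.g. a billing module) never blocks the publisher.
            for (Consumer<Object> handler : subscribers.getOrDefault(topic, List.of())) {
                workers.submit(() -> handler.accept(event));
            }
        }

        void shutdown() {
            workers.shutdown();
        }
    }

    public class MonolithDemo {
        public static void main(String[] args) {
            InProcessBus bus = new InProcessBus();
            // The billing domain reacts to order events without the order
            // domain knowing it exists. Same boundary discipline as
            // microservices, but inside one deployable.
            bus.subscribe("orders", event -> System.out.println("billing handled " + event));
            bus.publish("orders", "ORDER_PLACED:42");
            bus.shutdown();
        }
    }
    ```

    The domain boundaries stay as strict as in a MS setup; only the transport is cheaper.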

    I think your point about sync vs. async is the key, regardless of a monolith or MS architecture. Everything that is not part of the critical path should be done asynchronously, to partition errors and reduce latency.
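
    As a rough illustration of keeping work off the critical path (OrderService, persistOrder, and the order-events topic are hypothetical stand-ins, not taken from the question), a Kafka producer's send() is already asynchronous: the order is persisted synchronously, while the follow-up event is published in the background, so a broker outage costs only the notification, not the request:

    ```java
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderService {
        private final KafkaProducer<String, String> producer;

        public OrderService() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            this.producer = new KafkaProducer<>(props);
        }

        public void placeOrder(String orderId) {
            // Critical path: the order must be persisted before we return.
            persistOrder(orderId);

            // Off the critical path: producer.send() is asynchronous, so a
            // slow or unavailable broker delays the notification, not the caller.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("order-events", orderId, "ORDER_PLACED");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Failure here is partitioned away from the order itself;
                    // log it (or retry / dead-letter) instead of failing the request.
                    System.err.println("Could not publish ORDER_PLACED: " + exception.getMessage());
                }
            });
        }

        private void persistOrder(String orderId) {
            // Placeholder for the synchronous write (database, etc.).
        }
    }
    ```

    The same split applies inside a monolith; only the transport changes.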

    A few more pointers: