Tags: c#, domain-driven-design, cqrs, clean-architecture

Communication between the writing model and the reading model in CQRS + DDD


I recently learned DDD and Clean Architecture with CQRS, and I'm working on my first project using them. A few things remain unclear to me (and as much as I've searched, I can't find clear explanations for them), so I have some questions I'd appreciate someone clarifying.

From my project I initially came to the conclusion that I need to split the read and write models, because in most cases they differ significantly.

In addition, I decided to split the database into a write DB (SQL Server) and a read DB (MongoDB), both because of the denormalization and to reduce query complexity.

So far, I have seen two options for updating the read model whenever the write model changes:

  1. Event sourcing.
  2. Domain events.

Now, the first option is quite complex, and since this project already involves enough things I'm doing for the first time, I decided to skip it for now.

The second option seems pretty good to me: just publish a domain event from the aggregate (in the same transaction, of course) and handle it in the read model.

Now, the first question: is it really right to do this? Until now, I understood that domain events belong only to the domain (and only between aggregates in the same bounded context), yet here I'm supposed to handle one in the read model.

Second question: if the answer to the first is that it's fine, does that mean the read model must know about the write domain (in order to reference the domain event types in its handlers)? Is that true, or is there a way to break this coupling?

Third question: I also need denormalization and have a lot of complex logic on the query side, and I'm really struggling to structure that whole side properly. So in the meantime (from what I've read and understood), my plan is to create a project/package, let's call it ReadModel, containing:

  1. A folder with classes whose names end with "ReadModel", containing: data corresponding to the denormalization result; functions responsible for the denormalization process; and, if there is query logic to apply after denormalization, functions that handle that logic.

  2. A folder with handler classes responsible for handling the domain events; they receive the domain events and initiate denormalization into the read model objects via those objects' functions.

  3. After all that (and assuming everything is fine so far), here I'm debating, because as I understand it there should be support for many read models.

So I thought about sending another event, handled by the query side of CQRS, which has request and response DTO classes and a handler that receives the read model's event and inserts into its specific DB (MongoDB). Should it insert the read model objects or the response objects?

Or should the query side of CQRS hold a reference to the ReadModel project, so that the read model writes its data to the DB, and the query side simply holds an object from the read model's DB and maps it to the query response, meaning there would be a single query side?

I would really appreciate an explanation, because I'm really confused by all these definitions.


Solution

  • I tried to read a lot of articles on the subject, but I didn't see this discussed specifically. I would like a clear explanation of the matter.

    Not your fault, the literature sucks[tm].

    For copying information from the storage appliance that supports "writes" to a storage appliance that supports "reads", the usual approach is a process that (a) polls to see whether there have been any new writes and (b) stores a bit of metadata to keep track of how far it has gotten in the copying process.

    For example, we might include in our write storage model a sequential log of the writes, and the copying process would query that log to get a list of changes, fetch the data associated with each change, and update its own metadata to track how far into that list of changes it had gotten.

    In effect, the copy process runs like a batch job, processing a chunk of information.
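    The batch-style copy process above can be sketched roughly as follows. This is a minimal illustration, not a real API: `write_log`, `checkpoint`, and `read_model` are all hypothetical names, with in-memory dicts standing in for SQL Server and MongoDB.

    ```python
    # A sequential log of writes, as it might exist in the write store.
    write_log = [
        {"seq": 1, "id": "order-1", "total": 10},
        {"seq": 2, "id": "order-2", "total": 25},
        {"seq": 3, "id": "order-1", "total": 15},
    ]

    checkpoint = {"last_seq": 0}   # metadata: how far the copier has gotten
    read_model = {}                # denormalized view, keyed by aggregate id

    def run_copy_batch(batch_size=2):
        """Poll the log for entries past the checkpoint and apply a chunk."""
        pending = [e for e in write_log if e["seq"] > checkpoint["last_seq"]]
        for entry in pending[:batch_size]:
            read_model[entry["id"]] = {"total": entry["total"]}  # upsert
            checkpoint["last_seq"] = entry["seq"]

    run_copy_batch()  # first batch: processes seq 1 and 2
    run_copy_batch()  # next batch: processes seq 3
    ```

    In a real system the copier would run on a schedule (or be woken by a notification), and the checkpoint would be persisted alongside the read store so the copier can resume after a restart.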

    In the ideal case, the copy process is idempotent, in the sense that processing some change to the write storage model twice has the same effect as processing that change once.
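    A quick sketch of what that idempotence looks like in practice, under the same illustrative names as above: keying the write by aggregate id (an upsert) makes a duplicate delivery harmless, whereas blindly appending would not.

    ```python
    read_model = {}

    def apply_change(change):
        # Upsert keyed by id: replaying the same change leaves the same state.
        read_model[change["id"]] = {"total": change["total"]}

    change = {"id": "order-1", "total": 10}
    apply_change(change)
    state_after_once = dict(read_model)
    apply_change(change)                   # duplicate delivery, e.g. after a crash
    assert read_model == state_after_once  # same effect as processing it once
    ```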

    The copying process is certainly going to have to know about the data model (how to retrieve stored information, what that information means), but should not need to know very much about the domain model (the policies that govern how stored information changes, the transient data structures used to compute those changes).

    Adding more read models is "just" a matter of setting up another copy process, with its own progress metadata, and its own storage.
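    Extending the earlier sketch, two independent copy processes can run over the same write log, each with its own progress metadata and its own storage, so each view can lag or catch up on its own schedule. Again, every name here is illustrative.

    ```python
    write_log = [
        {"seq": 1, "id": "order-1", "total": 10},
        {"seq": 2, "id": "order-2", "total": 25},
    ]

    class Projector:
        """One copy process: private checkpoint, private store, own projection."""
        def __init__(self, project):
            self.last_seq = 0        # this view's own progress metadata
            self.storage = {}        # this view's own read store
            self.project = project   # how this view denormalizes a change

        def poll(self):
            for entry in write_log:
                if entry["seq"] > self.last_seq:
                    self.storage[entry["id"]] = self.project(entry)
                    self.last_seq = entry["seq"]

    totals_view = Projector(lambda e: {"total": e["total"]})
    audit_view = Projector(lambda e: {"seen_at_seq": e["seq"]})

    totals_view.poll()
    audit_view.poll()   # each view advances independently of the other
    ```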

    In the general case, there will be some real wall-clock time between the point at which information is visible in the "write database" and the point at which information is visible in each "read database"; we don't expect equilibrium to be re-established instantaneously.