domain-driven-design, ddd-repositories, aggregateroot, unique-index

Where to validate unique AggregateRoot-properties?


I have a "large" set of AggregateRoots with a property that should be unique in its context. But where do I validate this? I guess it depends on what the context is and as I see it I have two options:

Either I implement the validation within a repository service, so the persistence logic can validate unique properties before saving aggregates (which would then also have to synchronize all saves of this AR type).
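
A minimal sketch of this first option, assuming a hypothetical `Customer` aggregate with a unique `email` property and an in-memory store standing in for real persistence:

```typescript
// Option 1 sketch: uniqueness enforced inside the repository.
// `Customer` and its `email` property are illustrative placeholders.
class Customer {
  constructor(public readonly id: string, public email: string) {}
}

class CustomerRepository {
  private byId = new Map<string, Customer>();

  // Saves must be serialized (one process, a lock, or a database
  // unique index) so two concurrent saves can't both pass the check.
  save(customer: Customer): void {
    const duplicate = [...this.byId.values()].find(
      (c) => c.email === customer.email && c.id !== customer.id
    );
    if (duplicate) {
      throw new Error(`email '${customer.email}' is already taken`);
    }
    this.byId.set(customer.id, customer);
  }
}
```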

Or I move the "unique index" into another aggregate as a dictionary of aggregate references and let this dictionary validate the unique properties. Since I have a very large set of ARs, this approach could be problematic unless it is implemented so that the index can be kept on disk as much as possible.
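
A minimal sketch of this second option; `UniqueEmailIndex` and the names it uses are illustrative, and for a very large set the in-memory `Map` would have to be backed by persistent storage:

```typescript
// Option 2 sketch: a dedicated index aggregate mapping each unique
// value to the id of the aggregate that owns it.
class UniqueEmailIndex {
  // value -> id of the aggregate that claimed it
  private entries = new Map<string, string>();

  // Claim a value for an aggregate; fails if another aggregate holds it.
  claim(email: string, customerId: string): void {
    const owner = this.entries.get(email);
    if (owner !== undefined && owner !== customerId) {
      throw new Error(`email '${email}' is already claimed by ${owner}`);
    }
    this.entries.set(email, customerId);
  }

  // Release the old value when the property changes on its owner.
  release(email: string): void {
    this.entries.delete(email);
  }
}
```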

But is there any true winner here? Are both methods valid and safe to use? Any major drawbacks to consider? Other variants?

My thoughts:

The first method is perhaps a bit simpler, but it is also more limited. It is, for instance, more complicated to maintain multiple indexes for the same AR type, should that ever be needed. The second method is localized to a single aggregate, which is more in line with how aggregates should be handled, I guess. The first method requires all aggregates of this type to be saved by the same process, since all saves have to be synchronized. The second method has no such requirement, but instead introduces an index aggregate that every save has to pass through in order to validate new and updated values of the property. It also does not detect whether multiple aggregates with the same property value already exist in the database; it only ensures that the aggregates referenced by the index have unique properties.


Solution

  • The aggregate only cares about its own consistency. It has no real interest in how it correlates with or relates to anything else in the system outside of its own boundaries.

    If you need to do any cross-aggregate checks, there are two options. The first is to reconsider your aggregate boundaries; maybe your current aggregates are really just entities of a larger aggregate. However, that won't work if your transaction scope is what you currently model as an aggregate (although the uniqueness constraint kind of contradicts this statement).

    But we all know things like the infamous unique user name paradox. It is clear that the entire set of users cannot be a single aggregate, yet you still need to ensure that user names are unique. The second option, then, is to check for uniqueness in the application service, before even going to the aggregate; a sketch follows below. If your query store is fully consistent, this should never be a problem. If your write and read sides are not guaranteed to be in sync, you can still use the read side to ensure uniqueness, accepting the possibility that the constraint may occasionally be violated. If there will be no major blast and no kittens will die, you can probably accept such a situation and deal with the constraint violation when it actually happens, which might be never.
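
A minimal sketch of this application-service check; `UserQueryStore`, `UserRepository`, and `RegisterUserService` are assumed interfaces for illustration, not a real API:

```typescript
// The application service consults a read-side query before touching
// the aggregate. If the read side is eventually consistent, this check
// can race, so a database unique constraint remains the final safety net.
interface UserQueryStore {
  userNameExists(userName: string): Promise<boolean>;
}

interface UserRepository {
  save(user: User): Promise<void>;
}

class User {
  constructor(public readonly id: string, public readonly userName: string) {}
}

class RegisterUserService {
  constructor(
    private readonly queries: UserQueryStore,
    private readonly users: UserRepository
  ) {}

  async register(id: string, userName: string): Promise<void> {
    // Pre-check for uniqueness before creating the aggregate.
    if (await this.queries.userNameExists(userName)) {
      throw new Error(`user name '${userName}' is already taken`);
    }
    await this.users.save(new User(id, userName));
  }
}
```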