CQRS, Event Sourcing and DDD

Notes from Greg Young's CQRS course

Posted by Frederik Michel on April 1, 2016 in Conference tagged with Microservice

In these notes I would like to share my thoughts about the course I took together with Rainer Michel and Raul-Andrei Firu in London about the topics mentioned above. Over these three days last November, Greg Young used many practical examples from his career to explain the benefits of CQRS in particular and how it relates to concepts like Event Sourcing, which is one way of reaching eventual consistency.

CQRS and DDD

So let’s get to the content of the course. It all starts with some thoughts about Domain-Driven Design (DDD), especially about how to arrive at a design. This included strategies for getting information out of domain experts and for establishing a ubiquitous language between different departments. All in all, Greg pointed out that the software simply does not have to solve every problem there is, which is why the resulting domain model is quite different from the entity-relationship model that might come to mind when tackling such problems. One should think about the actual use cases of the software rather than about solving each and every corner case that in practice will never happen. He showed very interesting strategies for breaking up relations between domains in order to minimize the number of getters and setters used across domain boundaries. At the end, Greg spoke briefly about domain services, which encapsulate logic that spans several aggregates and keeps the transaction consistent. More often than not, however, one should consider eventual consistency instead of such a domain service, because a domain service explicitly signals that one is breaking the rule of touching no more than one aggregate within one transaction. In this part Greg only touched briefly on CQRS, describing it as a step on the way to an architecture with eventual consistency.
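The idea of minimizing getters and setters between domains can be sketched in a few lines. The following is only a minimal illustration of an aggregate that exposes intention-revealing behaviour and protects its own invariants; the `InventoryItem` name and its methods are hypothetical, not taken from the course:

```python
class InventoryItem:
    """Aggregate root: all changes go through intention-revealing methods,
    so other domains never reach into its state via setters."""

    def __init__(self, sku: str, quantity: int = 0):
        self._sku = sku
        self._quantity = quantity

    def check_in(self, count: int) -> None:
        if count <= 0:
            raise ValueError("count must be positive")
        self._quantity += count

    def remove(self, count: int) -> None:
        # The aggregate enforces its own invariant instead of letting
        # callers mutate the quantity directly.
        if count > self._quantity:
            raise ValueError("cannot remove more items than are in stock")
        self._quantity -= count

    def available(self) -> int:
        # Read-only query, e.g. for feeding a read model.
        return self._quantity
```

The point is that callers express *what* they want (`check_in`, `remove`) rather than manipulating the aggregate's data from outside.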

Event Sourcing

This topic was about applying event sourcing to a pretty common architecture: a relational database with an OR-mapper on top, and domains/domain services above that. On the other side there is a thin read model based on a database with denormalized (1NF) data. Greg showed that this architecture will eventually fail in production. The problem is keeping those instances in sync, especially after something has gone wrong in production; in such cases it is occasionally very hard to get the read and write model back on the same page.

To move this kind of architecture towards event sourcing, communication between the components/containers has to become more command based. This is generally realized by introducing an event store, which records the events that result from the commands sent by the frontend. The approach eventually leads to a point where the aforementioned 3NF database (which up to that point has been the write model) is dropped completely in favor of the event store. There are two reasons for this. First, the event store already holds all the information that is also present in the database. Second, and perhaps more important, it holds more information than the database, because the database generally keeps only the current state, while the event store also records every event in between, which can be relevant for analyzing the data, reporting, and so on.

What this architecture also brings to the table is eventual consistency, since a command sent by the UI takes some time until its effect is visible in the read model. The main point about eventual consistency is that the data in the read model is never false data; it may merely be old data, which in most cases is not critical. There are, however, cases where consistency is required, and for these situations there are strategies to simulate it: make the time the server needs to get the data into the read model smaller than the time the client needs to request the data again. Mostly this is done by simply telling the user that the server has received the change, or by having the UI fake the output.
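The split between an append-only event store and a denormalized read model can be sketched as follows. This is a minimal in-memory illustration under my own assumptions (the `Event`, `EventStore`, and `BalanceReadModel` names are hypothetical), not how any particular event store product works:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class Event:
    stream: str   # which aggregate this event belongs to
    type: str     # e.g. "Deposited"
    data: dict

class EventStore:
    """Append-only log: keeps every event, not just the current state."""

    def __init__(self):
        self._events: list[Event] = []
        self._subscribers: list[Callable[[Event], None]] = []

    def subscribe(self, handler: Callable[[Event], None]) -> None:
        self._subscribers.append(handler)

    def append(self, event: Event) -> None:
        self._events.append(event)
        # Projections are updated after the fact - in a real system this
        # happens asynchronously, which is where eventual consistency enters.
        for handler in self._subscribers:
            handler(event)

    def replay(self, stream: str) -> list[Event]:
        # The full history stays available, so any point in time can be rebuilt.
        return [e for e in self._events if e.stream == stream]

class BalanceReadModel:
    """Denormalized read side, derived purely from events."""

    def __init__(self):
        self.balances: dict[str, int] = {}

    def handle(self, event: Event) -> None:
        if event.type == "Deposited":
            self.balances[event.stream] = self.balances.get(event.stream, 0) + event.data["amount"]
        elif event.type == "Withdrawn":
            self.balances[event.stream] -= event.data["amount"]
```

Because the read model is a pure function of the event log, it can be dropped and rebuilt by replaying the store, which is exactly why the separate write-side database becomes redundant.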

To sum this up - the advantages of an approach like this are that every point in time can be restored (no data loss at all) and that the system can pretend to keep working even while the database is down (we just show the user that we received the message, and everything can be restored once the database is up again). In addition, if a SEDA-like approach is used, it is very easy to monitor the solution and determine where the time-consuming processes are. One central point in this course was that by all means we should prevent widespread outages - meaning errors that make the complete application crash or stall with effect on many or all users.

Process Managers

This topic was essentially about separation of concerns - in particular, that process logic and business logic should be kept apart. This should be done as much as possible, because the system can then easily be switched to a workflow engine in the longer run. Greg showed two ways of building a process manager. The first one simply knows in which sequence the business logic has to run and triggers each step one after the other. In the second approach the process manager creates a list of the processing steps in the correct order and hands this list to the first step, which passes it on to the next, and so forth. In this case the process logic lives in the list, or rather in the creation of the list.
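Both styles of process manager can be sketched in a few lines. This is a hedged illustration, assuming each step is simply a function that takes a message and returns it enriched; the step names are hypothetical and not from the course:

```python
# Hypothetical business-logic steps: each takes a message and returns it enriched.
def reserve_stock(msg):
    return msg + ["stock reserved"]

def charge_payment(msg):
    return msg + ["payment charged"]

# Style 1: the process manager itself knows the sequence
# and triggers each step one after the other.
class SequencingProcessManager:
    def __init__(self, steps):
        self._steps = steps

    def run(self, message):
        for step in self._steps:
            message = step(message)
        return message

# Style 2: the manager only builds the ordered list (a "routing slip");
# each step runs and forwards the message plus the rest of the list.
def run_routing_slip(message, slip):
    if not slip:
        return message
    head, *rest = slip
    return run_routing_slip(head(message), rest)
```

In the first style the process logic stays inside the manager; in the second it is encoded in the list itself, so the manager is no longer involved once the first step has been triggered.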

Conclusion

Even though Greg sometimes switched quite quickly from very abstract thoughts to deep dives into source code, the course was never boring - it was actually rather exciting and absolutely fun to follow. The different ways of approaching a problem were illustrated with very good examples - Greg really did a great job there. I can absolutely recommend this course to people who want to learn more about these topics. From my point of view this kind of strategy was very interesting, as I see many people trying to create the “perfect” piece of software, paying attention to cases that just won’t happen, or spending a lot of time on cases that occur very rarely rather than defining them as known business risks.