MSDN has recently published a Bid Now sample application for the Windows Azure cloud computing platform.
Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications.
This sample is built on Windows Azure and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live ID.
It is an extremely nice starting point. Yet I think there are at least three major directions in which this sample could be improved in order to serve as proper architectural guidance and a reference implementation for building cloud computing solutions.
First, even such a simple application already has quite a complex architecture. There is no strict logical structure around the messages, handlers, objects and behaviors. It is possible to draw a diagram sketching all their relations, but it will be rather hard to understand and work with:
Development, delivery and maintenance of such a solution might turn into an expensive project with significant development and mental friction.
We can work around these problems by leveraging approaches like Domain-Driven Design, which provides a proven way of formalizing and organizing the elements of business solutions so that:
- Real-world problems and domains can be expressed in the architecture (and there are deterministic ways of doing so)
- Overall complexity is reduced, since we explicitly enforce a certain organization of the architecture. This makes the solution more structured and understandable by mere mortals, which in turn makes it possible to reflect and handle rich business scenarios and behaviors in the code.
- A ubiquitous language is established, allowing the team to discuss the model with domain experts AND evolve this model in accordance with real-world changes.
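To make the difference concrete, here is a minimal sketch of what behavior-rich domain code for an auction could look like. All names here (Auction, Money, PlaceBid) are illustrative, not taken from the Bid Now code base:

```csharp
// Illustrative DDD-style sketch: behavior lives on the domain objects
// and is named in the ubiquitous language ("a bid is placed on an auction"),
// rather than being spread across handlers and storage helpers.
public sealed class Money
{
    public decimal Amount { get; private set; }
    public Money(decimal amount) { Amount = amount; }
    public bool IsGreaterThan(Money other) { return Amount > other.Amount; }
}

public sealed class Auction
{
    private Money _highestBid = new Money(0m);
    public Money HighestBid { get { return _highestBid; } }

    // The business rule ("only a higher bid wins") is expressed once,
    // inside the model, instead of in every handler that touches bids.
    public bool PlaceBid(Money bid)
    {
        if (!bid.IsGreaterThan(_highestBid)) return false;
        _highestBid = bid;
        return true;
    }
}
```

A domain expert can read `PlaceBid` and confirm or correct the rule, which is exactly the conversation the ubiquitous language is meant to enable.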
There are books on DDD already.
Second, the service bus implementation is based on a rather simplistic approach (message handlers) that might not work out nicely in more complex real-world applications. In essence, messaging is like inversion of control for enterprise applications. All users of Autofac, Castle, Unity or StructureMap already know how important it is to have a proper container in order to manage the complexity of a solution as it grows. In fact, there are already well-known requirements that any generic container should fulfill in order to be called a proper IoC/DI solution. The same holds for messaging and service buses: the requirements are already known and implemented by NServiceBus, MassTransit, Rhino ESB and similar projects in the Java community.
The "BaseHandler" architecture of the Bid Now application could be improved at least by introducing:
- Publish/subscribe, in order to decouple message handlers from each other and allow them to evolve independently
- Poison and discard queue management, since we don’t want failing messages to block the worker roles; neither do we want to lose information about such messages.
- Proper transaction scopes, since we don’t want failing operations to introduce inconsistency into the system. If message processing fails halfway (e.g., due to a connectivity issue), all operations should be rolled back before the handler retries. We don’t want to charge our customer twice, do we?
- Implicit message serialization and deserialization that gets rid of all the manual string-parsing code and provides a way to stream large messages over to blob storage (in real-world applications developers sometimes need to send messages larger than the 8 KB limit of Azure Queues)
- Proper infrastructure that gets rid of all the unnecessary plumbing in the code. We could at least add automatic handler wiring (including interfaces) and IoC to the picture. Writing such things manually is not fun:
handlers.Add(new CategoryHandler(5000));
handlers.Add(new FinishingSoonHandler(5000));
handlers.Add(new UserItemHandler(5000));
handlers.Add(new UserBidItemHandler(5000));
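As an illustration of the publish/subscribe and automatic wiring points above, here is a sketch of a small in-memory bus. The names (`IHandle<T>`, `InMemoryBus`) are hypothetical and the sketch ignores queues, transactions and poison-message handling; it only shows how reflection-based wiring removes the manual registration list:

```csharp
// Hypothetical sketch, not part of the Bid Now sample or any Azure API.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public interface IHandle<TMessage>
{
    void Handle(TMessage message);
}

public class InMemoryBus
{
    private readonly Dictionary<Type, List<Action<object>>> _subscribers =
        new Dictionary<Type, List<Action<object>>>();

    // Scan an assembly and subscribe every IHandle<T> implementation
    // (assuming parameterless constructors here for brevity), so adding
    // a new handler class requires no registration code at all.
    public void WireHandlersFrom(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes().Where(t => !t.IsAbstract))
        foreach (var itf in type.GetInterfaces()
            .Where(i => i.IsGenericType &&
                        i.GetGenericTypeDefinition() == typeof(IHandle<>)))
        {
            var handler = Activator.CreateInstance(type);
            var method = itf.GetMethod("Handle");
            Subscribe(itf.GetGenericArguments()[0],
                      m => method.Invoke(handler, new[] { m }));
        }
    }

    public void Subscribe<TMessage>(Action<TMessage> action)
    {
        Subscribe(typeof(TMessage), m => action((TMessage)m));
    }

    private void Subscribe(Type messageType, Action<object> action)
    {
        List<Action<object>> list;
        if (!_subscribers.TryGetValue(messageType, out list))
            _subscribers[messageType] = list = new List<Action<object>>();
        list.Add(action);
    }

    // Publish/subscribe: every handler of the message type gets the message,
    // so handlers stay decoupled and can evolve independently.
    public void Publish(object message)
    {
        List<Action<object>> list;
        if (_subscribers.TryGetValue(message.GetType(), out list))
            foreach (var action in list) action(message);
    }
}
```

In a real worker role the container would create the handlers and the bus would sit on top of Azure Queues, but the wiring principle stays the same.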
By the way, note the linguistics of the handler names. These are not commands or events but nouns, which leads toward an anemic domain model (an anti-pattern Fowler documented back in 2003).
Third, Command-Query Responsibility Segregation already provides a well-known and established body of knowledge on defining the architecture of such consumer applications and delivering them. What’s more important, there is a clear migration path towards CQRS for existing brown-field solutions.
CQRS couples nicely with DDD (guidance on understanding, modeling and evolving the domain in step with business changes) and Event Sourcing (a storage-agnostic approach of persisting the domain as a stream of events, which brings scalability, simplicity and numerous business benefits).
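The core of the idea fits in a few lines. Here is a minimal sketch of the command/query split with illustrative names (again, not taken from the Bid Now sample): commands go to the write side, which validates them against the domain and emits events; a projection consumes those events to maintain a denormalized read model that web pages can query cheaply:

```csharp
// Hypothetical CQRS sketch; the in-memory dictionaries stand in for
// whatever storage (e.g. Azure tables) a real solution would use.
using System;
using System.Collections.Generic;

public class PlaceBid { public Guid AuctionId; public decimal Amount; }   // command
public class BidPlaced { public Guid AuctionId; public decimal Amount; }  // event

public class AuctionWriteModel
{
    readonly Dictionary<Guid, decimal> _highest = new Dictionary<Guid, decimal>();

    // Command handler: enforces the business rule, then emits an event.
    public BidPlaced Handle(PlaceBid cmd)
    {
        decimal current;
        _highest.TryGetValue(cmd.AuctionId, out current);
        if (cmd.Amount <= current) return null; // bid rejected
        _highest[cmd.AuctionId] = cmd.Amount;
        return new BidPlaced { AuctionId = cmd.AuctionId, Amount = cmd.Amount };
    }
}

public class AuctionListProjection
{
    // Read model, already shaped for the auction list page:
    // no joins or domain logic needed at query time.
    public readonly Dictionary<Guid, decimal> HighestBidByAuction =
        new Dictionary<Guid, decimal>();

    public void When(BidPlaced e)
    {
        HighestBidByAuction[e.AuctionId] = e.Amount;
    }
}
```

Because the read side is fed by events, it can be scaled, denormalized and rebuilt independently of the write side, which is exactly what a highly scalable consumer application like Bid Now needs.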
There is already plenty of material on CQRS by Greg Young, Udi Dahan, Mark Nijhof, Jonathan Oliver and many others (links).
Given all this, I’d think that instead of inventing new Azure guidance for developing enterprise applications, Microsoft might simply adopt these established and documented principles, adjusting them slightly to the specifics of the Windows Azure environment and filling in the missing tool set (or just waiting until the OSS community does so). This approach might provide better ROI for the resources at hand, along with faster and more successful adoption of Windows Azure for enterprise applications.
What do you think?