For a while now, at work I’ve been in a kind of limbo; not quite an architect, not quite a developer, a pinch of project manager and a sprinkle of business analyst. To tell the truth, as of recently, mostly business analyst—however that may be interpreted (as unfortunate as it is, I don’t think our industry has a coherent view of what a BA does, or is responsible for).
As the challenging project that I’ve been involved with for the past five years draws to a close, I’ve found myself reflecting on its early stages and contrasting them with things as they are now. The cliché goes that hindsight is 20/20, so this is the perfect opportunity to examine the choices that look so clear now but were so murky early on.
A natural tendency, when "shown the truth" through experience, is to focus on the negative. I want to make a conscious decision, though, to focus on the positive. Focusing on the positive, and recognising what we would normally term "failure" as "opportunity for improvement", is what popular wisdom teaches us to strive for. Why would I argue with that?
"The Project" as it will be known, in hindsight, was enormous, is enormous. We didn’t fully recognise it as being enormous at the time. I’m not at liberty to explain what it’s about, suffice to say it is in the financial industry (that shouldn’t be enough to incriminate me), and suffice to say that the business logic involved is highly complex. I doubt whether I will ever again, in my life, encounter a bespoke development project involving business logic as complex as this.
In any case, we spent some time coming up with a detailed functional requirements document. This is a story I’m sure you’ve heard before. You might even be sighing internally as you read this … a sigh of pity; "more poor souls who didn’t embrace Agile" you might be saying in your head. There are no excuses here, because I don’t think that specifying requirements in detail up-front was entirely the wrong thing to do in this case. That’s because what was required was poorly understood and prone to misinterpretation. The subject matter was, and is, the domain of academia, being tackled by folk who for the most part didn’t have formal qualifications. Due diligence, even through up-front requirements specification and review, was justified, however you look at it. The only way that interpretation of requirements could have been tackled efficiently in an agile setting would have been if the product owner was uber-qualified—someone who knew the business backwards and who had a sufficient personal stake to take delivery personally. No such person existed.
The positive: specifying requirements as a detailed document, up-front, was the best thing to do given the circumstances.
Following on from the requirements, as is dictated by a well-behaved waterfall implementation, is the architecture specification. That was, and continues to be, my responsibility.
The guiding principles that led me to the architecture that shapes our lives:
- Live, Eat and Sleep Objects;
- Keep it Simple (S);
- You Ain’t Gonna Need It;
- If Microsoft Says it, it Must Be So;
- Don’t Repeat Yourself.
If you read between the lines of the above, there’s a theme, let’s call it "minimalism", engendered primarily by "Keep It Simple (S)" and "You Ain’t Gonna Need It", but arguably also by "If Microsoft Says It, It Must Be So" and DRY.
Let’s back up a bit and start with …
Live, Eat and Sleep Objects
I have always, in my heart-of-hearts, believed that modelling a real-world problem with objects is the way to go. It’s all parcelled up in some deep beliefs I have about the consequences of lying to oneself. If you represent something as something other than what it is (or at least as something not as close as you can get to what it is, given the tools at hand), you’re on a slippery slope to inevitable doom. As it turns out, through experience, and through confirmation by Martin Fowler in his PoEAA, there really was no other choice—when presented with a sufficiently complex domain, the choice of whether or not to use a rich domain model is a simple one … it is, for all intents and purposes, made for you.
The positive: modelling the domain using well-understood OO modelling techniques was far and away the right choice; with such a complex domain, anything else would have been death-by-duplication.
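To make that concrete, here’s a minimal sketch of what a rich domain model buys you (in Python for brevity; the project itself was .NET, and the entity and rules below are hypothetical illustrations, not from the actual domain):

```python
from dataclasses import dataclass, field

@dataclass
class LoanAccount:
    """A rich domain object: the business rules live with the data they govern."""
    principal: float
    annual_rate: float
    payments: list = field(default_factory=list)

    def accrue_interest(self, days: int) -> float:
        # Simple day-count interest. In an anaemic model this calculation
        # tends to be re-implemented (and to drift) in every calling service.
        return self.principal * self.annual_rate * days / 365

    def apply_payment(self, amount: float) -> None:
        # Invariants are enforced in one place, not by every caller.
        if amount <= 0:
            raise ValueError("payment must be positive")
        self.principal -= amount
        self.payments.append(amount)
```

With an anaemic model, the interest calculation and the payment invariant end up scattered across service code; multiply that by a genuinely complex domain and you get exactly the death-by-duplication described above.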
Keep It Simple (S)
What has been said about KISS has been said many times before and continues to be said—it’s one of those instances where one can be forgiven for repeating oneself. One would think that one can’t go wrong with KISS: the benefits are obvious, and the implementation is supposed to be equally so. Sadly, this is not always the case. The fact is, when confronted with a complex domain, there is a certain amount of “not-so-simple” that one must take on as necessary baggage in order for things to be manageable, and that’s where this little four-letter-bundle-of-wisdom has a sting in the tail.
Nevertheless, this particular post is all about the positive, so here are some positive things that KISS led me to:
Two tiers: a client tier and a database tier; that’s it.
And I have great news … this has served us well. We did need to take on a persistent service running on an application server, simply because there are things that are global and shouldn’t depend on whether or not a client workstation happens to be running, but that’s the extent of it. No need for Windows Workflow Foundation, Windows Communication Foundation or Microsoft Message Queue; just a good ol’ database.
The positive: KISS has its benefits—we didn’t take on unnecessary technology just because we thought we might need it, or because it had a warm-and-fuzzy enterprisey feel to it.
You Ain’t Gonna Need It
This one is pretty closely related to KISS, and I have to admit, my approach at the outset was quite heavily influenced by the “last responsible moment” camp. YAGNI at the detail level is helpful, even necessary, but at an architectural level it has far-reaching consequences. I will continue to give it its due, but not with the same rigour as DRY, for instance, and certainly not at the macro (read: architectural) level. The problem is that YAGNI dictates that you only consider what you can see, and in doing so provides a false sense of peace that it’s OK to forge forward with inadequate knowledge of the territory. For example: “I’d like to use objects, but damn … all this talk about ORMs—all I need is LINQ to SQL, right?”. Wrong.
The positive: as a meta-lesson, YAGNI shouldn’t be applied at an architectural level, and I’ll even go so far as to say that this whole “last responsible moment” talk is somewhat irresponsible.
If Microsoft Says It, It Must Be So
I feel sorry for the tens of thousands of developers out there whose lives are bound by the Redmond Reality Distortion Field. I know I shouldn’t, because the vast majority of them are in bliss and will continue to be for as long as ignorance allows. When I started this project I had come out of what was effectively six years of Java and Unix, and even though prior to that I had done Windows development, I was happy to accept any guidance from the Mothership that I could get.
Sadly, experience has proven that Microsoft makes best-of-breed products in some areas, but not in others.
Things where Microsoft is best-of-breed:
- The .NET platform;
- Visual Studio.
Things where Microsoft is not best-of-breed:
- Test-driven development, and associated tools (MSTest);
- Domain-driven design, and associated tools;
- Build tools (MSBuild);
- Source-control (TFS);
- ORMs (EF);
- IoC containers (Unity);
- Aspect orientation (Unity).
We adopted some of the technologies above simply because it was the “Word from Redmond”, and didn’t adopt others because they didn’t yet exist and Microsoft had no answer to equivalents in the Java world that had been around for years. I’m not going to labour this one, because too much emotional energy has already gone into it; suffice to say that I’m disappointed.
The positive: the lesson is, don’t blindly follow what may, in fact, be akin to a faith. Question, dig deep, analyse, consider, understand and criticise before you marry yourself to a technology—I’m much wiser for this experience.
Don’t Repeat Yourself
I don’t know how many times I’ve repeated this, but ironic as it is, I’ll continue to do so. The problem with DRY is that it’s only when you don’t observe the principle that you discover how valuable observing it would have been. Everything we’ve done has revolved around this, including:
- An aspect-orientation system for applying cross-cutting functionality in a type-independent way;
- A single view of data and generation of types from the data model that are used by the data access code;
- Generation, from a single source, of enumerated types that have an analogue as a table in the database;
- A philosophy of single-responsibility at project, type and method level, instilled in how we develop and refactor as a team;
- A philosophy of writing code that is self-documenting.
Revisiting the statement I made about knowing the value that DRY has delivered, and referring to the second item above: we have not once had a situation where we referenced a table or column at runtime that didn’t exist in the database.
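The table-driven enum generation can be sketched as follows (Python for brevity; the project itself generated .NET types from the actual schema at build time, and the table rows here are stubbed-in, hypothetical examples):

```python
# The database table is the single source of truth; the language-level enum
# is generated from it, so code and schema can never silently disagree.
# (id, name) rows — in the real build these would be read from the schema.
STATUS_ROWS = [(1, "Pending"), (2, "Approved"), (3, "Rejected")]

def generate_enum_source(enum_name: str, rows) -> str:
    """Emit Python enum source text from (id, name) table rows."""
    lines = ["from enum import Enum", "", f"class {enum_name}(Enum):"]
    lines += [f"    {name.upper()} = {value}" for value, name in rows]
    return "\n".join(lines) + "\n"

# The generated source is written to a file as a build step, then compiled
# with the rest of the codebase.
enum_source = generate_enum_source("OrderStatus", STATUS_ROWS)
```

Because the enum is regenerated whenever the table changes, a renamed or removed row shows up as a compile-time error in the code that uses it, rather than as a runtime surprise.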
The positive: by diligently observing DRY, at architecture level as well as at design and code level, we’ve averted a lot of time that would otherwise have been wasted correcting unnecessary inconsistencies. The real plus, though, is the durability of this benefit—the fact that such waste of resources, time and money won’t ever be repeated.
There’s a lot of wisdom bandied about on “core values” when it comes to software architecture. In fairness, one needs to contextualise things before arriving at a set of values—that is, when embarking on a project, figure out where you’re at in terms of project scale and complexity, and in particular, what would be considered acceptable as “success”.
I’ve put myself in a position now where I feel obliged to distil all of this into a neat uber-tweet of wisdom. Now here’s the real irony … a principle that I didn’t explicitly follow, but that arguably trumps all the others: figure out what’s really important, remembering to express it in business terms, and optimise for that (can you say 80/20?).