The Documentation Problem

Last week I talked about one of the ways in which a software development organisation can assess where it sits on an industry-recognised maturity scale: the maturity model set out by CMMI.  Having to do such an assessment myself, I compiled a number of interview questions in order to determine whether or not the required so-called Specific Goals (SG) were being satisfied.

Because we are simultaneously experimenting with introducing a lean/agile mindset in one of our teams, the inevitable face-off between the classic process-oriented philosophy implied by CMMI and the lightweight, waste-eschewing agile approach has come to the fore.  The maturity assessment interview questions that I compiled were largely influenced by the CMMI specific practices, subpractices and examples as set out in the official document.  I compiled forty questions, and no fewer than twelve of them contain a variation of the word “document”.

What do we mean by “Document”?

In its broadest sense, a document is a record of some sort, the essential characteristics being:

  1. Informational;
  2. Persistent.

Our reasons for keeping documents on projects are many and varied; Scott Ambler has delved into this question quite comprehensively, but distilling this down, we can classify the reasons as:

  1. Maintenance of organisational memory (the “good” reason most people think of);
  2. Because we have to  (for audit purposes, or the customer has explicitly requested it);
  3. In order to make agreements explicit (i.e., contractual).

As Scott points out in his article, one could also be writing documentation in order to “work through something”, for instance to flesh out the details of a design.

Documentation is always intended for human consumption.  In contrast, source code is intended for machine consumption; well, that’s only partly true.  Although our primary focus when working in an agile manner is the immediate requirement for software, there’s a natural tension between that imperative and supporting the next thing.  Along those lines, writing software for the immediate requirement only can be argued to be foolish; there needs to be a balance of some sort.

The Cost of Documentation

Anything that does not result in delivery of value to the customer is waste.  Waste is, by definition, that which is of no value.  Some things are of absolutely no value, for example a process that is still being carried out even though its outputs are no longer required.  Other things can be argued to be of value, but it’s not immediately clear whether they are; for instance, a requirements document or a project plan.  Both such artifacts are deemed necessary in order to produce software, but the link is tenuous and the level of value associated with such artifacts is largely a function of human considerations.

It’s easy to understand, though, that documentation not directly tied to producing working software has an associated cost of ownership.  A document that ostensibly contributes to a software product or application incurs an indefinite annuity cost the moment it comes into existence, because in order to remain of value it needs to be maintained.

Agile Documentation: The Tenets

In order to support an agile process, it’s generally accepted that there are certain tenets that need to be observed; these are in line with the Agile Manifesto:

  1. Essence: just enough, and only that which is of value;
  2. Aggressively favour executable over static;
  3. Produce it only if and when it provides value.

Trying to get documentation to be executable means that it needs to be an active part of the system.  To me this is the most crucial tenet, because it deals with the aspect of documentation generally considered to make projects more likely to fail: introducing documentation is tantamount to introducing risk, because documentation is disconnected (or tenuously connected) from the delivery of value to the customer.  To be sure, it may be connected to the delivery of value, but if it’s not executable, that call is up to a person, or a group of people, and this is where things get fuzzy.

Examples of executable documentation:

  1. Code;
  2. Automated tests (a sketch follows below).
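
To make the second of these concrete: a unit test can serve as a persistent, informational record of behaviour that fails the build the moment it drifts out of date.  A minimal sketch in xUnit-style C# follows; Order, ShippingPolicy and the free-shipping rule are hypothetical, invented purely for illustration:

using Xunit;

public class ShippingPolicyDocumentation
{
    // Executable documentation: this test records a (hypothetical)
    // business rule and re-verifies it on every build, so unlike a
    // page in a spec it cannot silently go stale.
    [Fact]
    public void OrdersOfFiftyOrMoreShipFree()
    {
        var order = new Order { Total = 50m };
        Assert.True(ShippingPolicy.QualifiesForFreeShipping(order));
    }
}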

Of the articles that I’ve read on this subject, many (if not all) talk about comments in code.  In my opinion, comments are an example of documentation disconnected from the delivery of value to the customer and should therefore be avoided.  Almost all comments can be represented by narrative code, thereby making the documentation part of the mechanics of the product.  Occasionally, telling the “how” story in the source code borders on contrived (the effort to do so may significantly exceed the risk associated with introducing “disconnected documentation”); in these situations, if it’s necessary to convey something that will save time when the “next thing” comes along, then it’s OK to use a comment or two.
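
As a sketch of what narrative code can look like in place of a comment (the names here are invented for illustration, not taken from any real codebase):

// Before: a comment props up opaque logic (hypothetical example).
// Check whether the order qualifies for free shipping
if (order.Total >= 50m && order.HasStock)
    ApplyFreeShipping(order);

// After: the comment's content becomes a method name, so the
// "documentation" is compiled, executed and refactored with the code.
static bool QualifiesForFreeShipping(Order order) =>
    order.Total >= 50m && order.HasStock;

if (QualifiesForFreeShipping(order))
    ApplyFreeShipping(order);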

Back to Maturity

I started writing this post originally to “work through” the issue that has been worrying me: the apparent bias towards documentation in the CMMI maturity model, which seems at odds with an agile approach.

I’m going to take a leap here and focus on the intent of what is stated within CMMI.  Although the word “document” is used when describing processes (as in: “Is the process by which you do X documented?”), can we interpret the word slightly differently in order to address what is needed, but at the same time respect our key value of avoiding waste?

If the key attributes of a document are a) an informational record, b) persistence, then what methods can we employ to satisfy those characteristics without introducing things that are disconnected from the delivery of value to the customer?

Looking at the reasons why we document: business continuity, or maintaining organisational memory, is largely a risk-mitigation strategy; that is, an attempt to avoid the bus factor entirely through the thud factor.  In an agile setting, this consideration can be addressed by:

  1. Making sure that there is regular knowledge-sharing between team members;
  2. Making all processes reflected by code (e.g., you do a build?  Have a look at the build script to see how our environments and release process work);
  3. Where it really is impossible or highly contrived to do item 2 above, documenting the essence (and only the essence) and having the document delegate to the source code, or to “talking to a team member”, for more detailed information.

Obviously, you can’t do the above effectively without trust, collaboration and discipline; this goes without saying.

The holy grail is for whatever we deliver to the customer that adds value to be self-documenting.  That’s all well and fine when we’re talking about artifacts that relate directly to the end product, but what about meta-considerations; for instance, how we reflect on our own behaviour for the purposes of improvement?  We could write a document that describes that, yes, but that’s precisely what we’re trying to avoid.  We can’t write code that is self-documenting in this case; it’s meta, after all.

If we take a step back and think about it, how we go about our jobs is sort of like a program.  Take, for instance, the practice of getting together on a regular basis for a retrospective meeting; what determines how we do this: the timing, regularity, structure, decision-making?  It’s something that everybody knows (learnt and agreed-upon behaviours) which is executed (metaphor: executable code) by a team (metaphor: execution platform) of people.  So, to make the “code” in this case self-documenting, it needs to be accessible and understandable.  For that:

  1. Pointers must be available (remember, just the essence), that is, in this case, the what (a retrospective meeting–can be a one-liner on a Wiki);
  2. A “more information” instruction with the pointer on the Wiki—that is, “just ask a team member to explain the process”.

The integrity of the “documentation” of the process is maintained by ensuring that a high number of people (the whole team) know how it’s done, together with team members trusting one another and sharing a philosophy of collaboration that keeps them on the same page.

The persistence requirement is also fulfilled by the built-in redundancy of having multiple people share the same knowledge.  Short of the entire team being wiped out in a plane (or bus) disaster on their way to a team retreat, the risk of loss of IP is taken care of.

Of course, all of this is assuming a land of rainbows and unicorns where the term “corporate politics” is nowhere to be found in the dictionary—mind you, ye traditional manner of maintaining the integrity of processes by tome is arguably no more resistant to such realities of the corporate world, whilst being more expensive and risky by virtue of the false impression that it creates.

Capability Maturity Model Integration (CMMI)

My job affords me the opportunity to dive into all sorts of things.  Sooner or later, if you work in ICT, unless you’re a very small outfit, you’ll bump into some or other governance framework.  In case it isn’t obvious, governance is there to ensure that everyone in the organisation is behaving as they should be; that is, that no laws are being broken and all activities are in the interests of the business.

The Bottom Line: Rules and Regulations

The irony is that one would expect an organisation to be, well … organised, meaning that “governance” is already baked in.  In general, of course, this is not the case and things like SOX, TOGAF, ITIL, COBIT and family tend to be introduced some way down the business lifecycle, often in a forceful manner.

As it turns out, it’s possible to make a respectable amount of money and appear somewhat polished, even to the extent of being an exemplary “organisation”, whilst being decidedly disorganised on the inside; I suppose I’m confessing my naivety when I say that I only fully appreciated this recently.  If disorganised is the worst it gets, then that’s still better than downright rotten.

In any case, let’s say, for argument’s sake, that as an organisation we have elected to “get more organised”.  For the purposes of this article, how we reached this point isn’t relevant; what is relevant is the journey going forward.

Knowing Where We Are: Maturity

This is all 101 stuff—before we embark on a journey, we need to know where we are.  Seems like a silly question, but as an IT organisation how do we do this?  We could get together in a room and cook some stuff up: after all, we’re all intelligent, educated folk … we know what we’re doing, right?  We could also stand on the shoulders of giants and use something that someone has built specifically for this purpose.

In an IT organisation, how mature are we?  As you suspected, answering this question is going to require work.  That said though, there’s something that can help us, namely the Capability Maturity Model (Integration) or … CMMI (link to the Software Engineering Institute’s site).

A Process of Improving: Stages

CMMI is a modern process improvement framework that provides guidance on getting to that “next level” of productivity and efficiency.  It’s modelled on the Capability Maturity Model, hatched back in 1988 as a process improvement framework specifically for software development.

Fast-forward to 2014 and we’ve got something that applies not just to software development but to an entire IT Services Management organisation.

In any case, the principle is straightforward (and hasn’t changed over the years): move up a ladder of maturity, each rung being a maturity level (“ML” in CMMI parlance).  The maturity level diagram hasn’t changed much over the years either; the classic five-level illustration (“Characteristics of Capability Maturity Model” by Sally Godfrey, public domain, via Wikimedia Commons) will do for the purposes of illustration.

Shades of a Governance Framework: What is this Animal?

The acronyms are abundant, so this framework doesn’t disappoint in the governance framework jargon department.  There are process areas (PAs), specific and generic goals (SGs and GGs) and specific and generic practices (SPs and GPs), to name a few.  Being something that by design accommodates an ITSM organisation in its entirety, CMMI is vast and has spawned the inevitable consulting industry to “get you going” (and keep you going).  It’s no surprise, then, that apart from the standard artifacts provided by the Software Engineering Institute and CMMI Institute, there’s little free material out there.

Frameworks like these tend to suffer from an unavoidable problem: in order to be of value, one needs to be specific; however, being specific means having to talk about real-world practices, practices about which there will always be a healthy dose of advocacy (can you say “traditional vs agile”?).  The most recent version of the framework has put effort into distancing itself from being pinned to specific practices.  Nevertheless, one can’t blame the uninitiated for thinking that CMMI advocates traditional practices over lean or agile methods when the names of key process areas are “Requirements Management (REQM)”, “Project Monitoring and Control (PMC)” and “Project Planning (PP)”.

The “brutally honest, totally hip CMMIFAQ” takes great pains to point out that for maturity assessment purposes, the only things that are required to be fulfilled (an affirmative to the question: “are you doing this?”) are the so-called goals (specific and generic).  Goals are the highest level description of maturity criteria—here’s an example (from the Configuration Management (CM) process area):

Baselines of identified work products are established.

That sentence doesn’t give one much to go on, does it?  Now if you dig into the practices and subpractices, you can get an idea of what’s being asked for here, but doing so means that you start to align yourself with a certain flavour of methodology.  No prizes for guessing what methods are being used as a basis for modelling given the ancestry of this beast.

Despite its genetic makeup, CMMI is striving to please everyone.  There’s even a paper available for download from the Software Engineering Institute’s digital library that talks about how CMMI and agile are by no means incompatible—it makes a good read.

CMMI and Agile: The Lion Lies with the Lamb?

I’m a relative newbie with this CMMI stuff, but so far, convincing me that it can be applied smoothly to an environment that practises lean/agile methods is something akin to convincing me that creationism and evolution are both equally valid.  I’m not going to give up on the idea that I will eventually be persuaded, though; I’m way too invested in CMMI providing us with a much-needed, industry-accepted standard for measuring where we are.


Agility as Religion, Part 3

It’s been some time coming, but if you’ve been following this thread you’ll have heard a whole lot about the nature of corporate governance and the associated challenges of getting agile practices embraced.  The title, though, has the word “religion” in it (and has for three posts now), so where does that fit in?

Depending on your background, the word “religion” can conjure up a variety of images.  Perhaps it’s places of worship, stained glass windows or ceremonious getup.  Perhaps it’s the idea of unwavering adherence to dogma.  I’ve realised now that “religion” isn’t quite the word I was looking for—the point I’m trying to make here isn’t so much about religion as it is faith.

To get that key person in your organisation to back an agile effort is more than likely going to require them to go out on a limb.  Your C-suite leader is going to have to embrace something non-traditional and quite frankly, kooky.  There are, of course, the unconventional leaders in some organisations who have the natural aptitude to walk that path, but they’re outliers.

If your boss is a traditionalist, that change is going to be something akin to a paradigm shift; she’s going to have to have her own road to Damascus experience.  There’s a whole lot of leading the horse to water that you can do, but the actual event of shifting approach is a matter of faith.  Faith on your part (as the persuader) and faith on the part of the decision-maker.

You may be lucky enough to be at that stage where the standard way of solving systems development problems has been, for some time now, demonstrably failing the IT department.  That will mean that IT has pretty much taken up permanent residence in the dog box.  Issues of quality are resulting in regular “near miss” incidents of major corporate embarrassment (with the odd ruction-inducing direct hit); overrunning projects are beginning to be the norm and the politically correct open-door policy which used to be the proud example set by the corner office has been abandoned in an effort to suppress the noise associated with C-suite verbal showdowns.  Yes, I said lucky enough, because that’s what this situation is – the perfect environmental conditions for a paradigm shift.

You probably won’t notice the event proper because it’s one of those things that isn’t accompanied with much fanfare.  There will be little hints at first – mentioning a word like retrospective won’t immediately be followed by a “don’t get all zen on me now” retort or protesting that a project manager or a business analyst is “also a software developer” won’t be met with immediate derision.  Then there will be a moment where you’ll know that it is done—a not-insignificantly-priced purchase order for agile coaching is approved.

Assuming the shift has been made, there’s no going back to the old way – there are few greater forces than the newly converted.  The rest is about providing guidance and channeling the new-found energy enjoyed by your reborn agile leader.

If your organisation is not at that crucial point, then yours is a waiting game—waiting for that perfect storm of (trying) conditions where a key individual is forced to abandon their closely-held belief system in a desperate effort to survive.

As you can see, the success of “turning” your leader is a function of circumstances, timing and regular, gentle suggestion; not much different from the analogue of religious conversion.

Agility as Religion, Part 2

Last week I waxed lyrical on the circumstances that are necessary for real change in an organisation and the inevitable challenge that pursuers of agile methods are faced with in light of these prerequisites.

To recap: since a software developer’s ultimate goal isn’t producing some code, but delivering value to the customer, she is forced to consider ways in which her circle of influence can be expanded to meet her circle of concern (where the primary concern is delivering real value to the customer).  In most organisations, change beyond the development team necessarily requires action at a senior management level.

As a professional software developer, I have a duty to do the best I can when it comes to delivering value to the customer, despite my circumstances.  In my book, a professional is more than just someone who does work for money; a professional is someone who has the customer’s interests at heart—everything flows out of this.

Professionalism means being proactive; more specifically, the Covey flavour of that phrase.  In practice, this translates to moving beyond practices with which we feel comfortable.  Software development became my career mostly because I enjoyed programming, starting at a young age, but as the years went on, I found myself becoming increasingly frustrated by what were, in my opinion, ill-informed decision-making practices of superiors.  The crux of this thread is that it is up to us, as software developers, to put on our sales hats and become persuaders in addition to programmers; experience has taught me that nobody else is going to do it for us (just typing that sentence makes me realise how silly it was to expect someone who doesn’t have a software background to make that leap).

All of that waffle boils down to the first step:

Look in the mirror and truly understand that in order to widen your circle of influence, you are the one who is going to have to change first.

The “you” in the above statement can be an individual, but it can also be a team.

Then, put on your sales hat and engage the guy or gal who can make these changes. Fully comprehend that the person who you’re trying to persuade is approaching things using her own paradigm, and adjust your thinking appropriately.  As passionate as your pitch may be, trying to persuade a chartered accountant of the benefits of agile development by carrying on about TDD, pair programming and continuous integration won’t get the traction that you were hoping for.  Before others will change, you need to change: become that accountant, feel the sorts of things that are important to her; pitch it on her terms.

Simple as the solution may be, carrying it out is hard.  I believe the solutions to the biggest problems in software development are simple to describe, but hard to implement.  Case in point: the concept of two people working on the same piece of code together is a simple one; the act of convincing the holder of the purse strings that putting this behaviour into practice will yield better results over the long term is hard.

Agility as Religion, Part 1

Any significant change in an organisation only has a chance of becoming permanent with the committed backing of senior management. Of course I’m no sociologist, but I’m fairly certain that this pattern repeats in any type of organisation, commercial or otherwise — in short, real change (for better or worse) happens top-down (revolution comes to mind as an exception, but for the purposes of this discussion, I’m going to keep to evolutionary change).

So, ask Joe Bloggs what agile is all about, and he’ll probably struggle to nail it down; after all, the word has become corrupted and is used in ways that don’t even make grammatical sense.  A good start, though, would be:

Frequent iterations of work with uninhibited feedback resulting in regular delivery of value and optimisation of practice.

The key concept here is feedback. The quicker we can know if the course we’re on is the wrong one, the quicker we can make amends. This isn’t new stuff … it’s also intuitive, but history has proven that when it comes to groupthink, common sense does not necessarily prevail.

If we treat it as a given, the top-down nature of organisational behaviour becomes a constraining factor in trying to imbue agility in the psyche of people and teams. This manifests itself as constructs such as the Waterfall Sandwich (pertinent part at around 11:50) — an example of something that is borne out of a well-intentioned effort to marry long-entrenched traditional practices with an attempt to “become agile”. Agility initiatives are more often than not seeded amongst the geekier part of the corporate software development outfit — the agile manifesto, after all, was dreamed up by a bunch of software developers. What follows is software-development-team-bound agility, the boundaries being determined by the team’s circle of influence.

Real agility is characterised by the involvement of all those who ultimately benefit from the value that is delivered through software (reinforced by Kent Beck’s mutual benefit XP principle). The customer, then, should of course be included. When persuading the internal corporate customer to “come over”, we’re forced to face up to the inevitable top-down nature of corporate change. It’s helpful, then, to look for solutions that are in harmony with that reality.

Considering that a) organisations mimic the behaviour of their leaders and b) agility is only achieved when all beneficiaries of potential value participate, the natural conclusion is to target the leaders. That’s quite a pageful of blog to come to something that at face value seems anticlimactic. Obvious as it may seem, though, what is not obvious is that there are no other options.

In the journey to realising real agility, start by leading the right horse to water. In the next post I’ll talk about making it drink.

To Buffer or Not To Buffer

The other day I was reviewing some code, and I came across a change that had been made recently to an existing method that was essentially responsible for copying a template file from one location to another in preparation for it to be “applied”. The source location of the template file was hard-coded to a drive and path, so it became a natural target for improvement. The change involved sourcing it from a resource embedded in the executing assembly rather than the filesystem. It’s not this design decision that is the topic of this post, but rather an interesting bit of minutiae around the chosen implementation, and the clues that something so small offers about the health of a development team.

Effectively the whole affair boiled down to copying the contents of one stream to another, and the nub of this operation was implemented using the following code:

for (int i = 0; i < stream.Length; i++)
    destStream.WriteByte((byte)stream.ReadByte());

A seasoned developer would, of course, have introduced a buffer, something along the lines of:

var buffer = new byte[4096];
int bytesRead;
// Read returns 0 once the source stream is exhausted.
while ((bytesRead = sourceStream.Read(buffer, 0, buffer.Length)) > 0)
    destStream.Write(buffer, 0, bytesRead);

Note the tell-tale choice of power-of-two buffer size (this is even true of Stack Overflow articles talking about the same).

The former seems to be a naive implementation; well, that was my initial reaction, but why would I look at it that way?

Buffered I/O

There’s a great answer to a general question around buffered/unbuffered I/O on Stack Overflow, which includes a nice ASCII art representation of what’s happening under the hood from a buffering point of view. In short, there’s buffering happening up and down the stack. At each layer, the imperatives may be slightly different, but at the end of the day all of this buffering is there for one reason: speed.

So if the runtime, OS and disk hardware all have built-in buffering, then why bother with a buffer at application level? Why indeed. The more I think about it, the more this seems to be an archaic holdover from a time when we as developers were required to think about things that, today, are just taken care of for us.
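
To illustrate (a sketch only, not the code from the review, and assuming byte-at-a-time access is what the surrounding code wants): the framework’s own System.IO.BufferedStream can supply the batching, leaving no hand-rolled buffer to maintain:

using System.IO;

static class StreamUtil
{
    // BufferedStream batches the underlying reads, so each ReadByte call
    // is usually served from memory rather than hitting the source directly.
    // The caller retains ownership of both streams.
    public static void CopyByteAtATime(Stream sourceStream, Stream destStream)
    {
        var buffered = new BufferedStream(sourceStream, 4096);
        int b;
        while ((b = buffered.ReadByte()) != -1)  // -1 signals end of stream
            destStream.WriteByte((byte)b);
    }
}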

Clues and Lessons

I’m not going to go so far as to say that the developer who made this change did it the way he did because he knew he didn’t need to worry about buffering (after all, it’s “done” for us anyway); he may have, I just don’t know. That said, apart from being aesthetically distasteful to me, for the purposes of performing some local disk I/O it’s good enough; that is, it works and it’s simple. No one can dispute the value of being able to see things differently (the classic duck-or-rabbit picture comes to mind).

There is, however, something of a red flag here. If a junior developer had written this piece of code, I would not have been surprised. That’s not the case, though; we’re talking about someone who is ostensibly the most senior person on the team. To be completely honest, I was being a touch facetious about the possibility of the author choosing that design route on the back of knowledge that buffering was handled in lower layers; a more likely explanation is that laziness had a part to play in that choice. If nothing else, this means that further investigation is warranted.

Speaking of having stuff just taken care of for us, the natural route, of course, is to stand on the shoulders of giants and just use something like Stream.CopyTo (available from .NET 4.0 onwards only, mind you).
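
With that, the whole template-copying affair could collapse to something like the sketch below; the resource name and destination path are placeholders of my own, not the real ones from the code under review:

using System.IO;
using System.Reflection;

static class TemplateWriter
{
    // Copy an embedded resource template to disk, letting CopyTo
    // (.NET 4.0+) take care of the buffering for us.
    public static void WriteTemplate(string destinationPath)
    {
        var assembly = Assembly.GetExecutingAssembly();
        using (var source = assembly.GetManifestResourceStream("MyApp.Templates.Report.tmpl"))
        using (var dest = File.Create(destinationPath))
        {
            source.CopyTo(dest);
        }
    }
}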

Cheating at Word Games, Part 2

In a previous post, I detailed how to cheat at a popular Zynga game, a play on the age-old hangman, but with cute balloons (the style of which you can upgrade with purchasable coins).  I covered the part of the game where you need to choose a word for your opponent to guess.  I mentioned that, of course, it’s also possible to cheat on the flip-side, that is, in having to guess the word that your opponent has set as a challenge for you.  At that stage, I was resorting to grepping through all the words with a regular expression, using the “any” regex token (“.”) for letters that hadn’t yet been guessed.  For example, if the word so far ended in at and consisted of four letters, finding possibilities was a case of:

grep -i "..at" /tmp/scowl/final/*

Of course, this isn’t particularly smart, because it will return words that aren’t candidates by virtue of the fact that they contain letters that I may have already incorrectly guessed.  Assuming that I have already guessed e and r, neither of which were correct, then a better grep would be:

grep -iE "[^er]{2}at" /tmp/scowl/final/*

Enter Ruby

I’m a great fan of knocking Unixy bits together in impressive ways, but the poor man’s word-guesser above needs an upgrade, and since I’m learning Ruby at the moment, I figured it was time to put in the effort to Ruby-fy what is essentially a simple need. In Part 1 of the cheating story I defined a class called EnglishWords, which serves to encapsulate all possible words and the operations on those words that I may be interested in.  I need to re-use this class for my word-guesser, so it’s time to move it out into its own source file. Using EnglishWords from a script then becomes:

require File.join(File.dirname($0), 'english_words.rb')

$0 is the path to the script being executed and File.dirname gives us the directory part of the path.  This require assumes that english_words.rb resides in the same folder as the top-level script.  Fortunately, I have a bit of background in Perl, from which Ruby borrows many things, so the idioms are familiar.  Ruby also borrows a few things from Unix shell script, e.g., $0, $1 etc., and dirname/basename.  How do I get the directory of the currently executing script in Bash?

echo $( dirname $0 )

This is one of the reasons why this language is growing on me so quickly—an example of the principle of least surprise in action (provided, of course, you’re a Unix-head and not from a purely Windows background). I took the grep above and Rubyfied it without much translation at all:

words = EnglishWords.new ARGV.shift
word_so_far = ARGV.shift
wrong_letters = ARGV.shift

regexp_so_far =
   wrong_letters.empty? ?
   Regexp.new("^#{word_so_far}$") :
   Regexp.new("^" +
      word_so_far.gsub('.', "[^#{wrong_letters}]") + "$")

The gsub line is the important part; just replace all instances of “.” with a regular expression character class excluding the letters already guessed incorrectly, and we’re ready to apply the regular expression.  The first three lines are another example of a Unix idiom, namely shift, which knocks the first item off the array being acted on and returns it.  Interestingly, shift is an example I’ve come across of Ruby deviating from another of its conventions: that of naming “dangerous” methods (that is, methods that alter the state of what’s being acted on) with a bang at the end, e.g., gsub!(..).  I guess in this case, assuming you’re familiar with the Unix shift, the bang is redundant, since shift by definition changes state. Searching for (and outputting) candidate words becomes:

words.each do |w|
    puts w if regexp_so_far.match(w)
end

But wait … EnglishWords has no definition for each yet.  Here it is:

def each()
    @words.keys.each { |w| yield w }
end

Notice how each above takes no parameters, but I’m passing it a block (blocks are roughly what lambdas are in C#-speak).  Another feature of Ruby: the implicit block parameter, which can be passed to a method and invoked with yield.  Without too much effort, and a little bit of Ruby, we’re up and running in the word-guessing department.