In the second of my posts tackling this subject I address platform ports. These are another type of requirement I have encountered that can be difficult to break down into the accepted format of a user story.
Essentially a platform port can be viewed in two ways. Either:-
Re-applying all of the previous stories to a new operating system
+ Each story relates to valuable user functionality
+ Allows clear picture of how the port is progressing based on what stories have been delivered
- Porting rarely delivers functions one at a time; more often it is a case of addressing compilation issues and then delivering large areas of functionality (i.e. tens or hundreds of previously implemented stories) at once, leading to a logistical nightmare in managing which ones have been delivered.
- Failures are often technology specific rather than function specific, so it can be hard to marry the bugs/blocks to the stories
Having one user story which equates to the user wanting to run the entire system on a new OS.
+ Provides one clear story which delivers value to the customer (as a X I want the system to work on platform Y so that I can Z)
- Ports are usually too large to fit into an iteration
- Little breakdown of the work involved which affords less opportunity for tracking of progress based on completed stories
Neither of these approaches worked for us in practice. Over time we have evolved a third way of tackling such requirements.
Defining stages which deliver value to stakeholders
The approach that we have settled on is a hybrid: we break the work down into stages, and group each stage with the value it delivers to stakeholders and with the associated tests. e.g.
- The software will compile and build, running the unit tests, on OS XX
The value here being making the build available to the testers and automated test harnesses for further testing
- Functional and SOAK testing will be executed on OS XX
The value here being the confidence and product knowledge that the tester/team reports back to the product management team. Not all tests need to pass first time, but the software needs to be in a state sufficient to allow the full suite of tests to be executed
- The software will be packaged for release and will pass clean machine installation and configuration tests on OS XX
The value here being the deliverable software to the customer, or the 'brochure summary' requirement
+ Allows for stories which are testable and deliver value to a stakeholder, albeit not always the customer
+ Blocks and issues are relevant to the story in question
+ The stories are manageable in both size and quantity and the appropriate testing clearly definable.
+ Performance stories can be defined separately, depending on whether there are any set performance criteria for the OS (see previous post)
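To make the staged breakdown concrete, here is a minimal sketch (in Python, with entirely hypothetical stage names) of how progress on a port might be tracked against these ordered stages, rather than against hundreds of previously implemented per-feature stories:

```python
# Hypothetical example: tracking a platform port as three ordered stage
# stories rather than per-feature stories. Stage names are illustrative.
STAGES = [
    "builds and passes unit tests on OS XX",
    "functional and SOAK suites executable on OS XX",
    "packaged release passes clean-machine install tests on OS XX",
]

def port_progress(completed):
    """Report progress on the port. The stages are strictly ordered
    (you cannot soak test what does not build), so progress is the
    number of consecutive stages delivered from the start."""
    done = 0
    for stage in STAGES:
        if stage in completed:
            done += 1
        else:
            break
    return f"{done}/{len(STAGES)} stages delivered"

print(port_progress({STAGES[0]}))  # → "1/3 stages delivered"
```

Because of the strong dependency between the stories, completing a later stage while an earlier one is blocked does not count as progress here; this mirrors the staged, gated nature of the approach.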
I still have some problems with this approach. It does feel a little more task-based than value-based, with a very strong dependency between the stories. It does, however, allow a lengthy requirement to be broken down over a series of iterations, with valuable and testable deliverables from each story and a sense of progress across iterations. In the absence of a better solution, this approach is "good enough" for me.
Copyright(c) Adam Knight 2010