Wednesday, 20 January 2010

Difficult to fit stories - Part 2: Platform Ports

Having read a number of books and articles on Agile software development, I find that most of the documented examples of user stories deal with the implementation of a new piece of functionality, often a pretty well encapsulated one at that. At RainStor I am working with an emerging product. Often our requirements do fit this form pretty well; however, we also encounter many occasions when the work to be undertaken does not marry easily with this model of a user story.

In the second of my posts tackling this subject I address platform ports: another type of requirement I have encountered that can be difficult to break down into the accepted format of a user story.

Essentially a platform port can be viewed in one of two ways:-

Re-applying all of the previous stories to a new operating system


Pros

+ Each story relates to valuable user functionality
+ Allows a clear picture of how the port is progressing, based on which stories have been delivered

Cons

- Porting rarely delivers functions one at a time; it is more often a case of addressing compilation issues and then delivering large areas of functionality (i.e. tens or hundreds of previously implemented stories) at once, leading to a logistical nightmare in managing which ones have been delivered
- Failures can often be technology specific rather than function specific, so it can be hard to marry the bugs/blocks to the stories

Having one user story which equates to the user wanting to run the entire system on a new OS.


Pros

+ Provides one clear story which delivers value to the customer (as an X I want the system to work on platform Y so that I can Z)

Cons

- Ports are usually too large to fit into an iteration
- Provides little breakdown of the work involved, which affords less opportunity to track progress based on completed stories

Neither of these approaches worked for us in practice. Over time we have evolved a third way of addressing this type of requirement.

Defining stages which deliver value to stakeholders



The approach that we have settled on is a hybrid: we break the work down into stages and group together, for each stage, the value delivered to stakeholders and the associated tests (a rough sketch of how such stages might be tracked follows the list below). For example:


  • The software will compile, build and run unit tests on OS XX
    The value here being that the build is made available to the testers and automated test harnesses for further testing


  • Functional and SOAK testing will be executed on OS XX
    The value here being the confidence and product knowledge that the tester/team can report back to the product management team. Not all tests need to pass first time, but the software needs to be in a state sufficient to allow the full suite of tests to be executed


  • The software will be packaged for release and will pass clean machine installation and configuration tests on OS XX
    The value here being the deliverable software to the customer, or the 'brochure summary' requirement
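
To make the staged breakdown a little more concrete, here is a minimal sketch of how such stages might be captured and tracked per target operating system. The class, function names and stage wording are my own illustration rather than any tooling we actually use.

```python
# A minimal sketch (not our actual tooling) of the staged port stories,
# tracked per target operating system. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class PortStage:
    story: str          # the stage expressed as a deliverable
    stakeholder: str    # who receives value when the stage is delivered
    done: bool = False  # has the stage been delivered in an iteration yet?


def port_backlog(target_os):
    """Return the three staged stories for porting to a given OS."""
    return [
        PortStage(f"Software compiles, builds and passes unit tests on {target_os}",
                  stakeholder="testers and automated test harnesses"),
        PortStage(f"Functional and SOAK test suites can be executed on {target_os}",
                  stakeholder="product management (confidence and product knowledge)"),
        PortStage(f"Packaged release passes clean-machine install/configuration tests on {target_os}",
                  stakeholder="the customer (the 'brochure summary' requirement)"),
    ]


def progress(stages):
    delivered = sum(1 for stage in stages if stage.done)
    return f"{delivered}/{len(stages)} stages delivered"


if __name__ == "__main__":
    backlog = port_backlog("OS XX")
    backlog[0].done = True    # e.g. the first stage was completed this iteration
    print(progress(backlog))  # -> "1/3 stages delivered"
```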


Pros

+ Allows for stories which are testable and deliver value to a stakeholder, albeit not always the customer
+ Blocks and issues are relevant to the story in question
+ The stories are manageable in both size and quantity, and the appropriate testing is clearly definable
+ Performance stories can be defined separately, depending on whether there are any set performance criteria for the OS (see previous post)

I still have some problems with this approach. It does feel a little more task based than value based, with a very strong dependency between the stories. It does, however, allow a lengthy requirement to be broken down over a series of iterations, with valuable and testable deliverables from each story and a sense of progress from one iteration to the next. In the absence of a better solution, this approach is "good enough" for me.

Copyright(c) Adam Knight 2010

Friday, 15 January 2010

Dealing with Difficult to fit stories - Part 1: Performance

Having read a number of books and articles on Agile software development, I find that most of the documented examples of user stories deal with the implementation of a new piece of functionality, often a pretty well encapsulated one at that. At RainStor I am working with an emerging product. Often our requirements do fit this form pretty well; however, we also encounter many occasions when the work to be undertaken does not marry easily with this model of a user story.

In my next few blog posts I will outline some examples of these requirements and how we have attempted to deal with them. In this post I discuss performance:-

1. Performance


As RainStor provides a data storage and analysis engine, our requirements regarding the performance of, for example, data load and querying are well defined. For new administration functions, however, the customer rarely has specific performance requirements, but we know from experience that they will not accept the product if they deem its performance to be unsatisfactory.

Some approaches that can be adopted to address this:-

  1. Define acceptable performance as part of each story


    Pros

    + Focuses attention on optimising performance during the initial design
    + Improves delivery speed if acceptable performance can be achieved in the iteration where functionality is implemented

    Cons

    - Measuring performance often requires lengthy setup of tests/data which can take focus away from functional exploration and result in lower functional quality


  2. Have separate performance stories


    Pros

    + Allows you to deliver valuable functionality quickly and then focus on performance, "get it in then get it fast"
    + Having a specific story will focus time and attention on performance and help to optimise performance
    + Separating performance out helps provide focus on designing and executing the right tests at the right time.

    Cons

    - Separates performance from functional implementation and can 'stretch' the delivery of functionality over multiple iterations.
    - Performance stories are likely to be prioritised lower than other new functionality, so we could end up never optimising and having a system that grinds to a halt
    - We have a delayed assessment of whether performance is very poor or will scale badly on larger installations


  3. Have a set of implicit stories or, as Mike Cohn calls them, 'constraints', that apply system-wide and are tested against every new development (a rough sketch of such a check follows this list)


    Pros

    + Having these documented provides the tester with a measurable benchmark against which they can specify acceptance criteria

    Cons

    - Constraints may be too generic and hard to apply to new functionality, and may lead to a tendency to always perform at the worst of our constraint limits.
    - Alternatively, we may end up specifying constraints for every requirement, putting us in the same situation as option 1.
    - Delivering a new piece of functionality will usually take precedence over fixing a broken constraint, so we then need to prioritise the work to bring performance back within the constraint limit

  4. Define acceptable criteria for each story based on previously delivered functionality


    Pros

    + Uses our own expertise and experience of what is achievable
    + Also taps into our knowledge of what the customer will find acceptable, based on previous experience

    Cons

    - May be difficult to find suitable functionality to compare against
    - We may accept poor performance if the function used to compare against is more resource intensive, or itself has not been optimised
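
To make options 3 and 4 a little more concrete, below is a minimal sketch of how a system-wide constraint, or a criterion derived from a previously delivered function, might be checked automatically. The constraint value, tolerance and function names are invented purely for illustration; they are not our real figures or tests.

```python
# Hypothetical sketch of options 3 and 4 as automated checks. The constraint
# value, tolerance and function names are invented for illustration.
import time


# Option 3: a documented, system-wide constraint tested against every new
# administration function, e.g. "no admin call may exceed 5 seconds on the
# reference data set".
ADMIN_CALL_CONSTRAINT_SECONDS = 5.0


def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start


def check_constraint(new_admin_function):
    """Fail if the new function breaks the system-wide constraint."""
    _, elapsed = timed(new_admin_function)
    assert elapsed <= ADMIN_CALL_CONSTRAINT_SECONDS, (
        f"constraint broken: {elapsed:.2f}s > {ADMIN_CALL_CONSTRAINT_SECONDS}s")


def check_against_baseline(new_admin_function, comparable_function, tolerance=1.5):
    """Option 4: accept the new function if it is no more than `tolerance`
    times slower than a previously delivered, comparable function."""
    _, new_elapsed = timed(new_admin_function)
    _, baseline_elapsed = timed(comparable_function)
    assert new_elapsed <= baseline_elapsed * tolerance, (
        f"new function took {new_elapsed:.2f}s against a baseline of "
        f"{baseline_elapsed:.2f}s")
```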


In practice the option chosen in my organisation varies depending on the situation. If the customer has no performance requirement, or lacks the in-depth knowledge to assess performance in advance, then we tend to use our own knowledge of customer implementations to decide on performance criteria. This requires an excellent understanding of the customers' implementations and expectations in order to make decisions on what is acceptable on their behalf.

In terms of when to test performance, where there is no explicit performance requirement involved, we tend to obtain information on performance during the initial functional implementation. We then discuss with Product Managers/developers, and possibly customers, whether this is acceptable, whether further improvements could be made, or whether further assessment is required, and we prioritise further targeted performance work at that stage. This works well for us as it maintains a focus on performance without hindering the development effort, and allows that focus to be escalated in later iterations if we deem it a priority.
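
As an illustration of the kind of information we gather at this stage, something as simple as the following sketch is enough: it records observed timings for later discussion rather than enforcing a pass/fail threshold. The helper, report file and operation names are hypothetical.

```python
# Hypothetical sketch: record timings observed during functional testing so
# they can be reviewed with product managers later, rather than asserting a
# pass/fail threshold. The report file and operation names are invented.
import csv
import time
from contextlib import contextmanager


@contextmanager
def record_timing(operation_name, report_path="performance_observations.csv"):
    """Time a block of test code and append the observation to a CSV report."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        with open(report_path, "a", newline="") as report:
            csv.writer(report).writerow([operation_name, f"{elapsed:.3f}"])


# Example use inside a functional test (the operation is illustrative):
# with record_timing("archive 1 million rows"):
#     run_archive_test()
```

Any figures captured this way simply feed the discussion with Product Managers and developers described above; they do not gate the build.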
