Friday 15 January 2010

Dealing with Difficult-to-Fit Stories - Part 1: Performance

Having read a number of books and articles on Agile software development, I find that most of the documented examples of user stories deal with the implementation of a new piece of functionality, often a pretty well-encapsulated one at that. At RainStor I am working with an emerging product. Often our requirements do fit this form pretty well; however, we also encounter many occasions when the work to be undertaken does not marry easily with this model of a user story.

In my next few blog posts I will outline some examples of these requirements and how we have attempted to deal with them. In this post I discuss performance:-

1. Performance


As RainStor provides a data storage and analysis engine, our requirements regarding the performance of, for example, data loading and querying are well defined. For new administration functions, however, the customer rarely has specific performance requirements, but we know from experience that they will not accept the functionality if they deem its performance to be unsatisfactory.

Some approaches that can be adopted to address this:-

  1. Define acceptable performance as part of each story

    Pros

    + Focuses attention on optimising performance during the initial design
    + Improves delivery speed if acceptable performance can be achieved in the iteration in which the functionality is implemented

    Cons

    - Measuring performance often requires lengthy setup of tests and data, which can take focus away from functional exploration and result in lower functional quality


  2. Have separate performance stories

    Pros

    + Allows you to deliver valuable functionality quickly and then focus on performance: "get it in, then get it fast"
    + Having a specific story focuses time and attention on performance and helps to optimise it
    + Separating performance out helps to focus on designing and executing the right tests at the right time

    Cons

    - Separates performance from the functional implementation and can 'stretch' the delivery of functionality over multiple iterations
    - Performance stories are likely to be prioritised lower than other new functionality, so we could end up never optimising and having a system that grinds to a halt
    - Assessment of whether performance is very poor, or will scale badly on larger installations, is delayed


  3. Have a set of implicit stories, or 'constraints' as Mike Cohn calls them, that apply system wide and are tested against every new development (a rough sketch of one such automated check appears after this list)

    Pros

    + Having these documented provides the tester with a measurable benchmark against which they can specify acceptance criteria

    Cons

    - Constraints may be too generic and hard to apply to new functionality, and can encourage a tendency to only ever perform at the worst end of the constraint limits
    - Alternatively, we may end up specifying constraints for every requirement and find ourselves in the same situation as option 1
    - Delivering a new piece of functionality will usually take precedence over a broken constraint, so the work to bring performance back within the constraint limit then has to be prioritised separately

  4. Define acceptance criteria for each story based on previously delivered functionality

    Pros

    + Uses our own expertise and experience of what is achievable
    + Also taps into our knowledge of what the customer will find acceptable, based on previous experience

    Cons

    - It may be difficult to find suitable functionality to compare against
    - We may accept poor performance if the function used for comparison is more resource intensive, or has itself not been optimised

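If constraints are adopted (option 3), one way to keep them honest is to automate a simple check that runs against every new build. The sketch below is purely illustrative: it assumes a pytest-style test suite, a made-up administration call and an invented five-second limit, and stands in for whatever harness and thresholds fit your own product.

```python
import time

# System-wide constraint (an illustrative figure, not a real RainStor number):
# no administration operation should take longer than this on the reference
# test dataset.
ADMIN_OPERATION_LIMIT_SECONDS = 5.0


def create_archive():
    """Placeholder for the real administration call under test."""
    time.sleep(0.1)  # stand-in for the real work


def test_create_archive_meets_constraint():
    """Fails the build if the operation drifts past the agreed constraint."""
    start = time.perf_counter()
    create_archive()
    elapsed = time.perf_counter() - start
    assert elapsed <= ADMIN_OPERATION_LIMIT_SECONDS, (
        f"create_archive took {elapsed:.2f}s, over the "
        f"{ADMIN_OPERATION_LIMIT_SECONDS}s constraint"
    )
```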

In practice the option chosen in my organisation varies depending on the situation. If the customer has no performance requirement, or lacks the in-depth knowledge to assess performance in advance, then we tend to use our own knowledge of customer implementations to decide on performance criteria. This requires an excellent understanding of customers' implementations and expectations in order to decide what is acceptable on their behalf.
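Where we do compare a new function against previously delivered functionality (option 4), the comparison itself need not be elaborate. The following is a rough, hypothetical sketch; the function names, timings and the 50% tolerance are all invented for illustration.

```python
import time


def time_call(fn, repeats=5):
    """Best-of-N wall-clock timing for a zero-argument callable."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


def existing_export():
    """Previously delivered function whose performance customers already accept."""
    time.sleep(0.2)


def new_import():
    """New functionality whose performance we are assessing."""
    time.sleep(0.25)


if __name__ == "__main__":
    baseline = time_call(existing_export)
    candidate = time_call(new_import)
    ratio = candidate / baseline
    print(f"new/existing ratio: {ratio:.2f}")
    # Flag anything more than 50% slower than its baseline for a conversation
    # with Product Management rather than treating it as an automatic failure.
    if ratio > 1.5:
        print("WARNING: new functionality is noticeably slower than its baseline")
```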

In terms of when to test performance, where there is no explicit performance requirement we tend to gather performance information during the initial functional implementation. We then discuss with Product Managers, developers and possibly customers whether this is acceptable, whether further improvements could be made, or whether further assessment is required, and prioritise targeted performance work at that stage. This works well for us: it maintains a focus on performance without hindering the development effort, and allows that focus to be escalated in later iterations if we deem it a priority.

