Sunday, 20 November 2016

Rewrites and Underestimation

In my opinion, the word 'rewrite' is one that should not exist in software. The word is most commonly used to describe recreating the functionality or behaviour of an existing piece of software by writing a new one. On the face of it this presents a straightforward task - doesn't the previous incarnation of the software provide us with the perfect specification for the development and testing of the new capability? Unfortunately things are rarely that simple. The term rewrite implies a level of simplicity that belies the difficulty of such an undertaking. In my experience rewrites are some of the most underestimated of all development projects.

Why rewrite?

So what would possess us to attempt to recreate a piece of software? Surely if we have something that does the job then we don't need to rebuild it? I've encountered two main reasons for performing rewrites - either to support the same use with improved technology/architecture, or to allow the same feature set to be delivered into a new context. In both of these cases there is an inherent assumption that recreating the same feature set will yield the value that you are looking for, which is by no means guaranteed.

Another false assumption in these situations is that development will inevitably be quicker because we've 'done it before'. The fact that we want to rewrite implies a certain level of success in the existing capability in terms of adoption and use. Yet despite the effort that went into the initial build, and into the ensuing months and years of maintenance and improvement that delivered that success, we seem to have an incredible knack for forgetting what it took to get to where we are, and we consistently underestimate rewrite projects.

How do I underestimate thee? Let me count the ways

Let's look at some of the many ways that these underestimations come about:

  • Underestimating how smart you were before
  • The attitude of those approaching or proposing a rewrite is usually along the lines of insisting that we won't make the same mistakes that were made in the original development. This can be particularly amusing when it is the same individuals who created the original approaching the rewrite. How easy it is to forget that we were actually pretty smart when we created the original software, and that the decisions we made were made for genuine and valid reasons. I've lost count of the number of times I've looked back somewhat sceptically at work I created in the past and ended up quietly impressed with the quality of it under scrutiny. We shouldn't assume that anything we create now will inevitably be better than what went before. If we don't give ourselves enough credit for the work done first time around then we risk simply repeating the same mistakes, only worse, because we've given ourselves even less time to develop this time around.

  • Underestimating the power of incremental improvements
  • It's amazing how prone we are to forgetting how much a piece of software has improved over time relative to the initial release. One of my earliest tasks at RainStor was to test a new version of our data import tool. We already had a data importer that was suitable for our range of existing supported types; the reasoning behind the rewrite was to allow us to extend that range more easily. The work had been performed by the chief scientist in the company, a man without whom "there would be no company" according to my introduction on my first day. As a new recruit to the organisation, testing this work was understandably a daunting prospect. What I discovered repeatedly through testing was that the performance of the new importer was nowhere near what we were seeing from the previous one. In attempting the rewrite, the focus had been on improvements relating to architecture and maintenance, yet an important customer characteristic, performance, had been sacrificed. It had been assumed that a better architecture would be more performant; however, a huge number of iterative improvements had gone into the performance of the previous version over time, to the extent that the newly written version struggled to deliver equivalent results.

  • Underestimating the growth in complexity
  • Just as quality and value grow incrementally in a product over time, so does complexity. The simple logic structures that we started with (we were, after all, pretty smart) may have become significantly more complicated as new edge cases and scenarios arose that needed to be supported. On a recent rewrite I worked on, I thought I had a good understanding of the user/roles structure after a spike task to research that exact area of the old system had been completed. As the development progressed it became apparent that my understanding was limited to the simple original roles structure first created in the product. Subsequent to that, a number of much more nuanced rules had been added to the system which made the roles structure far richer and more complex than I understood it to be.

  • Underestimating how bespoke it is
  • It is tempting to look at an existing system and think that the logic can be 're-used' in a new application. This can be dangerous if the differences in application context aren't considered. Through maintenance, support and feature development requested by the users, even very generically designed systems can grow and mould over time to the shape of the business process in which they are applied.

  • Underestimating how much people are working around existing behaviour
  • As I wrote in my piece "The workaround denier", the ways that people use a system are often very different to how we want or expect them to be. If we don't understand the workarounds that folks are using to achieve their ends with our products then we risk removing important capabilities. On one system I worked on, some users were working around restrictions on the data available under an individual account by having multiple accounts and logging in and out of the system to view the different data linked to each one. When, as part of a rewrite, we provided them with a single sign-on capability, this actually hindered them as they no longer had control over which account to log in with.

  • Underestimating external changes
  • Even if the users of a product haven't changed, the environment they are working in and the constraints of it will have. I was caught out not long ago on a project working with an external standards body. The previous software had achieved an accreditation with a standards organisation. As we'd essentially copied the logic of the original we mistakenly assumed that we would not encounter any issues in gaining the same accreditation. Unfortunately the body were somewhat stricter in their application of the accreditation rules and we were required to perform significant extra development in order to fulfil their new exacting criteria. Our old software may not have moved on, but the accreditation process had for new software.

  • Underestimating how many bugs you've fixed
  • When creating a new piece of software we accept that there will be issues, because we are working on a new code base, and that we will improve these over time. It's strange how, when writing new code to replace existing software, we somehow expect those issues to be absent. There's an unwritten assumption that if you've created a product and fixed the bugs in it, you'll be able to recreate the same capability with new code without introducing any new ones.

A fresh start

The chance to re-develop a product is a golden opportunity to take a fresh start, yet often this opportunity is missed because we've left ourselves far too little time. What can we do to avoid the pitfalls of dramatic underestimation in rewrites? It is tempting here to produce a long list of things that I think would help in these situations, but ultimately I think it boils down to just one:-

Don't call it a rewrite

It's that simple. At no point should we be 'rewriting' software. By all means refresh, iterate, replace, or use any other term that implies we're creating a new solution, rather than simply trying to bypass the need to think about the new software we are creating. As soon as we use the term rewrite we imply that we can switch off the need to understand and address the problems at hand, and just roll out the same features parrot-fashion. We assume that because we've had some success in that domain we don't need to revisit the problems, only the solutions we've already come up with. This is dangerous thinking, and why I'd like to wipe the term 'rewrite' from the software development dictionary.

Yes, a pre-existing system may provide all of the following:-

  • A window to learning
  • A valuable testing oracle
  • A source of useful performance metrics
  • A communication tool to understand a user community
  • A basis for discussion

What it is not, is a specification. Once we accept that, then our projects to create new iterations of our software stand a much better chance of success.

Image: Chris Blakeley https://www.flickr.com/photos/csb13/78342111

Tuesday, 27 September 2016

The Risk Questionnaire

What does it mean to say that something is "tested"? The term "tested" troubles me in a similar way to "done", in the sense that it implies that some status has been achieved, some binary switch has been flicked, such that a piece of work has crossed a threshold that up until that point it had not.

Of course, there is no magic switch that renders the feature "tested" or the user story "done"; we simply reach a point where the activity performed against that item gives us sufficient confidence to move on to other things. My biggest problem with "tested" is that it can mean very different things to different people, often within the same organisation. When two people in the same organisation refer to something as tested, each can have a very different understanding of what is implied, particularly regarding the exposure to risk involved.

A common understanding

Working for a long time in the same organisation affords one the luxury of time to reach a common understanding, across the various business and technical roles, of what "tested" means. To the mature team, tested hopefully means that testing has been performed to a level the development team has established the business is happy with, in terms of the time and cost of carrying out the testing and the level of bug fixing that is likely to follow. This level will differ from company to company, with a critical driver being the level of acceptable business risk. I suggested in my post "The Tester, The Business and Risk Perception" that businesses, like individuals, will likely operate at a determined level of risk, and the overall activities within that business will gravitate towards that level irrespective of the actions of individual testers.

In my experience an understanding evolves as a team or department matures around what the expectation is on testing and how much time is to be spent on it. If a team spends too much time on testing then it is questioned by management; if the team are pushed to do too little testing then they push back, or quality suffers and the customer feedback prompts review. Eventually, in response to these inputs, an equilibrium is reached. But what of new companies or teams? How do we identify an appropriate testing balance between rigour and pace when a team is either newly created or adopting a new approach such as agile? Is it possible to fast-track this learning process and calibrate ourselves and our activities more quickly around the risk profile of the business?

Finding an equilibrium

I was recently asking myself just such a question when facing the challenge of establishing a test strategy for my new company. The company had grown quickly and had also recently adopted agile methods, and so inevitably had not yet found its equilibrium in terms of an acceptable level of risk and testing. This caused some confusion across the development teams as there wasn't a consistent understanding of the level that was expected of them in their development and testing activities.

It would have been easy to come in and immediately push to increase the level of testing, however I felt it was important first to establish, amongst the business leaders, an understanding of the general attitude to acceptable risk. The approach that I took yielded some very interesting results:-

Learning from IFAs

Independent financial advisers (IFAs) need to understand risk. When an individual in the UK visits a financial adviser, they may well be asked to complete something like this questionnaire:- https://www.dynamicplanner.com/home/investor/what-is-risk-profiling/assess-attitude-to-risk/

The use of these questionnaires is common practice amongst financial advisers and institutions to assess the risk appetite of their clients and recommend a pension/investment strategy accordingly. The principle is simple: the customer is presented with a series of statements associated with their attitude to finances, and answers each through a set of standard options reflecting different risk levels.

Example questions might look like this:

Assume you had an initial investment portfolio worth £100,000. If, due to market conditions, your portfolio fell would you:

  • a) Sell all of the investments. You do not intend to take risks.
  • b) Sell to cut your losses and reinvest into more secure investment sectors.
  • c) Hold the investment and sell nothing, expecting performance to improve.
  • d) Invest more funds to lower your average investment price.

If you could increase your chances of a return by taking a higher risk, would you:

  • a) Take more risk with all of my money
  • b) Take more risk with half of my money
  • c) Take more risk with a quarter of my money
  • d) Not take the risk

The resulting range of answers provides the adviser with an overall impression of the individual's risk level, and they can build an investment plan accordingly. I recently went to an IFA to consolidate my pensions and took just such a questionnaire, coming out at a rather dull 6/10 (apparently the vast majority of people sit between 4 and 7).

I felt that this kind of questionnaire was a simple technique that could usefully be applied to assessing the risk appetite of individuals across a development company. I wanted to establish a starting position that would allow me to progress the conversation around an acceptable level of business risk, and to establish whether or not there was consistency of opinion across the company on our approach to risk, and this looked like a good option.

The questionnaire

I created a risk questionnaire that we could ask members of the leadership team to complete. The audience was selected to be a range across development and client services, plus the CEO. For the questionnaire I focussed on 4 primary areas:

  1. Business Risk
  2. Development Risk
  3. Perceived Status
  4. Time Spent on Testing Activities

1. Business Risk

The business risk statements were focussed on the interaction between the software process and the customers. These were open to all to answer, but primarily targeted at the non-development leadership roles. The aim here was to establish an understanding of the priorities in terms of customer deliverables and the conflicting priorities of time, cost and rigour.

Rate the following statements from 1 to 5, where 1 = Strongly agree and 5 = Strongly disagree

  1. On time delivery is more important than taking longer to deliver a higher quality product

  2. I am happy to accept the need for later effort in maintaining a product if we can deliver that product at a lower up-front cost

  3. Our customers would knowingly accept a reduced level of rigour in development compared to other products in order to keep the cost of the software down

  4. Putting the software in front of customers and responding to the issues they encounter is a cost effective way to prioritise fixing software problems

  5. The cost of fixing issues in production software is now reduced to the point that this is an economically viable approach

  6. Our product context is one in which we can adopt a relatively low level of rigour compared to other business facing software development organisations

  7. I would be reluctant to see an entire sprint given over entirely to testing and bug fixing unless this was driven by issues encountered by the customer

In reviewing the results we identified that some of these questions were open to interpretation. For example, in question 4 it was not clear whether this referred to giving customers early visibility prior to release, through demos or betas, or to pushing early to production and letting them find the issues in live use (I had intended the latter). Having similar questions worded in slightly different ways, such as question 5, helped to identify whether there was any misunderstanding there; however, if doing a similar exercise again I would look more carefully for possible ambiguity.

2. Development Risk

Development risk questions focussed on the risks inherent in our development activity. The idea was to get an understanding of the expectation around the development process and the feeling towards developer-only testing. Again these were open to all to answer, but did not shy away from slightly more involved development terminology and concepts.

Rate the following statements from 1 to 5, where 1 = Strongly agree and 5 = Strongly disagree

  1. The effective application of developer/unit testing can eliminate the need for further devoted testing activity

  2. Appropriate software design can eliminate the need for devoted performance and stability testing

  3. Adding further development skills in our agile teams provides more value in our context than devoted testers

  4. The testing of our products does not require specialist testing knowledge and could be performed by individuals with limited training in software testing

  5. I would be reluctant to schedule specific testing tasks on a team’s backlog without any associated development

In creating the questions I did try to avoid leading questions around any known problems or perceived shortfalls in the current status. At the same time, the questions clearly needed to suit the nature of the business - questions appropriate for a big data database on a long release cycle would not have been appropriate here.

3. Perceived Status

This was a really interesting one. I pitched two very straightforward questions around company status.

  1. With 1 being lowest and 5 highest, rate where you think the company currently stands in its typical level of rigour in software quality and testing.

  2. With 1 being lowest and 5 highest, rate where you think the company should stand in its typical level of rigour in software quality and testing.

The first of these doesn't reveal anything about the level of acceptable risk in itself; what it does do is put the answers that do into context. Knowing how people feel about the current status, and how this compares to where they want to be, gives a strong indication of whether changes to increase development/testing rigour will be accepted and supported.

4. Testing Commitment

This section didn't work quite so well. I was aiming to get an understanding of how much time people felt should be spent on testing activities as part of a typical development.

Rate the following in terms of time: 1 = less than 1 hour; 2 = 2-4 hours; 3 = 0.5-1 days; 4 = 1-2 days; 5 = more than 2 days

  1. How much time out of a typical user story development taking 1 person-week in total should be given over to the creation of unit tests?

  2. How much time out of a typical user story development taking 1 person-week in total should be given over to automated acceptance testing?

  3. How much time out of a typical user story development taking 1 person-week in total should be given over to human exploratory testing?

One respondent didn't answer these as he felt that, even in the same organisation, different work items were too varied to provide a 'typical' answer. My feeling was that there was a fairly typical shape to the developments undertaken that we could use to calibrate teams around how much testing to do; however, I could see his point and understood the reluctance to answer here.

Another issue was that I provided timescales for user stories anchored by my experience of developing complex data systems. In the company's context "typical" user stories would be much shorter than 1 week in duration, so there was a contradiction built into the questions. Nevertheless the answers were informative and provided useful information to help in constructing a strategy.

Presenting the Figures

All testers are wary of metrics. It is a natural suspicion borne of seeing too many occasions of glowing bug statistics masking serious quality problems. Presenting the figures in a visually informative and digestible way was key to getting the benefits of the analysis here. I used Tableau Public to create some useful visualisations, the most effective being a simple min/max/average figure for each question. This format allowed me to highlight not only the average response but also the range of responses on each question.

(I've altered people's names and the scores of the responses here for obvious reasons, however I have tried to keep the outputs representative of the range of results and patterns that I encountered.)
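For anyone curious how such a min/max/average roll-up might be produced without Tableau, here is a minimal sketch in Python. The file name, the respondent index column and the question columns are all hypothetical, assumed purely for illustration; only the 1-5 scale comes from the questionnaire above.

```python
# Minimal sketch (not the Tableau workbook used for the real analysis):
# summarise each question as min/mean/max and plot the range per question.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV: one row per respondent, one column per question, scores 1-5.
responses = pd.read_csv("business_risk_responses.csv", index_col="respondent")

summary = responses.agg(["min", "mean", "max"]).T  # one row per question

fig, ax = plt.subplots()
ax.errorbar(
    x=summary["mean"],
    y=range(len(summary)),
    xerr=[summary["mean"] - summary["min"], summary["max"] - summary["mean"]],
    fmt="o",
    capsize=4,
)
ax.set_yticks(range(len(summary)))
ax.set_yticklabels(summary.index)
ax.set_xlim(1, 5)
ax.set_xlabel("Score (1 = strongly agree, 5 = strongly disagree)")
ax.set_title("Range and average response per question")
plt.tight_layout()
plt.show()
```

The point of the min/max whiskers is exactly the one made below: the average tells you the overall appetite, but the spread is what exposes a lack of consensus.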

Business Risk:

With the business risk it was the ranges that were most interesting. Some questions yielded a wide range of opinion across respondents, whereas others were much more focussed. Clearly in some areas specific individuals were prepared to consider a higher-risk approach than others, something that hadn't necessarily been highlighted previously and was possibly the cause of some uncertainty and pressure within the business. What was apparent in the real results was a general desire to reduce the risk levels in development and an acceptance of the need to increase rigour.

Development Risk

Most interesting on the development risk front was that, as I've shown here, there was 100% consensus on the need for specialist testing skills, however the organisation's strategy to that point had been not to have specialist testers. Whilst testing skills don't necessarily require dedicated testers, the phrase "specialist testing skills" does imply a level of testing focus beyond a team consisting solely of developers.

Company Perception

The "Company Perception" demonstrated most clearly the desire to increase the level of rigour in development, with the desired level of testing clearly above what was perceived to be the current status in a similar way to the results shown here.

Starting from the Right Place

As I wrote in my post "The Living Test Strategy", a test strategy in iterative software development is not based around documents, but embodied in the individuals that make up the development teams, and the responsibilities that those individuals are given. The role of defining test strategy is then not in writing documents, but in communicating to those teams and individuals what their responsibilities and expectations are. Some fundamental decisions need to be made in order to establish a starting framework for these responsibilities. Questions such as:-

  • Are we going to have specialist testers?
  • If so will this be in every team?
  • What level of acceptance test automation is appropriate for the risk appetite of the business?

need to be answered in order to establish the initial testing responsibilities and hiring needs of the team.

Using a risk questionnaire to profile the business leadership has given me invaluable insight into the risk appetite of the business as I step into a new company. In addition to giving an overall understanding of where the company sits in its acceptance of risk, the approach has also highlighted where there is a lack of consensus over testing issues, which might need further investigation as to why.

As an experiment I would definitely regard it as a success. In my context, as someone stepping into a new agile organisation and looking at test strategy, it is particularly useful. I can see other uses as well, though. Whether your goal is to understand your new company, or to simply review where your current company stands, or even to expose conflicting opinions on testing and risk across the business, a risk questionnaire may just be the tool to help you achieve it.

References

A good guide to financial risk profiling and the requirements of it can be found here: https://www.ftadviser.com/2015/06/18/training/adviser-guides/guide-to-risk-profiling-k4JJTswPCCWAPyEsMwEZsN/article.html

Another good example of a risk profiling questionnaire that you can take online: https://www.standardlife.co.uk/c1/guides-and-calculators/assess-your-attitude-to-risk.page

Thursday, 21 July 2016

Making Luck

This week I've accepted a permanent role at River, the company that I've been consulting with since February. The title of the role is immaterial - what matters is that I'll be helping both Product Owners and Testers to develop themselves and their functions within the company.

I'd like to share the story of how I came to be working at River. I apologise if it comes across as self-indulgent, but I think there are some valuable lessons in it for those focussing on their careers, in demonstrating the value of the effort that we put into our professional development over time.

An Eventful Day

The events of my day on 18th January ran something like this:-

  • 8:00am - at train station as normal ready for work
  • 8:05 am - receive calendar invitation on my phone for "all company" meeting that morning. Predict redundancies.
  • 8:30am - get to work.
  • 10:00am - discover I am being made redundant along with a number of colleagues
  • 11:15 am - in pub
  • 1:00pm - in Cafe eating disappointing sausage sandwich in pathetic attempt to soak up copious amounts of beer before going home
  • Afternoon with family sobering up
  • 7:00pm - At Cheltenham Geek Nights, a development community event run in Cheltenham by Tom Howlett (@diaryofscrum). I'd become aware of it through communicating with Tom previously, when he approached me to do a talk on Testing Big Data in Agile. The speaker on this occasion was David Evans. I had already been looking forward to the opportunity to catch up with David, having benefited from his expertise in the past and maintained an amicable relationship since. After the events of the day I was particularly interested in his insight into the roles and team structures he was seeing develop through his work consulting in the fields of Agile Testing and Specification by Example.
  • 9:30pm - back in pub
  • 11:00pm - taxi home. Speaking to Tom and David that evening was the perfect antidote to my bad news and I ended the day on an optimistic note.
  • 5:00am - awake with hangover and worrying about future

During the next few days I was genuinely overwhelmed with the number of people who got in touch with me. Some contacted me just to wish me good luck, or to ask me what had happened. Others provided suggestions of possible roles or options available to me, and some even came in with direct offers of work. I can't overstate how valuable this communication was to me. Whether or not I followed up on every suggestion, the fact that so many people got in touch was hugely reassuring in an uncertain time and to those folks who got in touch - I can't thank you enough.

One of the options that I did follow up on was when Tom suggested a possible contract Product Owner role at a company he was consulting at. It turned out that he was looking to create a team of Product Owners to help to alleviate some of the challenges that they were facing in their Agile development adoption. This was exactly the kind of option that I was looking for - something that would allow me to get back to work quickly, in a very different organisation and provide some time for me to decide on a next long term move.

Landed on Your Feet

Over the ensuing weeks and months I've found the nature of the challenges presented in a very different work context refreshing. The experience of working with such an amazing, close-knit team at RainStor provides a constant reminder of how effective teams can be if you:

  • Structure them with the right combinations of skills and experience levels
  • Allow them sufficient stability of members and technologies to confidently establish and meet release forecasts
  • Provide sufficient trust combined with flexibility over the delivered solution to fully engage the team in the creative process

Having this knowledge is a powerful tool when looking at where to focus to help add value in a new company.

When I've told people about my new position in a local organisation, a number have used a phrase like

'You've landed on your feet'.

This is an interesting one, as it implies a strong element of luck. Of course I would agree that the timing was somewhat fortuitous, however let's look at some of the events that led up to the situation:-

  • The contract role was suggested to me by Tom based on a conversation at a community event that I attended
  • The reason that I knew about the event was that this blog, and talks I'd done in the Agile/Testing communities, had prompted Tom to ask me to speak there in the past
  • The reason that I specifically attended the event that day was because there were people there who I already knew from the software community
  • The reason that Tom felt confident in suggesting the role to me was because he had read my work, spoken to me personally, engaged me to do a talk and was confident in my abilities

This option, and other suggestions or offers that folks so kindly extended, made me really thankful for the time that I'd taken to invest in being an active member of my professional community.

The Slight Edge

I recently started reading The Slight Edge by Jeff Olson. Whilst I'm not big on self-help books, this one had been recommended to me and the concept interested me. One of the principles that the author presents is that it is not one-off Herculean effort but repeated incremental effort over time that yields long-term success. Many trends in modern society revolve around the concept of instant gratification: the lottery win, the talent competition, the lean start-up overnight success. What Olson suggests is that far more reliable success comes from incremental effort to improve, applied over time. It comes from sticking with something even if it doesn't yield immediate results.

I've been writing a blog now for 6 years, and hit my 100th post last year, and I'd like to think that the effort I've put into blogging, attending meetups, making connections and generally working to improve myself through sharing ideas with others made that possible. If I hadn't invested that time over those years, then I wouldn't have interacted with so many different folks and would not have built a network of contacts and a body of work that allowed other professionals to place their faith in me.

Writing a blog is not easy, particularly when you have a busy job and a young family. During the first months of writing a software testing blog I had no idea whether anyone at all was reading it - it was 8 months and 10 posts before I got my first tweet share. In later years, particularly during the 'creativity vacuum' that comes when your small company is taken over by a large multinational, I considered stopping writing a couple of times. I really started the blog from a desire to validate approaches rather than to network, but it was through redundancy that I realised just how valuable the effort had been in building a great network of contacts to provide opportunities and guidance when needed. It is never nice losing a job involuntarily, yet ironically it was at exactly such a time that the extra effort of committing to something beyond the day job really showed its value.

I'm of the opinion that it is better to commit to one thing and stick with it than take on 5 things and give all of them up weeks later. I've given a specific example of writing here, however there are innumerable other areas where success comes not through moments of inspiration but through perseverance:

  • Keeping on top of emails
  • Keeping a backlog or to-do list up to date
  • Maintaining a sprint/status board
  • Writing unit tests
  • Testing with the right user roles not just admin
  • Adding testability/supportability properties to code
  • Having regular catch ups with team members
  • Diet
  • Exercise
  • Not having a cigarette (10 years so far for me)

The takeaway is that commitment matters here - you don't see the value, or get the power of incremental improvement, unless you stick at something. One of the great powers of the internet, blogs and conferences is that anyone with a great idea can contribute massively to their professional community. Looking at those people who enjoy universal respect amongst the groups that I communicate with, it is the ones that are persistently contributing, always putting in effort and maintaining their commitment that attract the greatest admiration.

image: https://www.flickr.com/photos/bejadin/17147146227

Thursday, 30 June 2016

Being Honest

It's safe to say that there has been a fair amount in the UK news recently about honesty, or the lack thereof, surrounding a certain political event that I'm not going to discuss here. I've always been an honest kind of person. Whether driven by a higher moral compass, fear of getting caught out, or simply a lack of imagination such that it doesn't really occur to me to fabricate, even to my own advantage, I'm rarely tempted to resort to fiction to resolve an awkward situation. I had a friend at school who was very different. He would tell me stories that, years later in adulthood, I have discovered were complete hogwash. Like the time that he told me that his cat had killed his neighbour's parrot while he was at home alone ...

... a story that I later discovered he'd invented to excuse his missing a homework deadline and had told all of his school friends to reinforce his tale to the teacher. I'm still friends with him (I'm not entirely sure why) and even now every so often he reveals to me the truth behind some absolute porker he told me years before.

It's easy for me to say that honesty is always the best policy, but it doesn't always feel like the best option. A situation that occurs frequently in the working environment, and that can tempt anyone to stretch the truth, is when we're worried about the status or progress of our work. When this happens it is easy to fall into the trap of thinking that giving the impression of being in a better position than we actually are might satisfy those that we report to for a while, and allow us to catch up or fix the issue.

But you rarely do catch up. The fact that you are in a position where you feel the need to fabricate your progress in the first place must have come about because your progress is not quite where you, or someone else, would have liked it to be. This will have happened for a very good reason, like the work being more challenging than you thought, which you're not going to change overnight.

Let's look at the potential impact of misrepresenting progress to some important people:-

To The Customer

The more distant the third party, the easier it is to be economical with the truth, so the customer is a prime candidate for inventive reporting. Of course you don't want the customer to know things are taking longer than expected, why would you? Perhaps you run the risk of exposing that your up-front estimation processes were, actually, well, estimates, and the development is taking slightly longer. Or, even worse, it could involve admitting that you are actually doing some work for, gasp, another customer. If the customer is on the other end of an email or phone then the temptation to give a false impression of progress is ever present.

I'd advise against it. Consider some of the disadvantages that can come down the line:-

  • If you misrepresent progress on a development, and then hit a technical challenge that requires a rethink, or at least some major decision making on the part of the customer, how are you going to broach that conversation when the customer thought that you were already well past making such fundamental decisions?
  • Also if you misrepresent progress then it diminishes your ability to negotiate scope as the development progresses. Better to be open early and often, at least then you can negotiate openly around what you are delivering.
  • If you misrepresent the level of rigour in your testing, this can really backfire. I've worked with more than one supplier who insisted that they were thoroughly testing what was delivered. I therefore focussed my own testing on the integration points and relied heavily on their testing of their own features. Discovering a high number of issues in those features immediately called into question either their competence or their honesty. In subsequent developments I actually had to negotiate for the supplier to deliver the software more slowly, to give adequate time for them to test it.
  • If you misrepresent the severity of an issue that has been discovered, then it is hard to justify taking time in testing a fix and therefore you place pressure on yourself to push out untested bug fixes. When I was running both support and testing I had to negotiate a reasonable delay on delivering changes to the customer a number of times. This was done on the basis that we had just found an issue and the last thing we wanted to do was rush in a change and introduce more issues, so we needed to take the time to do some thorough testing around the fix. Under-playing the severity of an issue makes such negotiations more difficult.

In the drive for 'responsiveness' to please the customer it's easy to lose sight of the value of honesty. Responsiveness in communication can deliver a huge amount of value without necessarily requiring the same responsiveness in scheduling or delivery. In my experience customers appreciate knowing where they stand, even if it is not necessarily where they want to be. They also find it hard to argue if you are putting effort into ensuring improvements to the quality of the end product you're delivering to them. Even with the awkward subject of other customers, software customers want to be buying from a stable supplier, so your having other customers is something that they should view in a positive light (though not all will). Showing respect for those other customers and being seen to uphold your commitments to them will reinforce your integrity as a supplier. If, on the other hand, you are seen to default on your commitments just to please the customer in front of you, they will be under no illusions that they'll receive the same treatment once they're not in front of you.

To your colleagues

Misrepresenting your progress to colleagues seems like a strange thing to do, yet I have encountered situations where this has happened when individuals are struggling or making mistakes.

  • On one specific occasion I can remember a tester working in an agile team who felt that he wasn't keeping up with the user stories he was supposed to be picking up. Instead of admitting this, he misrepresented the level of testing that had been done on the user stories he was working on, to the extent that I thought they were completed. I then reported to other teams in the business that they could make plans on the basis of that work being completed. On examination I discovered the status of the tests had been misreported and that what had been done gave us little or no confidence in those features. The result was me having to drop my other work in order to devote time to testing those items so that we could fulfil my commitment to the other teams. Had the tester been honest with me about his status, I wouldn't have misreported our progress and the problem simply wouldn't have occurred.
  • Similarly, if an individual has made a mistake then the temptation can be to try to cover it up. One lesson that I personally have had to learn over and over again, is that it is rarely as bad as you think, and getting a mistake out in the open is empowering. Keeping mistakes concealed, conversely, accumulates stress on the individual, and anyone else taking the responsibility to resolve a difficult situation will not appreciate it if those who have made mistakes hide the truth in an attempt to conceal their errors.

Being honest with your colleagues goes beyond operational benefit. In the world of self-managing teams and a move away from strict hierarchical control and micro-management, creating a community of trust with your colleagues is essential. If your co-workers feel that they can't trust your status reports in terms of what you have developed or tested, then they will likely resort to committing effort to double checking your work at the expense of progressing their own, a situation which at best will result in ill-feeling and at worst could blow up into serious confrontation.

To management

Senior managers tend to be more removed from the immediate development and testing work and so often don't have the low level view of the software products to really make a good assessment of status. That's what they rely on you for, particularly the testers. It's therefore relatively easy to give a false impression of progress to senior management, but this can have dire consequences for them, your colleagues or you.

  • Something I've seen more than once is a senior developer or architect taking it upon themselves to develop a pet project, put together because they see value in it for the business and want to impress with their ingenuity. Much of this type of development goes on outside of the core development processes and has had little or no formal testing. The software is presented to C-level executives as release-ready in all but a "rubber stamp" sign-off from the testing team (I have actually had a CEO say to me "now all we have to do is test it - right?" in relation to just such a development). The result can be a lot of stress placed on colleagues who have to take this work and try to test it or support it to the level expected of them. The sad thing here is that there's no need to keep such projects away from others. The best chance of success for any new endeavour is to bring others on board with it and get them behind it, being honest about what you don't have the time or skills to do, such as the testing. If you get others on board and give them the credit for the contribution they make, they'll be more than happy to report your contribution to senior management and give you the credit that you are looking for.

  • Another route to misleading management comes from the manner in which we communicate with them. If we're communicating via metrics or aggregated reports then for some reason it becomes easier to massage the truth. Now, I've admitted that I strive for honesty, however I'm not a saint and admit that I have in the past been tempted to tweak metrics and measurements to make a situation look better. For some reason the level of guilt around gaming metrics to give a positive spin on things is far greater than it would be doing the equivalent in an open conversation. If ever there was a need for an open and communicative relationship with senior management, this is one. Inevitably the truth will out, and no metrics in the world can hide the reality of the status of development. If you don't realistically apprise management of the status of development then someone else will, and if that person ends up being a frustrated customer who is suffering delays or poor quality, then the outcome will be far worse than the awkward conversation you could have had around more realistic metrics.

Honestly, Honesty - it's not that hard

Looking back at some of the most challenging times of my career so far, many of the biggest difficulties have been the direct result of someone not being open and honest as to the status of their work. The problem with taking a misleading line in relation to work is that, once done, it is so easy to escalate and compound your problems by telling more lies. Taking ownership of difficult situations, getting the problem out into the open and driving through to a conclusion is an empowering activity that actually helps with personal development and builds experience and character. Whether in software development/testing, management or support - there isn't a successful technical leader out there who hasn't at some point had to tackle a late project, missed deadline or simple mistake head on and face the consequences.

Coming into a new organisation, as I have done this year, has been a difficult journey for me given the level of knowledge that I had developed around my team and processes in my former role. I've inevitably made some mistakes as I've worked to understand the culture and processes of a new organisation with a very different technology stack and customer base. Maintaining a level of openness around what I'm doing and any mistakes has hopefully allowed the same openness in communication to me and encouraged others to be honest with me about their work - as honesty is a two-way thing. By being honest with colleagues, management and customers I believe that we not only uphold our own principles but also encourage others to do the same.

image: https://www.flickr.com/photos/13923263@N07/1471150324

Monday, 23 May 2016

Rivers and Streams

After a productive start to the year in terms of writing, I realise that I didn't manage to post anything in April. Looking back I can identify four reasons for this:-

  1. The first is that I was working on an article for the Testing Circus Magazine on Agile Test strategy. If you haven't done so please take a look at the April Edition of Testing Circus - my article is in there along with many interesting articles and interviews.
  2. The second is that I was responding to a request for an interview from the folks at the A1QA blog. I hadn't done an interview before, but having moved on from a long-term role earlier in the year, in the optimistic glow of change I said yes when asked this time. My interview hasn't appeared at the time of writing this post but I hope that it will be there soon. In the meantime there are some great interviews with testing experts to read.
  3. I was getting ready to turn 40 - that's out of the way now, back to business, nothing to see here
  4. I had been at my current engagement just long enough to get really busy

In relation to this last one, I now know that the lead time on getting really busy in a new role is about 2 months. Up until that time there's a sufficient level of inexperience of the context, and isolation of your activities from the day-to-day work, to keep your inbox blissfully quiet. Once two months are up, people start associating your presence with actual useful activity and the honeymoon period is over: you're busy.

The work that I've been doing has been setting up Product Owner and Testing functions at a company called River. The name River is an interesting one for a company involved in software development, as the nature of rivers has some striking parallels with work - particularly how the nature of our activity reflects on our productivity. Anyone with a strong dislike of analogy should probably stop reading now.

The fast flowing river?

If I asked you to think of a fast flowing river you may be inclined to think of something like this

The rushing white water is clearly moving very quickly as it cascades over the rocks. The interesting thing is that, whilst this river might look fast, a huge amount of the energy is not going into moving the water downstream. There are eddies and currents, and energy is being used to move the water in many directions that aren't always downstream. The net result is that, despite the steep gradient it is descending, the actual average velocity of the water downstream is slow.

What I've found, both in my own personal work and in the teams that I've led, is that in a similar way it is possible to expend a huge amount of energy being busy and filling the working day without making much progress towards your goals. An apparently busy individual or team can actually be making slow progress if much of that activity is directed at overcoming friction. I'm not talking about the friction of ill-feeling between employees here, but the friction of energy expended on activities that unnecessarily distract from progressing work to the benefit of the organisation. Whilst organisational friction comes in too many forms to consider here, a lot of the friction that can massively impact velocity is relatively straightforward in nature and easily avoided. Some 'velocity killing' sources of friction that I've encountered are both obvious and easy to reduce, yet still affect many companies and teams:

  • regular disruptions or distractions
    A good collaborative working environment is a boost to motivation and productivity, however too many distractions reduce the ability of individuals to take the time to focus on their work. Likewise regular ad-hoc requests coming into a team can disrupt the flow of work repeatedly, causing friction through context switching between tasks.

  • cutting corners
    Nothing reduces velocity like, say, bad code, and nothing generates bad code like cutting corners. I've seen a number of occasions where the decision to push ahead with a product that hadn't been developed with any thought to maintainability or sensible decoupling/separation of concerns has been made to save time. The inevitable result is that any team who has to support, maintain or extend that product later ends up getting slower and slower as they struggle to understand the structure of the code and how to safely change it. As a general rule, unless what you are working on is a temporary one time activity, any corner that you cut now will result in significantly harder work at some time in the future.

  • Handing over poor quality or incomplete work
    I can remember James Lyndsay teaching me once that Exploratory Testing was not an appropriate approach if the software was too poor in quality. My immediate thought was that this was an ideal time to explore, as you would find loads of bugs - however on reflection his point was sound. Finding bugs is time consuming and disruptive - as Michael Bolton so clearly explains in this post - and so making any kind of testing progress through a system that is awash with problems is going to be an exercise in being very busy but getting little done, while generating a lot of friction between developer and tester in finding and fixing the issues. The developer-tester interface is just one where pushing work before it's ready causes friction and slows everyone down. Similarly the Product Owner to developer interface suffers if the PO prioritises work into the team that is not ready. As I wrote in "A Replace-Holder for a Conversation", I'm a great believer in adding acceptance criteria to the story at the point that it's picked up. On a few occasions, however, I have received or delivered stories into a sprint where, on elaboration, it became clear that we didn't have a clear enough idea of the value being delivered, and the result was delay or, worse, misdirected effort.

  • Repeating the same conversations
    This one is a real progress killer. After working on a product for some time, there are stock conversations around that product that certain individuals will revert to given any significant conversational time spent discussing features or architecture. No matter what you are trying to achieve, it would all apparently have been a lot easier if we'd re-architected the whole system the way someone wanted to a number of years ago, or if we just went back to having test phases after development had completed. Poring over decisions that were made years ago can't help now, and reverting to conversations of this nature in every meeting massively increases the friction of the process. Inevitably someone will at some point have to make a pragmatic decision that does not involve re-architecting the entire product stack, or reverting to waterfall, and thirty minutes of everyone's time could have been saved on unnecessary conversation.

The slow flowing river?

By contrast, if I asked you to think of a slow flowing river you might conjure up images like this

We may think of such rivers in terms such as 'lazy' and 'placid', however the fact is that the average downstream velocity of the water in such a river is actually greater than that of an apparently fast mountain stream. This efficiency comes about through the massively reduced friction on the water travelling downstream. Despite the fact that the gradient is less, the efficiency gained means that overall velocity is faster, and the fastest point is the centre of the river, where the water is furthest from the friction of the sides.

It seems sensible to strive to minimise friction, yet we know that simply handing development teams a set of requirements and leaving them alone for 6 months is not an appropriate solution here. How do we balance the potentially conflicting goals of minimising friction whilst also collaborating regularly, providing feedback and generally communicating? Here are some ideas:-

  • Don't distract people if they're trying to concentrate
    If you want a social chat - see who is in the kitchen. If you have an issue, ask yourself if it is essential that you speak to the person right then, or could it wait? I'm personally quite bad at distracting people immediately on issues that could possibly wait, something that I've become more aware of through writing this post. I don't want to imply here that teams should work without conversation, a healthy team dynamic with strong social bonds is really important, however if one person is distracting another who is trying to focus then that not only introduces friction but also can erode team goodwill in that people will start to resent the distractions.

  • Keep it to yourself
    If every time you find a bug, or an email annoys you, you announce it to everyone within earshot, then chances are you're the workplace equivalent of a boulder in the river. Keep it to yourself until a natural break occurs and you can catch up with people then. I've seen whole teams slowed dramatically by the fact that an individual would announce each small thing that annoyed them through the day.

  • Avoid ad-hoc tasks
    If you have a task that might mean a context switch for someone, e.g. a bug to investigate or an emergent task, wait for them to finish what they are working on, or raise the need to do it in a stand-up so that the team can plan it in. Managers are notoriously guilty here of wandering in and disrupting teams with ad-hoc requests that derail them from their scheduled activities - having a way of capturing and tracking ad-hoc requests is a useful tool for teams who suffer heavily here. Again I like to capture this information on a team whiteboard and clearly mark any impacted tasks as 'Paused' whilst the ad-hoc task is addressed.

  • Have timely conversations to confirm assumptions.
    Identifying necessary conversations in a stand-up meeting and planning them into the day is a good way of avoiding blocks on progress whilst reducing the friction of trying to get people together ad-hoc. I like to see important conversations either booked in calendars or at least identified and written up e.g. on a scrum board so that everyone is aware of when they need to catch up, thereby avoiding the friction of people walking round the office trying to round up the necessary individuals, like children on a playground looking for volunteers to play "army" (the ultimate demonstration of friction in that it typically lasted the entire lunch-break and never actually left any time for playing army - see number 8 here).

  • Don't rush jobs that others will pick up
    Maintain your standards and defer work if necessary. Being busy is not an excuse to multiply the effort required of the person who has to pick up your work, whether that be poorly understood stories or badly tested features. No-one is going to thank you for 'getting the job done' if the result is someone else having to do twice as much.

  • Take a break
    One of the most interesting findings of the study referenced in this post, aside from the fact that someone thought that watching Netflix and drinking vodka constituted work, is that taking regular breaks actually improves productivity.

Small changes yield big benefits

A colleague of mine was explaining to me this week how, off the back of reading Jeff Patton's "User Story Mapping", he was looking differently not only at his work but also at his own activities. He'd really taken on board the idea of reducing friction across his life, to the point of timing his gym visits around the length of his commute so that he didn't waste time in busier traffic when, on other days, he could shorten the round trip by going earlier. By looking at the friction in everyday activities there are huge gains to be made in productivity.

Having worked with teams that had predictable, organised processes, where we had minimised the friction in our own work as much as possible, I have found that a smoothly running team or department makes it easier to both

  • Identify and isolate the impact of higher level decisions
  • Work through issues and still maintain a stable velocity and predictable level of rigour
Sometimes it's so tempting to attribute problems to higher-level organisational issues and major decisions that it's easy to overlook simple changes closer to home that could yield huge benefits. No company, individual or organisation gets it right every time, but by reducing the friction in our own individual and team activities we can ensure that we're not the limiting factor in achieving our collective goals.

Monday, 14 March 2016

Minimum Viable Automation

Things don't always work out as we planned.

For anyone working in IT I imagine that this is one of the least ground-breaking statements they could read. Yet it is so easy to forget, and we still find ourselves caught out when our carefully laid plans start to struggle in the face of grim reality.

The value of "Responding to change over following a plan" is one of the strongest, and most important in the Agile manifesto. It is one that runs most contrary to the approaches that went before and really defines the concept of agility. Often the difference between success and failure depends upon how we adapt to situations as they develop and having that adaptability baked into the principles of our approach to software development is invaluable. The processes that have grown to be ubiquitous with agility - the use of sprint iterations, prioritisation of work for each iteration, backlog grooming to constantly review changing priorities and delivering small pieces of working software - all help to support the acceptance and minimise the disruption of change.

Being responsive to change, however, is not just for our product requirements. Even when striving to accept change in requirements we can still suffer if we impose inflexibility in the development of our testing infrastructure and supporting tools. Test automation in particular is an area where it is important to mirror the flexibility adopted in developing the software. The principle of harnessing change for our customers' competitive advantage is as important in automation as it is in product development, if we're to avoid hitting problems when the software changes under our feet.

Not Set in Stone

A situation that I encountered earlier this year whilst working at RainStor provided a great demonstration of how an inflexible approach to test automation can lead to trouble.

My team had recently implemented a new API to support some new product capabilities. As part of delivering the initial stories there was a lot of work involved in supporting the new interface. The early stories were relatively simple to test, involving fixed-format JSON strings being returned by the early API. This led me to suggest whole-string comparison as a valid approach to deliver an initial test automation structure that was good enough to test these initial stories. We had similar structures elsewhere in the automation, whereby whole strings returned from a process were compared after masking variable content, so the implementation was much simpler to deliver than a richer parsing approach. This provided a solution suitable for adding checks covering the scope of the initial work, allowing us to focus on the important back-end installation and configuration of the new service. The approach worked well for the early stories, and put us in a good starting position for building further stories.
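
To make the approach concrete, here is a minimal sketch of the kind of whole-string comparison check described above. The endpoint, field names and helper names are my own invention for illustration rather than the original implementation; the idea is simply to mask content that legitimately varies between runs and then compare the remainder of the response verbatim.

    import re
    import urllib.request

    def mask_variable_content(response_text):
        # Replace values that differ on every run (ids, timestamps) with fixed
        # tokens so that the rest of the string can be compared verbatim.
        masked = re.sub(r'"id":\s*\d+', '"id": <ID>', response_text)
        masked = re.sub(r'"timestamp":\s*"[^"]*"', '"timestamp": <TIMESTAMP>', masked)
        return masked

    def response_matches(url, expected_masked_string):
        # Fetch the raw JSON string from the API and compare it, after masking,
        # against the expected string agreed for the check.
        with urllib.request.urlopen(url) as response:
            actual = response.read().decode("utf-8")
        return mask_variable_content(actual) == expected_masked_string

While crude, a check like this is quick to write and good enough when the response format is fixed, which is exactly why it struggled once the format became unpredictable.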

Over the course of the next sprint, however, the interface increased in complexity, with the result that the strings returned became richer and more variable in ordering and content. This made a full string comparison approach far less effective, as the string format was unpredictable. On returning to the office from some time away it was clear that the team working on the automation had started to struggle in my absence. The individuals working on the stories were putting in a valiant effort to support the changing demands, but the approach I'd originally suggested was clearly no longer the most appropriate one. What was also apparent was that the team were treating my original recommendation as being "set in stone" - an inflexible plan - and were therefore not considering whether an alternative approach might be more suitable.

What went wrong and why?

This was a valuable lesson for me. I had not been nearly clear enough that my initial idea was just that - a suitable starting point, open to change in the face of new behaviours. In hindsight I'd clearly given the impression that I'd established a concrete plan for automation and the team were attempting to work to it, even in the face of difficulty. In fact my intention had been to deliver some value early and provide a 'good enough' automation platform. There was absolutely no reason why this should not be altered, augmented or refactored as the interface developed.

To tackle the situation I collaborated with one of our developers to provide a simple JSON string parser in Python to allow tests to pick specific values from the string, against which a range comparison or other checks could be applied. I typically learn new technologies faster when presented with a starting point than when researching from scratch, so, as I didn't know Python when I started, working with a developer to deliver a simple starting point saved me hours here. Within a couple of hours we had a new test fixture which provided a more surgical means of parsing, value retrieval and range comparison from the strings. The original mechanism was still necessary for the API interaction and string retrieval. This new fixture was very quickly adopted into the existing automated tests and the stories delivered successfully, thanks to an impressive effort by the team to turn them around.
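
For illustration, a minimal sketch of what such a fixture might look like is below. The function names and the example response are hypothetical - the real fixture was richer - but it shows the shift from whole-string comparison to picking out individual values and applying range checks.

    import json

    def get_value(json_string, key_path):
        # Walk the parsed JSON using a dot-separated path, e.g.
        # "results.0.rowCount", treating numeric keys as list indexes.
        node = json.loads(json_string)
        for key in key_path.split("."):
            node = node[int(key)] if isinstance(node, list) else node[key]
        return node

    def check_in_range(json_string, key_path, minimum, maximum):
        # Check that the numeric value at key_path falls within [minimum, maximum].
        value = float(get_value(json_string, key_path))
        return minimum <= value <= maximum

    # Hypothetical example response:
    body = '{"results": [{"rowCount": 42, "elapsedMs": 130}]}'
    assert get_value(body, "results.0.rowCount") == 42
    assert check_in_range(body, "results.0.elapsedMs", 0, 500)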

Familiar terms, newly applied

We have a wealth of principles and terms to describe the process of developing through a series of incremental stages of valuable working software. This process of incremental delivery can place pressure on automation development. Testers working in agile teams are exposed to working software to test far earlier than in longer-cycle approaches, where a test automation platform could be planned and implemented whilst waiting for the first testing stage to start.

Just as we can't expect to deliver the complete finished product out of the first sprint, we shouldn't necessarily expect to deliver the complete automation solution to support an entire new development in the first sprint either. The trick is to ensure that we prioritise appropriately to provide the minimum capabilities necessary at the time they are needed. We can apply some familiar concepts to the process of automation tool development that help to better support a growing product.

  • Vertical Slicing - In my example the initial automation was intended as a Vertical Slice. It was a starting point to allow us to deliver testing of the initial story and prepare us for further development - to be built upon to support subsequent requirements. As with any incremental process, if at some point during automation development it is discovered that the solution needs restructuring/refactoring or even rewriting in order to support the next stage of development, then this is what needs to be done.
  • The Hamburger Method - For anyone not familiar with the idea, Gojko Adzic does a good job of summarising the "Hamburger Method" here; it is a neat way of approaching incremental delivery/vertical slicing that is particularly applicable to automation. In the approach we consider the steps of a process as layers within a hamburger. Although we require a complete hamburger to deliver value, not every layer needs to be complete, and each layer represents incremental value which can be progressively developed to extend the maturity and capability of the product. Taking the example above, clearly the basic string comparison was not the most complete solution to the testing problem. It did, however, deliver sufficient value to complete the first sprint.



    We then needed to review and build out further capabilities to meet the extended needs of the API. This didn't just include proper parsing of the JSON but also parameterised string retrieval, security configuration on the back end and the ability to POST to as well as GET from the API.

  • Minimum Viable Product - The concept of a minimum viable product is one that enables a process of learning from customers. I like this definition from Eric Ries

    "the minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort."
    It's a concept that doesn't obviously map to automation development, however when we develop test architectures the people who need to use them are our customers. When developing automation capabilities, considering an MVP is a valid step. Sometimes, as in my situation above, we don't know in advance how we're going to want to test something. It is only in having an initial automation framework that we can elicit learning and make decisions about the best approach to take. By delivering the simple string comparison we learned more about how to test the interface, and used this knowledge to build our capability in the Python module, which grew over time to include more complex features such as different comparison methods and indexing structures to retrieve values from the strings.

    In another example, where we recently implemented a web interface (rather fortunate timing after an 8 year hiatus, given my later departure and subsequent role in a primarily web-based organisation), our "Minimum Viable Automation" was a simple Selenium WebDriver page object structure driven through JUnit tests of the initial pages. Once we'd established that, we learned how we wanted to test, and went on in subsequent sprints to integrate a Cucumber layer on top, as we decided that was the best mechanism for us given the interface and the nature of the further stories. A rough sketch of the page object idea follows below.
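
Our actual structure was a Java/JUnit page object layer; purely to illustrate the shape of a 'minimum viable' page object, here is a rough equivalent sketched in Python with the Selenium bindings. The page, locators and URL are invented for the example.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        # A minimal page object: it knows how to locate and drive the elements
        # on one page, and exposes intention-revealing methods to the tests.
        def __init__(self, driver):
            self.driver = driver

        def load(self, base_url):
            self.driver.get(base_url + "/login")
            return self

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()
            return self

        def error_message(self):
            return self.driver.find_element(By.CSS_SELECTOR, ".error").text

    # A test then reads as a short sequence of page interactions:
    # page = LoginPage(webdriver.Firefox()).load("https://example.test")
    # page.log_in("user", "wrong-password")
    # assert "Invalid" in page.error_message()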

Being Prepared

Whilst accepting change is a core concept in agile approaches, interestingly not all organisations or teams working with agile methods apply the same mindset to their supporting structures and tools. Having autonomous, self-managing teams can be a daunting prospect for companies more accustomed to a command-and-control structure and high levels of advance planning. The temptation in this situation is to try to bring as many of the supporting tools and processes as possible into a centralised group to maintain control and provide the illusion of efficiency. This is an understandable inclination, and there are certainly areas of supporting infrastructure that benefit from being managed in one place, such as hardware/cloud provisioning, however I don't believe that this is the correct approach for test automation. Attempting to solve the problems of different teams with a single central solution takes autonomy away from teams by removing their ability to solve their own problems and find out what works best for their products. This is a big efficiency drain on capable teams. In the example above, our ability to alter our approach was instrumental in recovering successfully when the original approach had faltered.

My personal approach to tool creation and use is simple - focus on preparation over planning, and put in place what you need as you go. Planning implies knowing, or thinking that you know, what will happen in advance. Planning is essential in many disciplines, however in an unpredictable work environment placing too much emphasis on it can be dangerous; unexpected situations undermine plans and leave people confused about how to progress. Preparation is about being in a position to adapt effectively to changing situations. It is about having the right knowledge and skills to respond to change and adapt accordingly, for example through having a 'Square Shaped Team' with a breadth and depth of relevant skills to take on emergent tasks. By all means centralise a body of knowledge about a range of tools to assist teams in their selection of tools or techniques, but leave the teams to decide which is best for them. In the web testing example above, whilst we'd not tested a web interface before, by maintaining up-to-date knowledge in house of the technologies and approaches that others were using we were still well prepared to start creating a suitable test architecture when the need arose.

By building a team with a range of knowledge and capabilities we are preparing ourselves for the tasks ahead. Using approaches like vertical slicing, the hamburger method and the minimum viable product in the creation of our tools allows us to apply that preparedness in a manner appropriate to the fluidity of agile product development. Or to put it more simply: by accepting that change happens, and using tried and tested approaches to incrementally deliver value in the face of change, we can ensure that we are prepared when it comes.

Wednesday, 2 March 2016

A Replace-holder for a Conversation

Acceptance criteria in Agile user stories are funny things when you think about it. On one hand we describe our stories as 'placeholders for a conversation' and strive to minimise the documentation that we pass around the organisation in favour of face-to-face communication. On the other we insist on having a set of clear, unambiguous, testable statements written up against each of these stories that we can use to define whether the story is 'done' or not. This could be viewed as something of a contradiction. Whether it is or not depends, I believe, on how, and more importantly when, those criteria are created.

There is no shortage of material on Agile software development sites suggesting that a complete set of acceptance criteria should be in place before any attempt is made to size stories or plan sprints. This article from the Scrum Alliance, for example, states

"I can't overemphasize just how crucial it is to have proper acceptance criteria in place before user stories are relatively sized using story points!"

The comments that follow the article provide some interesting perspectives on the subject, both from other Agile practitioners and the author himself, who adds a significant caveat to the earlier statement

"Teams that have been together a long time can look at a new story and immediately see that it's relative to another story without the need for acceptance criteria"

Another here on backlog refinement recommends a "definition of ready" that includes, amongst other things

The story should be testable (that means acceptance criteria is defined).

I'd disagree with this sentiment. I don't personally adhere to the approach of adding acceptance criteria to stories on the product backlog. I perceive a number of issues around adding acceptance criteria before the story is sized which I think can undermine core agile and lean principles.

A replaceholder for a conversation

My primary concern is that once acceptance criteria are added to a story it ceases to be a "placeholder for a conversation". The product owner, in order to get stories to meet the 'definition of ready', can be pressured to add the criteria to the story themselves, outside of any conversations with the other key roles in the team. There is then potential for developers to pick up and deliver the story based solely on these acceptance criteria, without at any point needing to discuss the story directly with the Product Owner or tester. There is a natural tendency when working to take the path of least resistance, and if a set of written criteria exists on the story that the developer can use to progress their work without having to talk to anyone else, then many will be inclined to do so.

This problem is apparent in both directions. There is also an inevitable temptation for the PO to treat the creation of acceptance criteria as the means of communicating what is needed to the team. This could be seen as a means to 'make sure' the team do exactly what the PO wants, whilst reducing the personal drag that comes from having to have conversations with them. This undermines the primary value of the criteria, in my opinion, which is to capture the consensus reached between the PO and the team on the scope and value of that piece of work.

The purpose of the user story is not to define exactly what we want or how we want the software developed, but to strive to capture the essence of who it is for and why they want it. In the situation where the PO creates the criteria away from the team then essentially all that we have done is to relegate acceptance criteria to becoming a new flavour of up front written specification. In doing so we embed in them all of the risks that are inherent in such artefacts, such as inconsistent assumptions and inconsistent interpretation of ambiguous terms.

We also run the risk of contradicting at least one of the agile principles -

"The most efficient and effective method of conveying information to and within a development team is face-to-face conversation."

I'm sure you will have noticed that I've used terms such as "risk" and "tendency" liberally here. A well-disciplined team can avoid the pitfalls I've highlighted by insisting on conversations at appropriate points and collaborating on the creation of the criteria. This brings me on to my second concern with fully defining acceptance criteria for backlog stories: that the practice contradicts the principles of lean development.

Eliminating waste

The first of the seven principles of lean software development, as defined by Tom and Mary Poppendieck, is to eliminate waste. Waste is anything that does not add value, for example work that is done unnecessarily or in excess of what is required. Given that there is no guarantee that every story sized on the backlog will be completed, putting the effort into establishing acceptance criteria for all of them is potentially wasteful. Lean principles originated in manufacturing, and one of the sources of waste identified in that context is "inventory". Whilst I have many concerns with comparing software development to manufacturing, I think that considering detailed acceptance criteria on a backlog story as inventory - an intermediate work item that doesn't add value and may never get used - does highlight the wasteful nature of this practice. Investing time in creating documentation that we are uncertain we will ever use is a waste which we should avoid if we can defer that cost until we're more confident that it will add value.

It is also wasteful to have more people than needed in conversations. If the entire team is present to create a full set of acceptance criteria for every story as part of backlog refinement or sprint planning, then this could constitute a lot of waste, as only three people are required for these conversations: the PO and the developer and tester on the story, or the 3-Amigos. While the whole team may be required for sprint planning, I don't believe that the whole team is required to elaborate each story in more detail and define the acceptance criteria. Having them do so presents a significant cost that could be avoided.

Waste also occurs in revisiting previous work. When we do progress a user story, I would expect diligent teams to want to revisit any pre-existing acceptance criteria with the Product Owner at that point to confirm that they are still valid and whether any additions or alterations are required. This again introduces an element of waste in revisiting prior conversations if acceptance criteria have been created in their entirety in advance - if no changes are required then we've wasted time revisiting, and if there are then we wasted time initially in doing the work that subsequently had to be changed. This neatly brings me on to my next argument against adding acceptance criteria to backlog stories.

Deferring decisions

The Lean principles of software development, as defined by the Poppendiecks, also advocate deferring commitments until we need to make them - the "last responsible moment". There are good reasons for this. Generally, the later we can make a decision, the more relevant information we have to hand and the better educated that decision is.

Whilst user stories in a backlog should be independent, there will inevitably be some interdependence. For example, in a database development we might have stories for supporting user-defined views, and others around table-level security. Depending on which feature was developed first, I'd certainly want to include criteria to test compatibility with that first feature when the second was delivered. If I were writing the acceptance criteria at the point of picking up the stories, this would be easily factored in; however, if writing criteria for a wealth of backlog items up front, we're not in a position to know which will be delivered first. We would therefore need to introduce conditional elements into the criteria which would have to be revisited later to cater for the status of the other stories.

Not all such decisions will be as visible in advance. There will also be a process of learning around the development of a piece of software which may influence the criteria we place on the success of later stories. By establishing the details of what is acceptable for a story long before the story is picked up and worked on, we're embedding decisions into the backlog long before we need to make them, decisions which may prove to be erroneous at a later stage depending on the discoveries that we make in the interim. I've seen situations where the lessons learned from testing certain features massively influenced the criteria we placed on others. For criteria embedded in the backlog this could involve significant rewriting should that occur.

Ensuring a conversation

Historically I've never found the need to add acceptance criteria to backlog items. I've always sized stories and estimated backlog items based on the story title and a conversation with the relevant individuals. These individuals, as a group, should have the expertise to understand the value to the user, and the complexity of that item relative to similar items that we've previously developed. Will the sizing be as 'accurate' as if we'd created detailed acceptance criteria? Possibly not, however I'd suggest that the difference is negligible and the benefits minimal compared to the potential wastes that I've described.

So when would I add acceptance criteria to a story? My personal preference is to add acceptance criteria at the point at which the story is actually picked up to be worked on. This is the point at which the story changes from being a placeholder on a backlog, used for sizing and planning, into an active piece of work which requires elaboration before it is progressed. At this point we also know which developers and testers will be working on the story and can have a more focused conversation between those individuals and the product owner. I describe in this post a structure that I've used at the point of starting work on a story to identify the acceptance criteria and ensure the individuals working on that piece have a good understanding of the value they are trying to deliver.

If a user story is a placeholder for a conversation then, for me, this is the initiation of that conversation. This is where we reach mutual consensus on the finer details of the user story and establish an appropriate set of test criteria as a framework under which we can assess the delivery. At the same time we can establish what the team feel are the relevant risks that the work presents to the product, and any assumptions in our understanding that require confirmation by the Product Owner. Creating the acceptance criteria collaboratively at this point ensures that the memories of that conversation are still fresh in the minds of the individuals as they deliver the work. The conversation is the true mechanism by which we convey understanding and agree on what needs to be delivered, and the acceptance criteria are rightly mere reminders of it. Isn't that so much better than using them as the primary means of communicating this information?

References

image: https://www.flickr.com/photos/bean/3359500357

Sunday, 21 February 2016

Tester, PO, Armadillo

This month I started work in a new organisation for the first time in 9 years. Whilst it is inevitably daunting stepping into a new company after that time, even starting on a fixed-term contract basis as I am, I'm predominantly feeling excited about the prospect of working in a new environment. I'm taking the opportunity to step into a full-time Agile Product Owner position. Whilst this could be perceived as a significant change from the job title that I'm coming from, the fact is that Product Owner is a role that I've been doing for some years under the mantle of my existing Testing/Support title.

A Rose by Any Other Name

When I wrote the last post of my "Survival of the Fit-Tester" series back in 2012 I highlighted how I believed testers in agile teams can provide a natural extension to the Product Owner (PO) role. I've seen for a long time a pattern of testers crossing the boundaries between testing, business analysis and Product Ownership.

As my role at RainStor developed to incorporate the customer interface disciplines of documentation and support, and in the absence of devoted POs in the teams, I found myself taking on more and more of the responsibilities traditionally associated with the PO. I was involved in organising the backlog, working with the development team to establish implementation approaches, breaking epics into stories and working with the team in elaboration meetings to establish acceptance criteria for the work they were delivering. As the development team grew into multiple scrum teams, so again a member of my testing team stepped in to coordinate the backlog and work closely with the developers on deciding the work for the sprints.

I'm not the only tester to have blurred the boundaries as their role has developed, or who has at some point taken on an explicit Product Owner role. In my immediate circle of acquaintances I know that Rob Lambert and Alex Schladebeck have also transitioned from being influential testers to having a product ownership role at some point in their careers. I don't personally view this as a progression, as I'm sure they don't either, but more as a natural expansion and utilisation of the skill sets and experience of capable individuals in making key business decisions. Being an expert in identifying relevant information to inform business decisions by definition implies an excellent understanding of what those decisions are. This will involve gaining an understanding of what users are trying to achieve, what they value and what will undermine their perception of quality. The separation between that level of knowledge and actually being the decision maker is a thin one indeed.

A hidden choice

In self-managing agile teams the testing and Product Owner roles are both critical, to the extent that concepts such as the 3-Amigos have arisen to stress the need for these roles to be present in important conversations. The investment in both full-time testers and Product Owners for every scrum team is one that is daunting to many companies. What is happening in response is that some companies are choosing between one or the other, whilst possibly being unaware that this is the choice they are making.

In some places, as was the case at my previous employer RainStor, the focus is on the testing, with a strong investment in testers in every team, but an absence of devoted POs. This can result in the need for experienced testers to step in on the day to day PO responsibilities. This situation was not unique to my team. As Priyanka Hasija observes in her InfoQ article on her experiences as an Agile tester

"QAs can also help the Product Owner (PO) write acceptance test cases by playing the role of proxy product owner. Having QA fill in as a proxy for the Product Owner when they are not available helps keep the team moving forward at all times.".

It is natural that, when the level of commitment to the PO role is lower than the team requires, a pattern evolves whereby the tester fills the gap as a proxy for them.

In my latest engagement at River Agency the individual product complexity is lower, but the range of products and the level of bespoke customer work is higher. This has resulted in a priority to focus on having a devoted team of Product Owners, with responsibility for testing shared between these and the developers. Just last week I was at a South West Test meetup where Anna Baik was recounting her experience of a company with a similar approach, that she wryly described as 'secret testers', where the job description of the Product Owners included most of the responsibilities that one might more typically expect to be associated with testers. A brief search on the subject of Product Owners and testing responsibilities yields discussions prompted by similar situations, such as this, and this one, that indicate that this pattern is not unique to these limited examples. There are many teams out there foregoing formal testing roles and instead placing high expectations on their Product Owners to test stories.

Whilst my preference will always be for focused testers and a devoted PO, I can see the reasoning behind both of these approaches. In both cases the outcome is that the responsibilities of one role are naturally picked up by the other to fill the needs of the team. What concerns me about both patterns is that each of these is a skilled role. Expecting someone in one role to simply 'pick up' the responsibilities of the other is optimistic. If, for example, the PO role carries an implicit expectation of testing then I'd expect the people in that role to treat this as a formal requirement and skill up accordingly, just as I'd expect developers to develop their testing skills in the absence of devoted testers in a team.

Where do product owners hang out?

There doesn't seem to be a widespread community for Product Owners. There are a few product manager groups, however the difference in results between a Google search on Product Owner conferences as opposed to testing ones is dramatic. Testing is a vibrant community with a wealth of conferences and groups to support it. Given the absence of a strong product owner community, the testing community does seem to act as a second home for a lot of the material and discussion that is geared towards product ownership. Gojko Adzic is perhaps the best example of a highly influential author whose material on challenging requirements and 'Bridging the Communication Gap' between customers and development teams is highly suited to the PO role, yet he targets a lot of his work at the testing community.

Paul Gerrard has also for some time been including elements of requirements management and user story design into his UKTMF group, and he has clearly stated that he sees a future shift to incorporate the definition and testing of requirements as the responsibility of a single group within the organisation.

"why for all these years has one team captured requirements and another team planned the test to demonstrate they are met. Make one (possibly hybrid) group responsible."

Armadillos

The luxury of specialisation to the exclusion of other responsibilities is becoming a rarity, and the need to take on multiple 'hats' is increasingly an expectation when working in flexible and self-organising teams. It is clear that testers and POs have a lot in common in this regard: a common need to understand customer value, a common interest in information that furnishes decisions on quality, a common desire to assess the solution relative to the problem domain. The blurring of the boundaries between the two roles is somewhat inevitable.

So what is the difference between a tester who takes on PO responsibilities and a PO who tests? There will certainly be differences in the emphasis of responsibility and skill. Devoted testers will always be more focused on extracting information from the software, POs on prioritisation and delivering to the customer at an understood level of risk, however I can see a future where this gap closes even further. In our 'square shaped team' individuals will be hired to fill out the matrix of skills required to deliver the team's goals. Testers will need to develop skills in decision making and customer communication when the PO role is absent, and POs are likely to need to develop testing skills to allow them to obtain the information they need to inform those decisions.

One of Kipling's famous Just So Stories involves a hedgehog and a tortoise trying to protect themselves from a jaguar who has learned to overcome the defences of each animal. The hedgehog learns to swim and flatten his spines to create a shell more like a tortoise's, whilst the tortoise learns to curl up more like a hedgehog, with the result that both end up as armadillos.

Based on my recent experiences, it's possible that testers and Product Owners might end up in a very similar position.

Images : https://www.flickr.com/photos/myfwcmedia/12988547315, http://www.mainlesson.com/books/kipling/just/zpage117.gif
