Monday, 23 May 2016

Rivers and Streams

After a productive start to the year in terms of writing, I realise that I didn't manage to post anything in April. Looking back I can identify four reasons for this:-

  1. The first is that I was working on an article on Agile test strategy for Testing Circus magazine. If you haven't done so already, please take a look at the April edition of Testing Circus - my article is in there alongside many interesting articles and interviews.
  2. The second is that I was responding to a request for an interview from the folks at the A1QA blog. I hadn't done an interview before, but having moved on from a long-term role earlier in the year, and in the optimistic glow of that change, I said yes when asked this time. My interview hasn't appeared at the time of writing this post but I hope that it will be there soon. In the meantime there are some great interviews with testing experts to read.
  3. I was getting ready to turn 40 - that's out of the way now; back to business, nothing to see here.
  4. I had been at my current engagement just long enough to get really busy

In relation to this last one, I now know that the lead time on getting really busy in a new role is about two months. Up until that point, your inexperience of the context and the isolation of your activities from the day-to-day work are sufficient to keep your inbox blissfully quiet. Once those two months are up, people start associating your presence with actual useful activity, the honeymoon period is over, and you're busy.

The work that I've been doing has been setting up Product Owner and Testing functions at a company called River. The name River is an interesting one for a company involved in software development, as rivers have some striking parallels with work - particularly in how the nature of our activity affects our productivity. Anyone with a strong dislike of analogy should probably stop reading now.

The fast flowing river?

If I asked you to think of a fast flowing river you may be inclined to think of something like this

The rushing white water is clearly moving very quickly as it cascades over the rocks. The interesting thing is that, whilst this river might look fast, a huge amount of the energy is not going into moving the water downstream. There are eddies and currents, and energy is being used to move the water in many directions that aren't always downstream. The net result is that, despite the steep gradient it is moving down, the actual average velocity of the water downstream is slow.

What I've found, both in my own work and in the teams that I've led, is that in a similar way it is possible to expend a huge amount of energy being busy and filling the working day without making much progress towards your goals. An apparently busy individual or team can actually be making slow progress if much of that activity is directed at overcoming friction. I'm not talking about the friction of ill-feeling between employees here. What I'm talking about is the friction of energy expended on activities that unnecessarily distract from progressing work to the benefit of the organisation. Whilst organisational friction comes in too many forms to consider here, a lot of the friction that can massively impact velocity is relatively straightforward in nature and easily avoided. Some 'velocity-killing' sources of friction that I've encountered are both obvious and easy to reduce, yet still affect many companies and teams:

  • regular disruptions or distractions
    A good collaborative working environment is a boost to motivation and productivity, however too many distractions reduce the ability of individuals to take the time to focus on their work. Likewise regular ad-hoc requests coming into a team can disrupt the flow of work repeatedly, causing friction through context switching between tasks.

  • cutting corners
    Nothing reduces velocity like, say, bad code, and nothing generates bad code like cutting corners. I've seen a number of occasions where, to save time, the decision has been made to push ahead with a product that hadn't been developed with any thought to maintainability or sensible decoupling/separation of concerns. The inevitable result is that any team who has to support, maintain or extend that product later gets slower and slower as they struggle to understand the structure of the code and how to safely change it. As a general rule, unless what you are working on is a temporary, one-time activity, any corner that you cut now will result in significantly harder work at some time in the future.

  • Handing over poor quality or incomplete work
    I can remember James Lyndsay teaching me once that Exploratory Testing was not an appropriate approach if the software was too poor in quality. My immediate thought was that this would be an ideal time to explore as you would find loads of bugs - however on reflection his point was sound. Finding bugs is time consuming and disruptive - as Michael Bolton so clearly explains in this post - so making any kind of testing progress through a system that is awash with problems is going to be an exercise in being very busy while making little progress, and in generating a lot of friction between developer and tester as the issues are found and fixed. The developer-tester interface is just one place where pushing work before it's ready causes friction and slows everyone down. Similarly the Product Owner to developer interface suffers if the PO prioritises work into the team that is not ready. As I wrote in "A Replace-Holder for a Conversation" I'm a great believer in adding acceptance criteria to the story at the point that it's picked up. On a few occasions, however, I have received/delivered stories into a sprint where, on elaboration, it became clear that we didn't have a clear enough idea of the value being delivered, and the result was delay or, worse, misdirected effort.

  • Repeating the same conversations
    This one is a real progress killer. After working on a product for some time, there are stock conversations around that product that certain individuals will revert to given any significant time spent discussing features or architecture. No matter what you are trying to achieve, apparently it would all have been a lot easier if we'd re-architected the whole system the way that someone wanted to a number of years ago, or if we'd just gone back to having test phases after development had completed. Poring over decisions that were made years ago can't help now, and reverting to conversations of this nature in every meeting massively increases the friction of the process. Inevitably someone will at some point have to make a pragmatic decision that does not involve re-architecting the entire product stack, or reverting back to waterfall, and thirty minutes of everyone's time will have been lost to unnecessary conversation.

The slow flowing river?

By contrast, if I asked you to think of a slow flowing river you might conjure up images like this

We may think of such rivers in terms such as 'lazy' and 'placid', however the fact is that the average downstream velocity of the water in such a river is actually greater than that of an apparently fast mountain stream. This efficiency comes about through the massively reduced friction on the water travelling downstream. Despite the fact that the gradient is less, the efficiency gained means that overall velocity is faster, and the fastest point is the centre of the river, where the water is furthest from the friction of the sides.

It seems sensible to strive to minimise friction, yet we know that simply handing development teams a set of requirements and leaving them alone for 6 months is not an appropriate solution here. How do we balance the potentially conflicting goals of minimising friction whilst still collaborating, providing feedback and generally communicating regularly? Here are some ideas:-

  • Don't distract people if they're trying to concentrate
    If you want a social chat - see who is in the kitchen. If you have an issue, ask yourself if it is essential that you speak to the person right then, or could it wait? I'm personally quite bad at distracting people immediately with issues that could probably wait, something that I've become more aware of through writing this post. I don't want to imply here that teams should work without conversation - a healthy team dynamic with strong social bonds is really important - however if one person is distracting another who is trying to focus then that not only introduces friction but can also erode team goodwill, as people will start to resent the distractions.

  • Keep it to yourself
    If every time you find a bug, or an email annoys you, you announce it to everyone within earshot, then chances are you're the workplace friction equivalent of a boulder in the river. Keep it to yourself until a natural break occurs and you can catch up with people then. I've seen whole teams slowed dramatically by the fact that an individual would announce each small thing that annoyed them throughout the day.

  • Avoid ad-hoc tasks
    If you have a task that might mean a context switch for someone, e.g. a bug to investigate or an emergent task, wait for them to finish what they are working on, or raise the need to do it in a stand-up so that the team can plan it in. Managers are notoriously guilty here of wandering in and disrupting teams with ad-hoc requests that derail them from their scheduled activities - having a way of capturing/tracking ad-hoc requests is a useful tool for teams who suffer heavily here. Again I like to capture this information on a team whiteboard and clearly mark any impacted tasks as 'Paused' whilst the ad-hoc task is addressed.

  • Have timely conversations to confirm assumptions.
    Identifying necessary conversations in a stand-up meeting and planning them into the day is a good way of avoiding blocks on progress whilst reducing the friction of trying to get people together ad-hoc. I like to see important conversations either booked in calendars or at least identified and written up e.g. on a scrum board so that everyone is aware of when they need to catch up, thereby avoiding the friction of people walking round the office trying to round up the necessary individuals, like children on a playground looking for volunteers to play "army" (the ultimate demonstration of friction in that it typically lasted the entire lunch-break and never actually left any time for playing army - see number 8 here).

  • Don't rush jobs that others will pick up.
    Maintain your standards and defer work if necessary. You being busy is not an excuse to multiply the effort required of the person who has to pick up your stuff, whether that be poorly understood stories or badly tested features. No-one is going to thank you for 'getting the job done' if the result is someone else having to do twice as much.

  • Take a break
    One of the most interesting findings of this study referenced in this post, aside from the fact that someone thought that watching Netflix and drinking vodka constituted work, is that taking regular breaks actually improves productivity.

Small changes yield big benefits

A colleague of mine was explaining to me this week how, off the back of reading Jeff Patton's "User Story Mapping", he was looking differently not only at his work but also at his own activities. He'd really taken on board the idea of reducing friction across his life, to the point of correlating his visits to the gym with the length of his commute so that he didn't waste time in busier traffic, while on other days he could reduce the round-trip time by going in earlier. By looking at the friction of everyday activities there are huge gains to be made in productivity.

Having worked with teams that did have predictable, organised processes, where we had minimised friction as much as possible, I have found that it is easier to both

  • Identify and isolate the impact of higher level decisions
  • Work through issues and still maintain a stable velocity and predictable level of rigour
when you have a smoothly running team or department. Sometimes it's so easy to attribute problems to higher-level organisational issues and major decisions that we overlook simple changes closer to home which could yield huge benefits. No company, individual or organisation gets it right every time, but by reducing the friction in our own individual and team activities we can ensure that we're not the limiting factor in achieving our collective goals.

Monday, 14 March 2016

Minimum Viable Automation

Things don't always work out as we planned.

For anyone working in IT I imagine that this is one of the least ground-breaking statements they could read. Yet it is so easy to forget, and we still find ourselves caught out when our carefully laid plans start to struggle in the face of grim reality.

The value of "Responding to change over following a plan" is one of the strongest and most important in the Agile Manifesto. It is one that runs most contrary to the approaches that went before and really defines the concept of agility. Often the difference between success and failure depends upon how we adapt to situations as they develop, and having that adaptability baked into the principles of our approach to software development is invaluable. The practices that have grown to be synonymous with agility - sprint iterations, prioritisation of work for each iteration, backlog grooming to constantly review changing priorities and the delivery of small pieces of working software - all help us to accept change and to minimise its disruption.

Being responsive to change, however, is not just for our product requirements. Even when striving to accept change in requirements, we can still suffer if we impose inflexibility in the development of our testing infrastructure and supporting tools. Test automation in particular is an area where it is important to mirror the flexibility that is adopted in developing the software. The principle to "harness change for our customers' competitive advantage" is as important in automation as it is in product development, if we're to avoid hitting problems when the software changes under our feet.

Not Set in Stone

A situation that I encountered earlier this year whilst working at RainStor provided a great demonstration of how not taking a flexible approach to test automation can lead to trouble.

My team had recently implemented a new API to support some new product capabilities. As part of delivering the initial stories there was a lot of work involved in supporting the new interface. The early stories were potentially simple to test, involving fixed-format JSON strings being returned by the API. This led me to suggest whole-string comparison as a valid approach to deliver an initial test automation structure that was good enough to test these initial stories. We had similar structures elsewhere in the automation, whereby whole strings returned from a process were compared after masking variable content, and therefore the implementation was much simpler to deliver than a richer parsing approach. This would provide a solution suitable for adding checks covering the scope of the initial work, allowing us to focus on the important back-end installation and configuration of the new service. The approach worked well for the early stories, and put us in a good starting position for building further stories.
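
As a rough illustration of the kind of masked whole-string check described above (the masking patterns and function names here are my own invention, not the actual fixture code), the approach amounts to little more than this:

    import re

    # Illustrative sketch only - the masking patterns and names are assumptions,
    # not the real fixture code.
    MASKS = [
        (re.compile(r'"timestamp":\s*"[^"]*"'), '"timestamp": "<MASKED>"'),
        (re.compile(r'"requestId":\s*\d+'), '"requestId": <MASKED>'),
    ]

    def mask_variable_content(response_text):
        """Replace values that legitimately vary between runs with fixed tokens."""
        for pattern, replacement in MASKS:
            response_text = pattern.sub(replacement, response_text)
        return response_text

    def check_whole_response(actual, expected):
        """Compare the full response string against a stored expectation after masking."""
        return mask_variable_content(actual) == mask_variable_content(expected)

The appeal is obvious: one generic check covers any fixed-format response, and a new check is just a new expected string. The weakness, as the next sprint showed, is that the comparison is only as stable as the format of the whole string.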

Over the course of the next sprint, however, the interface increased in complexity, with the result that the strings returned became richer and more variable in ordering and content. This made a full string comparison approach far less effective, as the string format was unpredictable. On returning to the office from some time away it was clear that the team working on the automation had started to struggle in my absence. The individuals working on the stories were putting in a valiant effort to support the changing demands, but the approach I'd originally suggested was clearly no longer the most appropriate one. What was also apparent was that the team were treating my original recommendation as being "set in stone" - an inflexible plan - and were therefore not considering whether an alternative approach might be more suitable.

What went wrong and why?

This was a valuable lesson for me. I had not been clear enough that my initial idea was just that: a suitable starting point, susceptible to change in the face of new behaviours. In hindsight I'd clearly given the impression that I'd established a concrete plan for automation, and the team were attempting to work to it even in the face of difficulty. In fact my intention had been to deliver some value early and provide a 'good enough' automation platform. There was absolutely no reason why this should not be altered, augmented or refactored as appropriate as the interface developed.

To tackle the situation I collaborated with one of our developers to provide a simple JSON string parser in Python, allowing tests to pick specific values from the string against which a range comparison or other checks could be applied. I typically learn new technologies faster when presented with a starting point than when researching from scratch, so, as I didn't know Python when I started, working with a developer to deliver a simple starting point saved me hours here. Within a couple of hours we had a new test fixture which provided a more surgical means of parsing, value retrieval and range comparison from the strings. The original mechanism was still necessary for the API interaction and string retrieval. This new fixture was very quickly adopted into the existing automated tests and the stories delivered successfully, thanks to an impressive effort by the team to turn them around.
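
To give a feel for the shape of that fixture - a sketch under assumptions rather than the code we actually wrote, with an illustrative path syntax and made-up function names - the parsing-and-range-check idea looks something like this:

    import json
    from functools import reduce

    def get_value(json_string, path):
        """Extract a single value from a JSON response using a dotted path,
        e.g. 'results.0.rowCount' picks results[0]['rowCount']."""
        data = json.loads(json_string)
        def step(node, key):
            return node[int(key)] if isinstance(node, list) else node[key]
        return reduce(step, path.split('.'), data)

    def check_in_range(json_string, path, minimum, maximum):
        """Check that a numeric value in the response falls within an expected range."""
        value = float(get_value(json_string, path))
        return minimum <= value <= maximum

    # A test can now target one value rather than the whole string, e.g.
    # assert check_in_range(response_body, "stats.elapsedMs", 0, 5000)

The point is not the code itself but the shift in granularity: checks now target the specific values we care about, so reordering or enrichment of the rest of the response no longer breaks them.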

Familiar terms, newly applied

We have a wealth of principles and terms to describe the process of developing through a series of incremental stages of valuable working software. This manner of delivery can place pressure on automation development. Testers working in agile teams are exposed to working software to test far earlier than in longer-cycle approaches, where a test automation platform could be planned and implemented whilst waiting for the first testing stage to start.

Just as we can't expect to deliver the complete finished product out of the first sprint, we shouldn't necessarily expect to deliver the complete automation solution to support an entire new development in the first sprint either. The trick is to ensure that we prioritise appropriately to provide the minimum capabilities necessary at the time they are needed. We can apply some familiar concepts to the process of automation tool development that help to better support a growing product.

  • Vertical Slicing - In my example the initial automation was intended as a Vertical Slice. It was a starting point to allow us to deliver testing of the initial story and prepare us for further development - to be built upon to support subsequent requirements. As with any incremental process, if at some point during automation development it is discovered that the solution needs restructuring/refactoring or even rewriting in order to support the next stage of development, then this is what needs to be done.
  • The Hamburger Method - For anyone not familiar with the idea, Gojko Adzic does a good job of summarising the "Hamburger Method" here; it is a neat way of approaching incremental delivery/vertical slicing that is particularly applicable to automation. In the approach we consider the layers or steps of a process as layers within the hamburger. Although we require a complete hamburger to deliver value, not every layer needs to be complete, and each layer represents incremental value which can be progressively developed to extend the maturity and capability of the product. Taking the example above, clearly the basic string comparison was not the most complete solution to the testing problem. It did, however, deliver sufficient value to complete the first sprint.



    We then needed to review and build out further capabilities to meet the extended needs of the API. This didn't just include proper parsing of the JSON but also parameterised string retrieval, security configuration on the back end and the ability to POST to as well as GET from the API.

  • Minimum Viable Product - The concept of a minimum viable product is one that allows a process of learning from the customers. I like this definition from Eric Ries

    "the minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort."
    It's a concept that doesn't obviously map to automation development, however when we develop test architectures the people who need to use them are essentially our customers. When developing automation capabilities, considering an MVP is a valid step. Sometimes, as in my situation above, we don't always know how we're going to want to test something. It is only in having an initial automation framework that we can elicit learning and make decisions about the best approach to take. By delivering the simple string comparison we learned more about how to test the interface and used this knowledge to build our capability in the Python module, which grew over time to include more complex features such as different comparison methods and indexing structures to retrieve values from the strings.

    In another example, where we recently implemented a web interface (rather fortunate timing after an 8-year hiatus, given my later departure and subsequent role in a primarily web-based organisation), our "Minimum Viable Automation" was a simple Selenium WebDriver page object structure driven through JUnit tests of the initial pages - the sketch below gives a flavour of the page object idea. Once we'd established that, we learned how we wanted to test, and went on in subsequent sprints to integrate a Cucumber layer on top, having decided that it was the best mechanism for us given the interface and the nature of further stories.
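
Our actual implementation was Java and JUnit; purely as an illustration of the page object pattern (in Python, to keep the examples in this post in one language, and with an invented page, URL and locators), a minimal version looks something like this:

    # Minimal page object sketch - the URL, locators and page name are invented
    # for illustration; the real tests were written in Java/JUnit.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Wraps the login page so tests read as user actions, not locators."""

        def __init__(self, driver, base_url):
            self.driver = driver
            self.base_url = base_url

        def open(self):
            self.driver.get(self.base_url + "/login")
            return self

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()
            return self

        def error_message(self):
            return self.driver.find_element(By.CSS_SELECTOR, ".error").text

    # A test then reads as a short user journey:
    # page = LoginPage(webdriver.Firefox(), "https://example.test").open()
    # page.log_in("tester", "wrong-password")
    # assert "Invalid" in page.error_message()

Starting with a thin structure like this leaves room to add layers later - in our case the Cucumber layer in subsequent sprints - rather than committing to a full framework up front.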

Being Prepared

Whilst accepting change is a core concept in agile approaches, interestingly not all organisations or teams working with agile methods apply the same mindset to their supporting structures and tools. Having autonomous, self-managing teams can be a daunting prospect for companies more accustomed to a command-and-control structure and high levels of advance planning. The temptation in this situation is to try to bring as many of the supporting tools and processes as possible into a centralised group to maintain control and provide the illusion of efficiency. This is an understandable inclination, and there are certainly areas of supporting infrastructure that benefit from being managed in one place, such as hardware/cloud provisioning. I don't believe that this is the correct approach for test automation. Attempting to solve the problems of different teams with a single central solution takes autonomy away from teams by removing their ability to solve their own problems and find out what works best on their products. This is a big efficiency drain on capable teams. In the example above, our ability to alter our approach was instrumental in successfully recovering when the original approach had faltered.

My personal approach to tool creation/use is simple - focus on preparation over planning, and put in place what you need as you go. Planning implies knowing, or thinking that you know, what will happen in advance. Planning is essential in many disciplines, however in an unpredictable work environment placing too much emphasis on planning can be dangerous. Unexpected situations undermine plans and leave folks confused about how to progress. Preparation is about being in a position to adapt effectively to changing situations. It is about having the right knowledge and skills to respond to change and adapt accordingly, for example through having a 'Square-Shaped Team' with a breadth and depth of relevant skills to take on emergent tasks. By all means centralise a body of knowledge on a range of tools to assist teams in their selection of tools or techniques, but leave the teams to decide which is best for them. In the web testing example above, whilst we'd not tested a web interface before, by maintaining up-to-date in-house knowledge of the technologies and approaches that others were using we were still well prepared to start creating a suitable test architecture when the need arose.

By building a team with a range of knowledge and capabilities we are preparing ourselves for the tasks ahead. Using approaches like vertical slicing, the hamburger method and minimum viable product in the creation of our tools allows us to apply that preparation in a manner appropriate for the fluidity of agile product development. Or to put it more simply, by accepting that change happens, and by using tried and tested approaches to incrementally deliver value in the face of change, we can ensure that we are prepared when it comes.

Wednesday, 2 March 2016

A Replace-holder for a Conversation

Acceptance criteria in Agile user stories are funny things when you think about it. On one hand we describe our stories as 'placeholders for a conversation' and strive to minimise the documentation that we pass around the organisation in favour of face-to-face communication. On the other we insist on having a set of clear, unambiguous, testable statements written up against each of these stories that we can use to define whether the story is 'done' or not. This could be viewed as something of a contradiction. Whether it is or not, I believe, depends on how, and more importantly when, those criteria are created.

There is no shortage of material on Agile software development sites suggesting that a complete set of acceptance criteria should be in place before any attempt is made to size stories or plan sprints. This article from the Scrum Alliance, for example, states

"I can't overemphasize just how crucial it is to have proper acceptance criteria in place before user stories are relatively sized using story points!"

The comments that follow the article provide some interesting perspectives on the subject, both from other Agile practitioners and the author himself, who adds a significant caveat to the earlier statement

"Teams that have been together a long time can look at a new story and immediately see that it's relative to another story without the need for acceptance criteria"

Another article, here on backlog refinement, recommends a "definition of ready" that includes, amongst other things

The story should be testable (that means acceptance criteria is defined).

I'd disagree with this sentiment. I don't personally adhere to the approach of adding acceptance criteria to stories on the product backlog. I perceive a number of issues around adding acceptance criteria before the story is sized which I think can undermine core agile and lean principles.

A replaceholder for a conversation

My primary concern is that once acceptance criteria are added to a story it ceases to be a "placeholder for a conversation". The product owner, in order to get stories to meet the 'definition of ready', can be pressured to add the criteria to the story themselves, outside of any conversation with the other key roles in the team. There is then potential for developers to pick up and deliver the story based solely on these acceptance criteria, without at any point needing to discuss the story directly with the Product Owner or tester. There is a natural tendency when working to take the path of least resistance, and if a set of written criteria exists on the story that the developer can use to progress their work without having to talk to anyone else, then many will be inclined to do so.

This problem is apparent in both directions. There is also an inevitable temptation for the PO to treat the creation of acceptance criteria as the means of communicating what is needed to the team. This could be seen as a means to 'make sure' the team do exactly what the PO wants, whilst reducing the personal drag that comes from having to have conversations with them. This undermines the primary value of the criteria, in my opinion, which is to capture the consensus reached between the PO and the team on the scope and value of that piece of work.

The purpose of the user story is not to define exactly what we want or how we want the software developed, but to strive to capture the essence of who it is for and why they want it. In the situation where the PO creates the criteria away from the team then essentially all that we have done is to relegate acceptance criteria to becoming a new flavour of up front written specification. In doing so we embed in them all of the risks that are inherent in such artefacts, such as inconsistent assumptions and inconsistent interpretation of ambiguous terms.

We also run the risk of contradicting at least one of the agile principles -

"The most efficient and effective method of conveying information to and within a development team is face-to-face conversation."

I'm sure you will have noticed that I've used terms such as "risk" and "tendency" liberally here. A well disciplined team can avoid the pitfalls I've highlighted by insisting on conversations at appropriate points and collaborating on the creation of the criteria. This brings me on to my second concern with fully defining acceptance criteria for backlog stories, that this practice contradicts the principles of lean development.

Eliminating waste

The first of the seven principles of lean software development, as defined by Tom and Mary Poppendieck, is to eliminate waste. Waste is anything that does not add value, for example work that is done unnecessarily or in excess of what is required. Given that there is no guarantee that every story that is sized on the backlog will be completed, putting effort into establishing acceptance criteria for all of them is potentially wasteful. Lean principles originated in manufacturing, and one of the sources of waste identified in that context is "inventory". Whilst I do have many concerns with comparing software development to manufacturing, I think that considering detailed acceptance criteria on a backlog story as inventory - an intermediate work item that doesn't add value and may never get used - does highlight the wasteful nature of this practice. Investing time in creating documentation that we are uncertain we will ever use is a waste which we should avoid if we can defer that cost until such time as we're more confident that it will add value.

It is also wasteful to have more people than needed in conversations. If the entire team is present when creating a full set of acceptance criteria for every story as part of backlog refinement or sprint planning, then this could constitute a lot of waste, as only three people are required for these conversations: the PO plus the developer and tester on the story - the '3 Amigos'. While the whole team may be required for sprint planning, I don't believe that the whole team is required to elaborate each story in more detail and define the acceptance criteria. Having them do so presents a significant cost that could be avoided.

Waste also occurs in revisiting previous work. When we do progress a user story, I would expect diligent teams to want to revisit any pre-existing acceptance criteria with the Product Owner at that point to confirm that they are still valid and whether any additions or alterations are required. This again introduces an element of waste in revisiting prior conversations if acceptance criteria have been created in their entirety in advance - if no changes are required then we've wasted time revisiting, and if there are then we wasted time initially in doing the work that subsequently had to be changed. This neatly brings me on to my next argument against adding acceptance criteria to backlog stories.

Deferring decisions

The lean principles of software development, as defined by the Poppendiecks, also advocate deferring commitments until we need to make them - the "last responsible moment". There are good reasons for this. Generally, the later we can make a decision, the more relevant information we have to hand and the better educated that decision is.

Whilst user stories in a backlog should be independent, there will inevitably be some interdependence. For example, in a database development we might have stories supporting user-defined views, and others around table-level security principles. Depending on which feature was developed first, I'd certainly want to include criteria to test compatibility with that first feature when the second was delivered. If I were writing the acceptance criteria at the point of picking up the stories, this would be easily factored in; however, if writing criteria for a wealth of backlog items up front, we're not in a position to know which would be delivered first. We would therefore need to introduce conditional elements into the criteria which would need to be revisited later to cater for the status of the other stories.

Not all such decisions will be as visible in advance. There will also be a process of learning around the development of a piece of software which may influence the criteria we place on the success of later stories. By establishing the details of what constitutes 'acceptable' for a story long before the story is picked up and worked on, we're embedding decisions into the backlog long before we need to make them, decisions which may prove to be erroneous at a later stage depending on the discoveries that we make in the interim. I've seen situations where the lessons we learn from testing certain features massively influence the criteria we place on others. For criteria embedded in the backlog this could mean significant rewriting should that occur.

Ensuring a conversation

Historically I've never found the need to add acceptance criteria to backlog items. I've always sized stories and estimated backlog items based on the story title and a conversation with the relevant individuals. These individuals, as a group, should have the expertise to understand the value to the user, and the relative complexity of that item compared to similar items that we've previously developed. Will the sizing be as 'accurate' as if we'd created detailed acceptance criteria? Possibly not, however I'd suggest that the difference is negligible and the benefit minimal compared to the potential wastes that I've described.

So when would I add acceptance criteria to a story? My personal preference is to add them at the point at which the story is actually picked up to be worked on. This is the point at which the story changes from being a placeholder on a backlog, used for sizing and planning, into an active piece of work which requires elaboration before it is progressed. At this point we also have more information about the developers/testers working on the story and can have a more focused conversation between these individuals and the product owner. I describe in this post a structure that I've used at the point of starting work on a story to identify the acceptance criteria and ensure that the individuals working on that piece have a good understanding of the value they are trying to deliver.

If a user story is a placeholder for a conversation then, for me, this is the initiation of that conversation. This is where we reach mutual consensus on the finer details of the user story and establish an appropriate set of test criteria as a framework under which we can assess the delivery. At the same time we can establish what the team feel are the relevant risks that the work presents to the product, and any assumptions in our understanding that require confirmation by the Product Owner. Creating the acceptance criteria collaboratively at this point ensures that the memories of that conversation are still fresh in the minds of the individuals as they deliver the work. The conversation is the true mechanism by which we convey understanding and agree on what needs to be delivered, and the acceptance criteria are rightly mere reminders of that conversation. Isn't that so much better than using them as the primary means of communicating this information?

References

image: https://www.flickr.com/photos/bean/3359500357
