Monday, 23 April 2018


An Emotional Journey

Of all roles in software development, the Product Owner is the one I find most at risk of positive biases around their software. When creating a product there's a natural tendency to be overly optimistic about the reception it will get from its target user community. I am in the process of developing an innovative new engagement and productivity product in my company, and naturally I am very excited and optimistic about the benefits it will give. As my colleagues and I started out in this endeavour we wanted to make sure we built a consideration of potential negative feelings into our development process - and our UX (User eXperience, for the uninitiated) specialist came up with a great way of doing this...

Getting into UX

One of the things that I've found most rewarding in moving to Web and mobile work after many years of being in a world of APIs and command lines, is learning more about UX. I maintained an interest in UX during my years working on big data, but the absence of significant front end development work left this as a secondary concern to those of accuracy, scale and performance. The last two years working on Web and mobile technology has given me the opportunity to make up some ground on the UX learning curve - a process which has been accelerated thanks to the enthusiasm of our in-house UX team.

When we decided to create a product that attempted to bring together the worlds of data and employee engagement, establishing the right emotional connection with users was paramount to ensuring that the team experience of using the product was empowering rather than intimidating. In chatting with the UX specialist working on the product, who I'll call Phoebe, we discussed the need to identify the emotions that we wanted to promote in using the product. On the flip-side we also talked about the emotions that we did not want to promote, and how useful it would be to identify some of these up front so that we could design with them in mind.

An Emotional Journey

Phoebe had the bright idea of running an 'emotional user journey' workshop to help flush out both positive and negative emotions that could arise at key points through the process of using our product. This was something neither of us had done before, but it seemed like a great fit for what we were trying to achieve.

The starting point was getting the right mix of people in the room. We pulled together a combination of Development and Commercial roles as well as some of the senior client services and Product Ownership people from our successful bespoke engagement and data programmes.

  • Phoebe started by presenting the different user personas that we had created for the product, explaining the personalities, pet hates and goals of each one.
  • She then progressed to map out the primary elements of the flow of product behaviour that we had identified as our core journey.
  • At each stage she placed some leading 'question' cards to make the attendees think about the emotions that the people and teams using the product might feel at that point.
  • Phoebe then split the attendees and invited individuals to consider the journey from either a very positive and optimistic, or a very negative and cynical position.
  • These two sets of individuals added emotions to the journey at the key points - one colour of card for positive emotions and one for negative.
  • At points where the cards were concentrated, we placed further cards to highlight the ideal emotions that we would want to promote to help avoid any potential emotional pitfalls that had emerged.

What was fascinating about the session was that, as the emotional cards were added to the wall, what emerged was a small number of critical 'fulcrum' points which had the possibility of engendering very strong positive emotions, but also risked very negative ones. Some areas that we had assumed would promote a positive response around visibility and openness actually carried a high risk of people feeling exposed, monitored or judged. Additionally, a strong set of potential emotions emerged around the product as a whole and how people might feel if it was put in front of them. What we realised was that, without our perspective on the potential benefits, there was a risk of suspicion around new business technology and its potential for 'big brother' monitoring that we needed to consider and mitigate with our product features and messaging.
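
As an aside for anyone repeating the format: the output of a session like this lends itself to very simple analysis. Below is a minimal sketch, with entirely hypothetical stage names, emotions and thresholds, of how the workshop cards might be tallied to surface those 'fulcrum' stages that attract both strong positive and strong negative emotion.

```python
from collections import defaultdict

# Each workshop card: (journey stage, emotion, card colour/sentiment).
# All stage names and emotions here are hypothetical illustrations.
cards = [
    ("first login", "curiosity", "positive"),
    ("first login", "suspicion", "negative"),
    ("sharing results", "pride", "positive"),
    ("sharing results", "exposed", "negative"),
    ("sharing results", "judged", "negative"),
    ("viewing team data", "insight", "positive"),
]

# Tally positive and negative cards per journey stage.
counts = defaultdict(lambda: {"positive": 0, "negative": 0})
for stage, _emotion, sentiment in cards:
    counts[stage][sentiment] += 1

# A 'fulcrum' stage is one that attracted both kinds of emotion;
# the thresholds below are arbitrary for the example.
fulcrums = [
    stage for stage, tally in counts.items()
    if tally["positive"] >= 1 and tally["negative"] >= 2
]
print(fulcrums)  # -> ['sharing results']
```

In the real session the 'analysis' was of course done by eye on the wall; the point is only that concentrations of mixed-sentiment cards are what to look for.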

Emotional Take-Aways

The workshop provided some invaluable insights into our target product from the perspective of the people who would be using it. Identifying the main points that carried the greatest emotional risk allowed us to focus on those areas to ensure that they encouraged the responses we were looking for. Through the development of the initial features we tailored our approaches to specifically include steps to encourage open and democratic behaviours and discourage command-and-control, autocratic ones. Our awareness of the general risks around the perception of new products was also empowering in ensuring that we provide the right support and messaging to back up the benefits that our product can provide.

Our company brand and sales and marketing collateral all carry the message that our software and services are for everyone in a company, not just managers. With the help of the insight that came out of the emotional journey session, we're ensuring that this message is reinforced rather than put at risk as we build the product features.

References

We didn't use it in the session, but here's an interesting post by Chris Mears on using Robert Plutchik's emotion wheel to measure emotions through user journeys, which I would consider using if repeating the exercise: https://theuxreview.co.uk/driving-more-valuable-customer-journeys-with-emotion-mapping-part-1/


Monday, 19 March 2018


Disruptive Work

Just over a year ago I instigated a move in my company to stop recording bugs. It was a contentious move; however, it seemed that for the context in which we worked, tracking bugs just didn't fit. I'd planned to write about it at the time but wanted to see the results first. In light of some recent internal discussion in my company in favour of reinstating bugs, now seems like as good a time as any to write about what we do instead.

A scheduling problem

My current company, River, has historically worked on an agency basis. Rather than having development teams devoted specifically to one product or feature area, each team will have their time allocated on a sprint-by-sprint basis to one of the client programmes that we develop and maintain software for.

This presents a number of challenges to effective agile development, probably the most significant being scheduling work. When product teams work in agile sprints they commit to delivering a backlog of work for each sprint. Inevitably, in my experience, things didn't always go as planned and work sometimes needed to be brought into a sprint urgently from another channel, such as a Proof of Concept (POC) or the customer support team. In product teams I've worked on there was always an understanding across the business that these disruptive items would impact the team's ability to tackle its scheduled work and that subsequent sprints would be affected. As all of the sprints related to the same product this was manageable - we may have had to reduce the scope of a long term release deliverable, but for the most part the work could be absorbed as an overhead of the long term development work of the team.

The challenge we faced at River with the agency model was that, for any given sprint for a team, there was a good chance the work would be completely unrelated to what they had done in the previous one. Any items raised through other channels, such as ad-hoc customer requests or support tickets, might come from a different programme from the one that was the subject of the active sprint, and therefore the emergent work could not simply be slotted into the team backlog at the appropriate priority level.

We saw some challenging tensions between undertaking new planned developments and maintaining existing programmes. I'd personally struggled to make progress on two high profile projects due to the impact of disruptions from existing ones, so I was keenly aware of the impact that unplanned work could have. A particular problem that I'd encountered was issue snowballing. As one sprint was disrupted by unplanned work, it would inevitably have a knock-on effect as promised work would be rushed or carried over and impact a later iteration. Timescales on the new programme would get squeezed, resulting in issues on that programme, which would consequently come in as disruptions impacting later sprints.

Being Careful with metrics

Last year across the development group we set about establishing a set of meaningful goals around improving performance. Years of working in software testing have given me a strong mistrust of metrics, and I was keen to avoid false targets when it came to striving to improve the quality of the software. I'd never suggest that counting bugs provided any kind of useful metric to work to; however, I did feel that examining the amount of time spent on bugs across the business could provide us with some useful information on where we could valuably improve, and so I set about investigating the use of bugs across the company.

What I found on examining the backlogs of the various teams was that each team took a different approach to recording bugs. Some teams would capture bugs for issues they found during development; others would restrict their 'bug' categorisation to issues that had been encountered during live use. Some teams went further and raised nearly everything as a user story - yielding such interesting titles as "As a user I don't want the website to throw an error when I press this button".

I and the group involved in looking at useful testing metrics took a first principles approach, taking a step back from what to track on bugs to actually understand the problem that we were trying to solve. Whilst obviously the absence of bugs was a desirable characteristic in our programmes, more important was reducing the amount of disruption encountered within teams relating to software that they weren't working on at the time, whatever the cause, and so we decided that looking at disruptions would give us more value than looking at bugs alone.

The Tip of the Iceberg

What became apparent was that the causes of disruptions went far deeper than just classic 'bugs' or coding flaws. The work that was causing the most disruption included items such as ad-hoc demands for reports from customers that we weren't negotiating strongly on, or changes requested as a result of customers demanding additional scope to that which had been delivered. I would have been personally happy to consider anything that came in this way as a bug, however I'm politically aware enough to know that describing an ad-hoc request from an account director as a 'bug' might be erring on the side of bloody-mindedness.

The approach that we decided to take was to categorise any item that had to be brought into an active sprint (i.e. something that had to impact the current sprint and could not be scheduled into a future sprint for the correct programme) as 'disruptive'. The idea was that we would then track the time teams spent on disruptive items and look to reduce this time by targeting improvements, not just in the coding/testing, but in all of the processes that had an impact here, including prioritisation, product ownership and customer expectation management. A categorisation of 'bug' was insufficient to identify the range of different types of disruptive work that we encountered. We therefore established a new set of categories to better understand where our disruptive backlog items were coming from:

  • Coding flaws
  • Missing behaviour that we hadn't previously identified (but the customer expected)
  • Missing behaviour that we had identified but hadn't done (due potentially to incorrect prioritisation)
  • Missing behaviour that would be expected as standard for a user of that technology
  • Performance not meeting expectation
  • Security vulnerability
  • Problem resulting from customer data

Each item raised in the backlog would default to being a 'normal' backlog item, but anything could be raised as 'disruptive' if a decision was made to bring it into a team and impact a sprint in progress.
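
For illustration only, here is a minimal sketch of how such items might be modelled if you wanted to tally disruptive time by cause in code. The category names mirror the list above, but the fields, titles and hours are hypothetical - in practice we tracked this through our backlog tool rather than anything hand-rolled.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Cause(Enum):
    """Categories of disruptive work, mirroring the list above."""
    CODING_FLAW = auto()
    MISSING_UNIDENTIFIED = auto()   # behaviour we hadn't identified, but the customer expected
    MISSING_DEPRIORITISED = auto()  # identified but not done (e.g. incorrect prioritisation)
    MISSING_STANDARD = auto()       # expected as standard for users of that technology
    PERFORMANCE = auto()
    SECURITY = auto()
    CUSTOMER_DATA = auto()

@dataclass
class BacklogItem:
    title: str
    hours_spent: float
    disruptive: bool = False        # items default to 'normal'
    cause: Optional[Cause] = None   # only set when raised as disruptive

def disruption_by_cause(items):
    """Sum hours spent on disruptive items, grouped by cause."""
    totals = {}
    for item in items:
        if item.disruptive and item.cause is not None:
            totals[item.cause] = totals.get(item.cause, 0.0) + item.hours_spent
    return totals

# Hypothetical usage:
items = [
    BacklogItem("Fix broken export", 6.0, disruptive=True, cause=Cause.CODING_FLAW),
    BacklogItem("Ad-hoc customer report", 4.0, disruptive=True, cause=Cause.MISSING_UNIDENTIFIED),
    BacklogItem("New dashboard widget", 12.0),  # normal planned work
]
print(disruption_by_cause(items))
```

The point of summing time rather than counting items is that one large disruption can matter far more than several trivial ones.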

Did it work?

After over a year of capturing disruptive work instead of bugs we are in a good place to review whether it worked. Typically and frustratingly the answer is both yes and no.

Some of the things that did really help

  • No-one argued any more about whether something was a bug. I have gone over a year in a software company without hearing anyone argue about whether something was a bug or not. I cannot overstate how important I think that has been for relationships and morale.
  • We don't have big backlogs of bugs. The important stuff gets fixed as disruptive; the less important stuff gets backlogged. At my previous company we built up backlogs of hundreds of bugs that we knew would never get fixed as they weren't important enough. Treating all items as backlog items avoids this 'bug stockpiling'.
  • All mistakes are equal. I've always disliked the situation where mistakes in coding are categorised and measured as bugs but mistakes in customer communication that have far bigger impact are just 'backlog'. There is a very murky area prompting some difficult conversations when separating 'bugs' from refactoring/technical debt/enhancements. These conversations are only necessary if you force that distinction.

What didn't work so well

  • People know where they are with bugs. In many cases they are easy to define - many issues are so clearly coding flaws that categorising them as bugs is easy and allows clear conversations between roles, simply thanks to familiarity with bug management.
  • There was still inconsistency. As with bugs, different teams and product owners applied subtly different interpretations of what disruptive items were. Some treated any item that impacted their planned sprint as disruptive, even if it related to the programme that was the subject of that sprint; others only raised "disruptives" if the work related to a different programme (see the sketch after this list).
  • The disruptive category led to a small degree of gaming. Folks started asking for time at short notice to be planned into the subsequent sprint rather than the current one. This was still significantly disruptive to planning and to ensuring profitable work; however, it could be claimed that the items weren't technically "disruptive items" as the time to address them had been booked in our planning system.
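
Reflecting on the inconsistency point above, one remedy would have been to codify the rule once and share it across teams. Here is a minimal sketch, assuming hypothetical sprint and item fields, encoding the stricter interpretation in which work for the programme already in the active sprint does not count as disruptive.

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    programme: str  # the client programme this sprint is allocated to

@dataclass
class WorkItem:
    programme: str                  # the programme the emergent work belongs to
    must_enter_active_sprint: bool  # can it not wait for a future sprint?

def is_disruptive(item: WorkItem, active_sprint: Sprint) -> bool:
    """Stricter interpretation: an item is disruptive only if it has to be
    pulled into the active sprint AND relates to a different programme."""
    return item.must_enter_active_sprint and item.programme != active_sprint.programme

# The looser interpretation some teams applied would drop the programme check:
#   return item.must_enter_active_sprint
```

Picking either interpretation and writing it down is what removes the argument; which one you pick matters less than everyone applying the same rule.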

In Review

Right now the future of disruptive items is uncertain, as one of the product owners in my team this week raised the subject of whether we wanted to consider re-introducing bugs. Although I introduced the concept I'm ambivalent on this front. Given the problems that we specifically faced at the time, tracking disruptive items was the right thing to do. Now that we have a larger product owner team and some more stability in the scheduling, disruptive work is not the debilitating problem that it was. At the same time I'm not convinced that moving back to 'bugs' is the right move. Instead my inclination is to once again go back to first principles and look at the problems that we are facing now, and track appropriately to address those, rather than defaulting to a categorisation that, for me, introduces as many problems as it solves.

Sunday, 28 January 2018


Aligning with the Business

I have over the last few months been concurrently involved in some of the most and least inspiring work of my career. Naturally having a software tester mindset I decided to write about the negative stuff as a priority. What can I say? My glass is usually half empty.

Engaging Through Alignment



I recently had the pleasure of inviting an inspiring lady named Susie Maguire to run a workshop at River. Susie has a wealth of experience in the field of engagement and motivation and was the perfect person to discuss, question and refine our own expertise in this area. In one discussion during the workshop Susie talked about the importance of aligning the goals of the individual with the goals of the team and the goals of the organisation in achieving true employee engagement.

As with many of the most powerful ideas in successful work, this is a blazingly simple concept yet surprisingly difficult to achieve and therefore depressingly rare. The divided and hierarchical nature of many organisational structures means that teams can aggressively optimise to their own established goals, which, over time, can deviate drastically away from those of the wider company. As we were talking I couldn't help thinking about a process that I was working through at the time which was an extreme case of when such a deviation of goals occurs.

A Painful Process

As I mentioned at the start, I've also recently been involved in some of the least inspiring work of my career, in relation to implementing a software programme into a large organisation. This is not in itself inherently painful, and the relationships with the immediate client at the start of the programme were healthy and long standing. As part of the implementation we were required to work with an internal team from the wider global organisation to 'certify' that one element of our software met their standards. We were happy to do this on the basis that we'd successfully delivered software to the client before and felt confident in meeting these standards based on our programmes with other global clients. Our confidence proved to be misplaced, however, when we discovered the details of the process.

  • It became apparent early on that the certification team were not going to engage with us in any kind of collaborative relationship at all, but would instead operate primarily through the documented artefacts of the process.
  • The requirements that we had to meet were captured in lengthy and convoluted documentation from which we had to extract the relevant information and interpret it for our situation. Much of the documentation was targeted at in-house development in technologies different to our stack.
  • Some parts of the process involved different people submitting the same lengthy and detailed information into separate documents or systems, which were then all required to align exactly across the submission.
  • Many of the requirements documented were either impractical or actually not possible in the native operating systems we were supporting.
  • The process involved no guidance or iteration towards a successful outcome; instead certification involved booking a scheduled 'slot' which had to be reserved weeks in advance based on the predicted delivery date.
  • Failures to meet the standards discovered during the certification slot were not fed back during the slot in time to be resolved towards a successful outcome, but were communicated via a lengthy PDF report once the process was complete.
  • Items as minor as a repeated entry in a copyright list or a slight difference in naming between a help page and guidance prompts were classified as major failures, resulting in failing the certification.
  • Approaches were presented in the specification as reference examples, yet any deviation from the behaviour of the 'example' was treated as a major failure, even if the logical behaviour was equivalent.
  • The inevitable failure in the certification slot required a second 'slot' to be booked for a retest.

The final straw came when, as part of the second booked review slot, new requirements were identified which we hadn't been told about in the initial certification, yet our failure to meet them still constituted a failure of the overall certification. Software components not raised in the first review were newly identified as 'unacceptable' in the second, and the missing behaviours stated in the first review were frustratingly in themselves insufficient to pass when it came to the second.

Misaligned

What was clear to me in going through this process was that this was a team whose goals had diverged significantly from the goals of the company.

The goals of the team appeared to be:

  • Ultimately protect the team budget by maintaining a healthy stream of failures and retests (the internal purchase only covered two test slots - a third retest resulted in an additional internal purchase and internal revenue for the team).
  • Tightly document the requirements of software solutions irrespective of value or practical applicability.
  • Maximise failure through maintaining a position of zero tolerance for ambiguity or for delivering value in different ways.
  • Maintain an internal view - limiting communication outside the team and interfacing primarily through artefacts, such as requirements and failure reports.

If this only affected us as a supplier then I would probably not be writing this. What was more frustrating was that I was working on behalf of a client company that was a component of the larger global organisation. The behaviours of this team were directly preventing the progression of an exciting and engaging programme. Instead of adding value to their programme they were using valuable budget on frustrating bureaucratic processes and inane adjustments that they saw very little value from and ultimately placed the programme at risk.

It could be argued that the team were protecting the company by enforcing standards. I'd argue that a process of collaborative guidance and ongoing review would have been easier, cheaper in terms of both the team's costs and ours, and far more likely to achieve a successful outcome. The process as designed was not aligned with the needs of the wider company, including my client.

The other side of the fence

I get very frustrated in situations like the one above as they affect me on two fundamental levels.

Firstly, we only have a limited amount of time on the earth. Seeing so many talented people wasting their valuable time on such pointless activities is very frustrating. For me work is about more than making money. If intelligent and capable people are spending their time on undertakings that add little value beyond meeting the specific idiosyncrasies of a self-propagating process then they will start to question themselves and their work deeply. The nature of the process caused tension across all the people involved and caused anxiety that simply wouldn't have arisen if the process had been structured differently. We wanted to deliver work that benefitted the programme and pleased the customer, yet we were unable to do so due to the effort required simply to adhere to the process imposed on us.

Secondly, the process that I described above was essentially a testing process. It's true that the process was so far removed from what I would describe as testing that it took me a while to recognise it, but testing it was. The process fitted exactly the pattern of:

  • a strict requirements document written before the software was developed
  • a predefined sequence of checks based on adherence to the documentation performed subsequent to and in isolation from the development process
  • the absence of feedback loops that would allow issues to be resolved in a timely fashion
  • communication via artefacts and failure reports rather than direct communication between the person performing the checks and the developing team

Which could describe testing processes in many organisations throughout the world of software development.

Not what I call Testing

Being on the other side of such a testing process was a new and enlightening experience. It gave me an insight into how frustrating it can be working with a testing unit that refuses to engage. I can understand some of the suspicion and even hostility that developers have historically felt towards isolated testing teams. When your best efforts to meet the documented expectations are fed back via reports covered in metaphorical red pen, it's hard to harbour positive feelings for the people involved.

It's heartening, then, that each year I read the results of the "State of Testing" survey and see a testing community that is in parts rejecting this kind of approach and embracing more communication and collaboration. In fact the testing skill rated as most important in both the 2016 and 2017 survey reports was communication. Whilst this is encouraging, the level of importance placed on communication for testers did drop from '16 to '17 - which is not a trend that I'd want to see continue.

If you are a tester reading this, I recommend that you take the time to take part in this year's "State of Testing" survey here: http://qablog.practitest.com/state-of-testing/

Our ability to communicate risks and to guide and inform decisions is paramount in delivering a testing activity that prioritises the needs of the business over the delivery of the testing process. Going back to Susie's lessons on employee engagement - if alignment with the goals of the wider business is the key to successful engagement, then testers whose approach is focussed on continuous communication and helping to guide towards business goals are ultimately the ones who will improve not only their own satisfaction at work, but also that of the others whose lives they impact in the process.
