Tuesday, 23 May 2017

The Most Effective Form of Communication

Have you ever had trouble explaining what your job is to someone? Whilst struggling to explain your role outside work may cause some social awkwardness, when the same situation arises with work colleagues it can be more of a problem. If those colleagues interact with you directly and have a very different expectation of what you should do than you do, it becomes a genuine concern.

One thing that has characterised the roles that I've held since I first became involved in setting up and leading teams is the need to establish an understanding amongst others of what I and my teams do. When I was focused solely on testing this was typically due to the need to correct a restricted and out-of-date view of what testing involved. When running technical support it was more around establishing an understanding of what support could and should be doing for others and the appropriate ways to interact with the support team. More recently, as I've overseen the introduction of Product Owners into River, I've seen it in relation to understanding what a Product Owner does and how they work.

I've tried various ways to communicate what's involved in different roles.

  • Presentations to talk people through the processes and activities undertaken by the team
  • Group sessions on how to work together
  • Taking each new starter through individually to discuss what we do
  • I've even created graphical user journeys in Prezi showing how people might interact with the team

All of these have worked well to some extent. There is, however, one approach that I've found consistently communicates an understanding of what a role entails better than any other. That is by focusing on doing a great job.

Not as easy as it sounds

It sounds simple, but it isn't always easy. If within your company there are those who misunderstand your job, responsibilities or approach, then it is likely that they will make demands of you that are inconsistent with what you know will deliver value from your role.

I've had many situations in my work in software testing where the expectations of others differed greatly from my own opinion of good work:

  • Being asked to test a piece of software where the only reference point for target behaviour is the software itself ('can you just find the bugs?')
  • The perception of testing as a process of creating test scripts and running them
  • Testers being expected to ignore the risky architectural concerns in a piece of software and focus on trivial bug finding in the user interface
  • The perception of a lack of need for testing other than creating automated unit tests and running them

...and the same is true of other areas that I've worked in:

  • Product owners being expected to deliver an already defined list of features simply by 'turning them into user stories' and assigning them to sprints
  • Product owners expected to predict which features will be delivered in which exact sprints to strict timescales throughout a lengthy development
  • Support staff being expected to repeatedly deal with the same issues in flawed software without raising their concerns and recommendations for improvement with the product team

In all of these cases, the situation that the individuals or teams can find themselves in is a frustrating one. There is an expectation, often accompanied by a certain level of pressure, to perform a role that is fundamentally different to the one that you should be, or want to be, doing.

Turning it around

As I said in my post 'Knuckling down', I believe in putting in your best effort to resolve problem situations rather than being too quick to walk away because a role doesn't meet your expectations. Clearly, if your organisation shows no sign of changing despite all efforts to improve it, then the door is an option, but I'd always strive to turn things around first. But how to do this?

To start with, I suggest asking yourself why the misperception exists. Do you believe that what you see as the role will genuinely deliver more value than simply delivering the work in the way that is anticipated?

Presumably the answer is yes. Therefore, by changing your behaviour to deliver in the way that you envisage, you should deliver more value to the stakeholders in the process than they were hoping for. The problem here can be that making major changes can impact an existing flow of work. The last thing you want is for your 'improvements' to be associated with a big disruption. Instead I've found that a more incremental approach, focusing on introducing small changes and steering the pipeline of future work rather than what is currently in progress, is much more easily digested.

  • Do some small elements of the work your way, 'guerrilla' fashion, and then demonstrate the value from those small pieces - nothing demonstrates the value of exploratory testing more than a shed load of risks and problems exposed that wouldn't have been discovered by your test scripts.
  • Know when to bend and when to push back - if people ask you to deliver tasks that aren't appropriate, potentially agree to it this time to avoid disruption but state clearly that on the next occasion you will be tackling it in a different way
  • As you deliver, go 'above and beyond' on the work, but make sure any extra effort clearly demonstrates the value of your preferred approach
  • Use what you have done to provide information on status or risk that would not have been available previously
  • Avoid a backlog of inappropriate work building up - take the opportunity when discussing new work to introduce new ideas at that point and establish a change in expectation for future programmes

The what, not the how

You inevitably need a level of stubbornness here. No matter how much you believe in what you should be doing, if you consistently acquiesce to others' demands then attempting to steer a role away from their misguided expectations of it is going to be a challenge. One of my greatest failings in the past has been too easily assuming that the approaches taken by others are appropriate without questioning and asserting my own ideas on a process. Over time I've found that what really helps to reinforce stubbornness is passion. The more passionate I am about something the more likely I am to research it, discuss it, reinforce my beliefs around it and belligerently strive to deliver value by doing it, even in the face of conflicting expectations.

If you find yourself in such a situation, the most important thing to remember is that other people ultimately aren't really interested in how you do your job, so if you persist you are more likely to win through. Others are probably not that excited by discussions around why your approach is better, so trying to present people with explanations of theory is going to have limited success. What people do care about is achieving success in their own roles, so focus on how you can help them with that. The most effective way of convincing anyone of the value of a different approach is by showing what it can do for them, and the best way of achieving this is by doing it. Really damn well.

Image: https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Bristol_MMB_43_SS_Great_Britain.jpg/1200px-Bristol_MMB_43_SS_Great_Britain.jpg - there are many who believed that a metal ship would be too heavy to float, or that a screw propeller would not work as well as a paddle. Brunel proved them all wrong when he built the SS Great Britain.

Wednesday, 15 March 2017

When is a Prototype not a Prototype

When is a door not a door ...

... when it's ajar

Throwing away things that we've put effort into creating is not something that comes easily to most of us. It is hard to look at our outputs without seeing the hours and days of effort that went into their creation. Yet throwing away previous efforts is exactly the right thing to do when it comes to prototyping, or the consequences can be dire.

When is a prototype not a prototype?...

I'm not going to discuss in this post the merits of prototyping, suffice to say that an effective prototype can provide a massive learning opportunity. Whether or not to prototype in any situation is not really pertinent to the focus of this post. What I do find very interesting is that, whilst I have seen lots of what I would consider to be prototype code, it's rare that I've encountered a capability that was openly described as a 'prototype'.

What I have seen are 'proofs of concept' to demonstrate and trial a capability, or 'pilot' software where a product is being trialled on a subset of users and doesn't have to support the full scale of production use. I've seen 'innovations' coming from 'innovation silos' (where an individual or small group in a company has cornered the market in product innovation) which serve to demonstrate a new development. But I've rarely seen an honest, openly labelled prototype.

...when it becomes a product

Is there any problem with calling our prototypes something else? Well yes, there is a gigantic problem with it. Using the appropriate shared terminology is vital to understanding the situation we are in. In my opinion the correct approach in all of the situations I described above, at the point of deciding to progress these capabilities into production, would be to... wait for it... write it again, using the existing code as a throwaway prototype. The team developing the production code would take valuable learning into that development from the existing capability and create a high quality piece of software from it. Unfortunately I've seen too many situations where this is not what happens.

Given that so many involved in developing and testing software are so aware of the perils of implementing poor code, it seems somewhat surprising that we'd ever allow the situation whereby we're trying to take prototype code into production, yet I've seen this happen many times, and often the development teams have little choice in doing so:-

  • In some cases where the code has come from an innovation silo, it has not been made clear that something is only a prototype. This is usually because doing so would rely on the innovator admitting to delivering low quality throwaway code, something that they're disinclined to do.
  • If a pilot capability hasn't been delivered to production quality, it can be a huge challenge to persuade the business of the need to rebuild when it is already being used, albeit in a limited capacity.
  • It is an unfortunate truth that, if you demonstrate a working piece of software to a C-level executive or customer account owner, it is then hard to persuade them that the next step is to throw it away and build it again, but this time 'properly', after all - you've just demonstrated it working.

Instead what I've seen happen on too many occasions is that the decision is made to take the existing code as a starting point and turn it into a production quality piece of software.

This rarely ends well.

Turning prototype code to production code is a time consuming activity that yields little observable value outside the development team, as it generally involves such 'trivial' concerns as:-

  • Adding error handling and validation
  • Adding logging for monitoring and diagnosing faults
  • Putting in place proper decoupled architecture, isolating component responsibilities and defining interfaces
  • Adding transactionality around operations
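To make the gap concrete, here is a minimal, hypothetical Python sketch (the function and names are invented for illustration, not taken from any real product) contrasting a happy-path prototype routine with the same operation carrying the kind of validation, error handling and logging listed above:

```python
import logging

logger = logging.getLogger("orders")

# Prototype version: happy path only - no validation, no logging,
# no thought given to bad inputs.
def apply_discount_prototype(price, percent):
    return price - price * percent / 100

# Production-leaning version of the same operation: input validation,
# explicit errors for bad data, and logging to help diagnose faults.
def apply_discount(price: float, percent: float) -> float:
    if price < 0:
        raise ValueError(f"price must be non-negative, got {price}")
    if not 0 <= percent <= 100:
        raise ValueError(f"percent must be between 0 and 100, got {percent}")
    discounted = price - price * percent / 100
    logger.debug("applied %s%% discount: %s -> %s", percent, price, discounted)
    return discounted
```

Both return the same answer for good inputs; the difference only shows up when something goes wrong, which is exactly why this work is invisible to business owners until it is missing.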

All of this is critical to having a product that is genuinely ready for live use, yet delivers little in terms of new user-facing capability that business owners can see as progress. Inevitably the development team come under pressure for taking so long to tidy up something that appeared to work with all of the features they needed. In one extreme example I had the CEO say to me, of a particularly risky piece of research code written in a loosely typed scripting language -

'now we just need to test it, right?'.

Well, wrong actually: what we needed to do was rewrite the whole thing in a more appropriate language that supported strong typing and at least compile-time validation of parameters across function calls and interfaces, but that was not what he wanted to hear.
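The risk being waved away here is easy to show in miniature. A hypothetical Python sketch (standing in for the scripting language in question; the function is invented for illustration): with no compile-time checking, a mismatched parameter type isn't an error at all, it just silently produces nonsense at runtime, whereas a strongly typed compiled language would reject the call before the software ever ran.

```python
def total_cost(unit_price, quantity):
    # No compile-time validation: nothing stops a caller passing a string.
    return unit_price * quantity

# Correct call:
print(total_cost(10.0, 3))    # 30.0

# Mismatched call: not a type error, just silent nonsense, because
# multiplying a string by an int in Python means repetition.
print(total_cost("9.99", 3))  # 9.999.999.99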

Shooting fish in a barrel

For testers this is a confusing and frustrating situation. Their initial explorations will typically expose a wealth of issues which should result in rethinking the entire approach but inevitably do not. Instead one of two things happens.

  • A massive onus is placed on testing to reduce the risk of release through bug finding.

    This is clearly an expectation which is not only impossible to meet but demonstrates a lack of understanding of what testing is there to achieve (like giving someone a dilapidated old Datsun Cherry and expecting them to 'test' it into a Ferrari). The code has not been built incrementally to a high quality and so finding bugs in this situation is far too easy and, whilst potentially fun, is wasteful of the company's money and the skills of the tester. Error handling, validation and other basic requirements are eventually built in but, as these are done in response to the bugs found, this takes longer and yields less consistent results than if it had been done as a consolidated activity during the creation of the software.

  • The testers are warned by management not to test 'too thoroughly'.

    I'm not quite sure what they are expected to do in this situation. All I can assume is that, if the release process requires code to have been sprinkled with the 'magic fairy dust' of testing, then doing this without slowing things down by actually (gasp) looking for problems would be a great help. On one occasion years ago I was actually told to test the 'easy bits' whilst avoiding any of the high risk areas I'd identified in my initial analysis of the system - sigh.

Other options

There are, of course, other options. Rather than attempting to test out an entire process with throwaway-quality code, another option is to create a 'tracer bullet'. A tracer bullet has a very different purpose to a prototype. Instead of a throwaway model designed to learn about the viability of a feature, a tracer bullet is a fully production quality but very narrow implementation of a small slice of the target feature set, which can be used as a starting point for evolutionary development. There are some discussions around the purpose of tracer bullets in the reference links. Here I'm referring to a thin slice of production code to help answer questions around architecture and interfaces, rather than the more incremental concept of developing slices and getting feedback, which some might consider 'evolutionary prototyping' but which I consider implicit in an agile approach.

I recently had the pleasure of being involved in a cross-team activity with multinational teams to demonstrate an integration capability. I mentioned to the developers my determination to avoid the expectation of being able to implement prototype code. I was pleasantly surprised when they adopted a tracer bullet approach: in the session they developed an initial, very thin but working slice of the integration that we could then directly implement and expand upon to create a production capability. It would have been all too easy to deliver a low quality prototype here and generate false expectations in the group around a realistic pace of development and the level of progress. Instead, applying a higher standard to a narrower piece of functionality provided a great starting point for the ongoing development.

Kidding ourselves

We do have a habit in software of kidding ourselves. Like cigarette smokers, we go into a kind of state of denial that our bad habits will ever catch up with us, and tend to repeat them over and over. I remember when I was a smoker I was terrified of the word 'cancer'. I didn't want to associate the grievous potential outcomes of my habit with the habit itself. Openly giving things the right names helps us to be honest about our situation, and software development is no different. By hiding prototypes behind terms like 'proof of concept', 'pilot' or 'spike' we are creating a loophole in our quality standards. We're excusing an approach of coding without rigour or testing, which is fine for a prototyping or research activity, but not the basis of a solid product.

My recommendation if you see this situation in the future is to openly and loudly use the word prototype at every opportunity.

  • I'm just working on some PROTOTYPE code
  • I'm not planning on testing the PROTOTYPE any further as I've found too many bugs, I assume I'll have more testing time when we develop the real one?
  • Is that the customer who is still using the PROTOTYPE version?

OK maybe you need to be careful with this, however promoting honest conversation around the state of what we're working with is the first step to avoiding some painful soul searching later.


Image: Old Door by Oumaima Ben Chebtit https://unsplash.com/search/door?photo=pJYYqA_4Kic

Sunday, 19 February 2017

The Innovation Silo

Who is responsible for product innovation? Is it an individual, a specific role, a devoted team, or is it everyone? I can tell you now that unless the answer is the last of these options then you could well have a situation that undermines the motivation and performance of your development group.

The Innovation Silo

A couple of weeks ago I was discussing with a colleague the challenges of innovation. We'd both encountered in previous roles the situation where one individual or group was given free rein on innovation. These people went by different titles, either as individuals or teams: Chief Scientist, Architect, Research Team. The result was the same: those given the freedom to research and innovate would perform those activities in isolation from those working on the main product streams, operating in what was effectively an 'innovation silo'. The focus of the innovator's activities would be to come up with advances in technology or capability. Their output would be aimed at senior management, with the goal of demonstrating to them a 'working' capability of an idea that they had developed and obtaining approval to push it through to production.

The Motivation Sink

Arrangements like this were great ... if you happened to be the ones in the innovator roles. These people enjoyed a high level of job satisfaction and operated with a low fear of failure, as their work was rarely exposed directly to live use; instead it fell to the regular development team to take these initiatives and carry them through into production.

In addition to the inevitable practical issues of handing over prototype code (which I've decided is a big enough topic to merit a post of its own shortly), the problem was that this arrangement caused a huge amount of frustration for the individuals who weren't part of the innovation activity. The freedom that one individual or team enjoyed came at the expense of many other developers and testers working hard to deliver the commercial obligations of the company. The people who were responsible for the day to day delivery of release schedules and customer deadlines would rarely have the time or the freedom to try out new ideas, unless done in their personal time. The closest many would come to innovation would be in struggling to understand, implement and test designs (if they were lucky) or code (if they weren't) of new products or features which had been created by someone else.

The inevitable outcome of this type of activity on the development and testing teams is demotivation. Fundamentally what we are saying to these people is - "we don't trust you to come up with good ideas". Not only is the exciting work of innovation and research out of reach to the rest of the development group, but to add insult to injury they are then asked to take the output of that innovation and perform the necessary yet uninspiring activities that are so crucial in software, and deliver these in an atmosphere of unrealistic expectation and high fear of failure. I've never encountered the situation where a developer was ecstatic to be working on someone else's code, however at least when that code has been working as part of a live system there's an element of respect in it. When the author of the code has never had to expose it to formal testing or the rigours of production use then that respect is absent and resentment can result.

The arrangement places testers in a frustrating situation as well. Being isolated from the process of innovation results in a separation from the goal of the development. In order to perform effective testing a tester needs to be on board not only with the behaviour of a solution but also to have a keen grasp of the problem that is being targeted. When product research is performed in isolation from the testing group, testers are forced to work within the solution domain. Any benefits of having the critical thinking of a tester integrated into the design process, to question decisions and refine early designs, are lost. Even if the tester does have valid concerns it can be politically difficult to raise these in the face of support from influential technical staff backed by senior management. Instead the tester is politically and practically compelled to limit their frame of reference to what functional understanding they can gain from the behaviour of the developed solution. Their role is diminished to a confirmatory one of verifying existing behaviour rather than a wider scope of questioning value.

An open innovation

I'm hoping that I've just been rather unfortunate to see this type of relationship in more than one of my past jobs - I'd be interested to know in the comments here whether anyone reading this has encountered a similar situation. Is it a common scenario for opportunities for innovation to be restricted to a chosen few?

I'm sure that many companies out there have very open and inclusive approaches to innovation, and I'm keen to help make sure that my current company is one of those - so what are the alternatives to innovation silos? In a recent product workshop I ran, some ideas that we identified to avoid this kind of situation included:-

  • Getting everyone in the company on a user feedback forum to share their innovation ideas and vote internally for their favourites
  • Allowing people time to pitch their ideas and provide the support for small cross-functional teams to be created to develop them together.
  • In a similar vein, one of the developers at River recently pitched an interesting idea of having 'ideation sessions', where at regular intervals different cross-functional groups are given the chance to try out an idea and develop it together.
  • Having innovation sessions at whole company days to get everyone involved at the same time to generate ideas.
  • Developing and trialling new capabilities into our own internal version of our engagement software

Whatever form it takes, I think the important thing here is that everyone should be able to contribute and feel part of innovation activities. At the very least everyone should feel included in the process and able to actually raise their concerns over any new approaches being adopted.

As I stressed in my post on 'sharing the vision', there's a huge amount of value in simply having all of your team members engaged with a vision and able to identify connections and opportunities that help to move towards it. You don't have to be the smartest person in the company, with a job title of 'Scientist' or 'Architect', to come up with an idea that advances your capabilities. Naturally some individuals are more inclined to innovation and will generate more ideas than others, however at the same time many great advances come from simply making connections, such as applying an existing approach to a different problem, which anyone can do. The less we think of innovation as part of a role, and the more as part of an organisational culture, the more likely we are to get everyone driving and supporting our innovations rather than resenting them.

Image: https://www.flickr.com/photos/vitahall/9337199735