Wednesday 15 March 2017

When is a Prototype not a Prototype

When is a door not a door ...

... when it's ajar

Throwing away things that we've put effort into creating is not something that comes easily to most of us. It is hard to look at our outputs without seeing the hours and days of effort that went into their creation. Yet throwing away previous efforts is exactly the right thing to do when it comes to prototyping, or the consequences can be dire.

When is a prototype not a prototype?...

I'm not going to discuss in this post the merits of prototyping, suffice to say that an effective prototype can provide a massive learning opportunity. Whether or not to prototype in any situation is not really pertinent to the focus of this post. What I do find very interesting is that, whilst I have seen lots of what I would consider to be prototype code, it's rare that I've encountered a capability that was openly described as a 'prototype'.

What I have seen are 'proofs of concept' to demonstrate and trial a capability, or 'pilot' software where a product is being trialled on a subset of users and doesn't have to support the full scale of production use. I've seen 'innovations' coming from 'innovation silos' (where an individual or small group in a company has cornered the market in product innovation), which serve to demonstrate a new development. But I've rarely seen an honest, openly labelled prototype.

...when it becomes a product

Is there any problem with calling our prototypes something else? Well yes, there is a gigantic problem with it. Using the appropriate shared terminology is vital to understanding the situation we are in. In my opinion the correct approach in all of the situations I described above, at the point of deciding to progress these capabilities into production, would be to... wait for it... write them again, using the existing code as a throwaway prototype. The team developing the production code would take valuable learning into that development from the existing capability and create a high quality piece of software from it. Unfortunately I've seen too many situations where this is not what happens.

Given that so many involved in developing and testing software are so aware of the perils of implementing poor code, it seems somewhat surprising that we'd ever allow the situation whereby we're trying to take prototype code into production, yet I've seen this happen many times, and often the development teams have little choice in doing so:-

  • In some cases where the code has come from an innovation silo, it has not been made clear that something is only a prototype. This is usually because doing so would rely on the innovator admitting to delivering low quality throwaway code, something that they're disinclined to do.
  • If a pilot capability hasn't been delivered to production quality, it can be a huge challenge to persuade the business of the need to rebuild when it is already being used, albeit in a limited capacity.
  • It is an unfortunate truth that, if you demonstrate a working piece of software to a C-level executive or customer account owner, it is then hard to persuade them that the next step is to throw it away and build it again, but this time 'properly', after all - you've just demonstrated it working.

Instead what I've seen happen on too many occasions is that the decision is made to take the existing code as a starting point and turn it into a production quality piece of software.

This rarely ends well.

Turning prototype code into production code is a time-consuming activity that yields little observable value outside the development team, as it generally involves such 'trivial' concerns as the following (the first two of which are sketched briefly after the list):-

  • Adding error handling and validation
  • Adding logging for monitoring and diagnosing faults
  • Putting in place proper decoupled architecture, isolating component responsibilities and defining interfaces
  • Adding transactionality around operations
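To make the first two of these concerns concrete, here is a minimal sketch (in TypeScript, with hypothetical Order, OrderStore and Logger interfaces invented purely for illustration, not code from any of the projects described) of the gap between a prototype operation and the same operation hardened for live use:

```typescript
// Hypothetical types purely for illustration.
interface Order { id: string; amount: number; }
interface OrderStore { save(order: Order): Promise<void>; }
interface Logger { info(msg: string): void; error(msg: string): void; }

// Prototype version: happy path only, fine for a demo.
async function saveOrderPrototype(store: OrderStore, order: Order): Promise<void> {
  await store.save(order);
}

// Production version: the 'trivial' concerns of validation, error handling
// and logging added - invisible in a demo, essential in live use.
async function saveOrderProduction(
  store: OrderStore,
  logger: Logger,
  order: Order
): Promise<void> {
  if (!order.id.trim() || order.amount <= 0) {
    throw new Error(`Invalid order: ${JSON.stringify(order)}`);
  }
  try {
    await store.save(order);
    logger.info(`Order ${order.id} saved`);
  } catch (err) {
    logger.error(`Failed to save order ${order.id}: ${String(err)}`);
    throw err; // surface the failure rather than swallowing it
  }
}
```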

All of this is critical to having a product that is genuinely ready for live use, yet delivers little in terms of new user facing capability that business owners can see as progress. Inevitably the development team come under pressure for taking so long to tidy up something that appeared to work with all of the features they needed. In one extreme example, I had the CEO say to me, of a particularly risky piece of research code written in a loosely typed scripting language -

'now we just need to test it, right?'.

Well, wrong actually: what we needed to do was rewrite the whole thing in a more appropriate language, one that supported strong typing and at least compile-time validation of parameters across function calls and interfaces, but that was not what he wanted to hear.
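To illustrate the point with a contrived example (not the actual code from that project): in a loosely typed script, a mismatched parameter crosses a function boundary silently and only misbehaves at runtime, whereas a statically typed language rejects the same call before it ever runs.

```typescript
// In loosely typed JavaScript the equivalent of this call runs happily and
// returns the string "19.992.5" through silent coercion:
//
//   function addShipping(price, shipping) { return price + shipping; }
//   addShipping("19.99", 2.5);
//
// With static types the same mismatch is caught at compile time instead:
function addShipping(price: number, shipping: number): number {
  return price + shipping;
}

// addShipping("19.99", 2.5); // compile error: argument of type 'string' is
//                            // not assignable to parameter of type 'number'
const total = addShipping(19.99, 2.5); // 22.49
```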

Shooting fish in a barrel

For testers this is a confusing and frustrating situation. Their initial explorations will typically expose a wealth of issues which should result in rethinking the entire approach but inevitably do not. Instead one of two things happens.

  • A massive onus is placed on testing to reduce the risk of release through bug finding.

    This is clearly an expectation which is not only impossible to meet but also demonstrates a lack of understanding of what testing is there to achieve (like giving someone a dilapidated old Datsun Cherry and expecting them to 'test' it into a Ferrari). The code has not been built incrementally to a high quality, so finding bugs in this situation is far too easy and, whilst potentially fun, is wasteful for the company and a poor use of the tester's skills. Error handling, validation and other basic requirements are eventually built in but, as these are done in response to the bugs found, this takes longer and yields less consistent results than if it had been done as a consolidated activity during the creation of the software.

  • The testers are warned by management not to test 'too thoroughly'.

    I'm not quite sure what they are expected to do in this situation. All I can assume is that, if the process demands that code be sprinkled with the 'magic fairy dust' of testing before release, then it would be a great help if this could be done without slowing things down by actually (gasp) looking for problems. On one occasion years ago I was actually told to test the 'easy bits' whilst avoiding any of the high risk areas I'd identified in my initial analysis of the system - sigh.

Other options

There are, of course, other options. Rather than attempting to test out an entire process with throwaway-quality code, another option is to create a 'tracer bullet'. A tracer bullet has a very different purpose to a prototype. Instead of a throwaway model designed to learn about the viability of a feature, a tracer bullet is a fully production quality but very narrow implementation of a small slice of the target feature set, which can be used as a starting point for evolutionary development. In the reference links there are some discussions around the purpose of tracer bullets. Here I'm referring to a thin slice of production code to help answer questions around architecture and interfaces, rather than the more incremental concept of developing slices and getting feedback, which some might consider 'evolutionary prototyping' but which I consider implicit in an agile approach.
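As a rough sketch of what I mean by a thin slice (all names here are hypothetical and invented for illustration), a tracer bullet might implement a single operation end to end, through the real interfaces and layers the final product will use, but built to production standard:

```typescript
// One narrow, production-quality slice: look up a customer through the real
// service and persistence interfaces that later features will also use.
interface Customer { id: string; name: string; }
interface CustomerStore {
  findById(id: string): Promise<Customer | null>;
}

class CustomerService {
  constructor(private readonly store: CustomerStore) {}

  async getCustomer(id: string): Promise<Customer> {
    if (!id.trim()) {
      throw new Error("Customer id must not be empty");
    }
    const customer = await this.store.findById(id);
    if (!customer) {
      throw new Error(`No customer found with id ${id}`);
    }
    return customer;
  }
}
// Later slices (search, update, billing, ...) extend this structure rather
// than replacing it - which is the point of the tracer bullet.
```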

I recently had the pleasure of being involved in a cross team activity, involving multinational teams, to demonstrate an integration capability. I mentioned to the developers my determination to avoid the expectation that we could simply implement prototype code. I was pleasantly surprised when they adopted a tracer bullet approach. In the session they developed an initial, very thin but working slice of the integration that we could then directly implement and expand upon to create a production capability. It would have been all too easy to deliver a low quality prototype here and generate false expectations in the group around a realistic pace of development and the level of progress. Instead, applying a higher standard to a narrower piece of functionality provided a great starting point for the ongoing development.

Kidding ourselves

We do have a habit in software of kidding ourselves. Like cigarette smokers, we go into a kind of state of denial that our bad habits will ever catch up with us, and tend to repeat them over and over. I remember when I was a smoker I was terrified of the word 'cancer'. I didn't want to associate the grievous potential outcomes of my habit with the habit itself. Openly giving things the right names helps us to be honest about our situation, and software development is no different. By hiding prototypes behind terms like 'proof of concept', 'pilot' or 'spike' we are creating a loophole in our quality standards. We're excusing an approach of coding without rigour and testing, which is fine for a prototyping or research activity, but not as the basis of a solid product.

My recommendation if you see this situation in the future is to openly and loudly use the word prototype at every opportunity.

  • I'm just working on some PROTOTYPE code
  • I'm not planning on testing the PROTOTYPE any further as I've found too many bugs, I assume I'll have more testing time when we develop the real one?
  • Is that the customer who is still using the PROTOTYPE version?

OK, maybe you need to be careful with this; however, promoting honest conversation around the state of what we're working with is the first step to avoiding some painful soul searching later.

References

Image: Old Door by Oumaima Ben Chebtit https://unsplash.com/search/door?photo=pJYYqA_4Kic

Anonymous said...

Hi Adam, very good post, it talks about a real problem that has been misunderstood and mishandled by technology departments for decades.

Let me tell you how I avoid the problem. Instead of talking about the prototype, I talk about the why of the prototype.

A prototype is an experiment that is built to validate a hypothesis, otherwise it should not be built in the first place.

Two typical examples are:
1. Hypothesis: I believe I can provide some type of value using technology X. I haven't used technology X before, so I will validate my hypothesis with prototype A.
Test: I try to build value with technology X.
Results: If I manage to build prototype A to deliver value, I have proven I can use technology X in the future to build value for my customers. If I can't, I will look at a different technology.

2. Hypothesis: I believe we might have unlocked some new revenue stream with idea Y.
Test: I will build prototype B to quickly validate whether my idea has potential customers or not.
Results: If I see that customers want prototype B, I know that in the future I can build a product based on the idea in prototype B that will unlock new revenue. If not, I will not build it.

I speak in terms of experiments to my customers, so the focus shifts from the prototype to the validation of the hypothesis.

Like you, I have seen this process handled poorly, producing sellotaped-together solutions for final customers that caused more problems than they resolved for the customer and the company.
