Sunday, 20 November 2016

Rewrites and Underestimation

In my opinion, the word 'rewrite' is one that should not exist in software. It is most commonly used to describe recreating the functionality or behaviour of an existing piece of software by writing a new one. On the face of it this presents a straightforward task - surely the previous incarnation of the software provides us with the perfect specification for developing and testing the new capability? Unfortunately things are rarely that simple. The term 'rewrite' implies a level of simplicity that belies the difficulty of such an undertaking. In my experience rewrites are some of the most underestimated of all development projects.

Why rewrite?

So what would possess us to attempt to recreate a piece of software? Surely if we have something that does the job then we don't need to rebuild it? I've encountered two main reasons for performing rewrites - either to support the same use with improved technology/architecture, or to allow the same feature set to be delivered into a new context. In both of these cases there is an inherent assumption that recreating the same feature set will yield the value that you are looking for, which is by no means guaranteed.

Another false assumption in these situations is that development will inevitably be quicker because we've 'done it before'. The fact that we want to rewrite implies a certain level of success in the existing capability in terms of adoption and use. Yet despite the effort involved in the initial build, and in the ensuing months and years of maintenance and improvement that delivered that success, we seem to have an incredible knack of forgetting what it has taken to get to where we are, and consistently underestimate rewrite projects.

How do I underestimate thee? Let me count the ways

Let's look at some of the many ways that these underestimations come about:

  • Underestimating how smart you were before
  • The attitude of those approaching or proposing a rewrite is usually along the lines of insisting that we won't make the same mistakes that were made in the original development. This can be particularly amusing when it is the same individuals approaching the rewrite that created the original. How easy it is to forget that we were actually pretty smart when we created the original software, and that the decisions we made were made for genuine and valid reasons. I've lost count of the number of times I've looked back sceptically at work I created in the past and ended up quietly impressed with myself for its quality under scrutiny. We shouldn't assume that anything we create now will inevitably be better than what went before. If we don't give ourselves enough credit for the work done first time around then we risk simply repeating the same mistakes, only worse, because we have given ourselves even less time to develop this time around.

  • Underestimating the power of incremental improvements
  • It's amazing how prone we are to forgetting how much a piece of software has improved over time relative to the initial release. One of my earliest tasks at RainStor was to test a new version of our data import tool. We already had a data importer that was suitable for our range of existing supported types. The reasoning behind the rewrite was to make it easier to extend the supported range with new types. The work had been performed by the chief scientist in the company, a man without whom "there would be no company" according to my introduction on my first day. As a new recruit to the organisation, testing this work was understandably a daunting prospect. What I discovered repeatedly through testing was that the performance of the new importer was nowhere near that of the previous one. In attempting the rewrite the focus had been on improvements relating to architecture and maintenance, yet an important customer characteristic, performance, had been sacrificed. It had been assumed that a better architecture would be more performant; however, a huge number of iterative improvements had gone into the performance of the previous version over time, to the extent that the newly written version struggled to deliver equivalent performance.

  • Underestimating the growth in complexity
  • Just as quality and value grow incrementally in a product over time, so too does complexity. The simple logic structures that we started with (we were, after all, pretty smart) may have become significantly more complicated as new edge cases and scenarios arose that needed to be supported. On a recent rewrite I worked on, I thought I had a good understanding of the user/roles structure after a spike task to research that exact area of the old system had been completed. As the development progressed it became apparent that my understanding was limited to the simple original roles structure first created in the product. Subsequent to that, a number of much more nuanced rules had been added to the system which made the roles structure far richer and more complex than I understood it to be.

  • Underestimating how bespoke it is
  • It is tempting to look at an existing system and think that the logic can be 're-used' in a new application. This can be dangerous if the differences in application context aren't considered. Even very generically designed systems can grow and mould over time to the shape of the business process in which they are applied through maintenance, support and feature development requested by the users.

  • Underestimating how much people are working around existing behaviour
  • As I wrote in my piece "The workaround denier", the ways that people use a system are often very different from how we want or expect them to be. If we don't understand the workarounds that folks are using to achieve their ends with our products then we risk removing important capabilities. On one system I worked on, some users were working around restrictions on the data available under an individual account by having multiple accounts, and logging in and out of the system to view the different data linked to each one. When, as part of a rewrite, we provided them with a single sign-on capability, this actually hindered them, as they no longer had control over which account to log in with.

  • Underestimating external changes
  • Even if the users of a product haven't changed, the environment they are working in and the constraints of it will have. I was caught out not long ago on a project working with an external standards body. The previous software had achieved an accreditation with a standards organisation. As we'd essentially copied the logic of the original we mistakenly assumed that we would not encounter any issues in gaining the same accreditation. Unfortunately the body were somewhat stricter in their application of the accreditation rules and we were required to perform significant extra development in order to fulfil their new exacting criteria. Our old software may not have moved on, but the accreditation process had for new software.

  • Underestimating how many bugs you've fixed
  • When creating a new piece of software we accept that there will be issues arising from working on a new code base, and that we will improve these over time. It's strange how, when writing new code to replace existing software, we somehow expect those issues to be absent. It's an unwritten assumption that, if you've created a product and fixed the bugs in it, you'll be able to rewrite the same capability with new code without introducing any new bugs.

A fresh start

The chance to re-develop a product is a golden opportunity to take a fresh start, yet often this opportunity is missed because we've left ourselves far too little time. What can we do to avoid the pitfalls of dramatic underestimation in rewrites? It is tempting here to produce a long list of things that I think would help in these situations, but ultimately I think that it boils down to just one:-

Don't call it a rewrite

It's that simple. At no point should we be 'rewriting' software. By all means we should refresh, iterate, replace, or use any other term that implies we're creating a new solution, rather than simply trying to bypass the need to think about the new software we are creating. As soon as we use the term rewrite we imply the ability to switch off the need to understand and address the problems at hand, and just roll out the same features parrot fashion. We assume that because we've had some success in that domain we don't need to revisit the problems, only the solutions we've already come up with. This is dangerous thinking, and why I'd like to wipe the term 'rewrite' from the software development dictionary.

Yes, a pre-existing system may provide all of the following:-

  • A window to learning
  • A valuable testing oracle
  • A source of useful performance metrics
  • A communication tool to understand a user community
  • A basis for discussion

What it is not, is a specification. Once we accept that, then our projects to create new iterations of our software stand a much better chance of success.
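To sketch what the 'testing oracle' role above can look like in practice: if the old implementation is still runnable, its output can serve as the expected result when testing the new one. This is a minimal, hypothetical illustration - the function names (legacy_import, new_import) and the trivial parsing logic are stand-ins for whatever old and new capabilities are being compared, not from any real system.

```python
# Hypothetical stand-in for the existing, battle-tested implementation.
def legacy_import(record: str) -> dict:
    key, _, value = record.partition("=")
    return {key.strip(): value.strip()}

# Hypothetical stand-in for the rewritten implementation under test.
def new_import(record: str) -> dict:
    key, _, value = record.partition("=")
    return {key.strip(): value.strip()}

def check_against_oracle(records):
    """Compare the new behaviour against the legacy oracle, collecting mismatches."""
    mismatches = []
    for record in records:
        expected = legacy_import(record)  # the old system defines 'correct'
        actual = new_import(record)
        if actual != expected:
            mismatches.append((record, expected, actual))
    return mismatches

sample = ["name = Alice", "role=admin", " id = 42 "]
print(check_against_oracle(sample))  # an empty list means the behaviours agree
```

Note that this only tells us the two systems agree; as the list above warns, agreement with the old system is a useful learning tool, not proof that either behaviour is what users actually need.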

Image: Chris Blakeley https://www.flickr.com/photos/csb13/78342111
