Tuesday, 20 January 2015

Variations On A Curve

On joining a new company (see my previous post) one of the most interesting activities for me is learning about the differences between their approaches and your own. Working in an organisation, particularly for a long period, leaves you vulnerable to the institutionalisation of your existing testing practices and ways of thinking. No matter how much we interact in a wider community, our learning will inevitably be interpreted relative to our own thinking, and the presence of the Facebook Effect can inhibit our openness to learning in public forums. Merging with another company within the walls of your own office provides a unique opportunity to investigate the differences between how you and another organisation approach testing. As part of the acquired company I'll be honest and say there is an inclination towards guardedness about our own processes and how these are perceived by the new, larger company. The reverse does not apply, however: as part of an entirely new organisation I'm free, if not obliged, to learn as much as I can about its culture and processes.

In the early stages I've not yet had the opportunity to speak to many of the testers, though I have some exciting conversations pending on the tools and resources that we can access. One of the things I have done is browse through a lot of the testing material that is available in online training and documentation. Whilst reading through that large body of material I found this old gem, or something looking very much like it:

Now, I'm under no illusion that those responsible for testing place any credence in this at all; the diagram in question appeared in some externally sourced material and was not prepared in-house. It is, however, interesting how these things can propagate and persist away from public scrutiny.

I remember seeing the curve many years ago in some promotional material. Working in a staged waterfall process at the time, the image appealed to me. My biggest bugbears at the time were that testers rarely received complete and unchanging requirements, and that we weren't involved early enough in the development process. The curve therefore fed my confirmation bias: it seemed convincing because I wanted to believe it. It is hardly surprising, therefore, that the curve enjoyed such popularity among software testers and is still in circulation today.

Some hidden value?

Since I first encountered it the curve has been widely criticised as having limited and very specific applicability. Given that it originated in the 1970s this is hardly surprising. It has been validly argued that XP and Agile practices have changed the relationships between Specification, Coding and Testing so as to significantly flatten the curve; Scott Ambler gives good coverage of this in this essay. In fact, the model is now sufficiently redundant that the global head of testing at a major investment bank received some criticism for using the curve as reference material in a talk at a testing conference.

I'm not going to dwell on the limitations of the curve here; that ground has been well covered. Suffice to say that there are many scenarios in which a defect originating in the design may be resolved quickly and cheaply, both through development and testing activities and in a production system. The increasing success of 'testing in live' approaches in SaaS implementations is testament to this. The closer, more concurrent working relationship between coding and testing also reduces the likelihood and impact of exponential cost increases between these two activities.

Whilst the curve is seriously flawed, there is an important message implicit in it which I think actually suffers from being undermined by the problems with the original model. The greatest flaw for me is that it targets defects. I believe that defects are a poor target for a model designed to highlight the increasing cost of change as software matures. Defects are wide-ranging in scope and not necessarily coupled so tightly to the design that they can't be easily resolved later; Michael Bolton does an excellent job of providing counter-examples to the curve here. There are, however, other characteristics of software which are tied more tightly to the intrinsic architecture, such that changing them will become more costly with increasing commitment to a specific software design.

If we don't consider defects per se, but rather any property the changing of which necessitates a change to the core application design, then we would expect an increasing level of cost to be associated with changing that design as we progress through development and release activities. Commitments are made to the existing design - code, documentation, test harnesses, customer workflows - all of which carry a cost if the design later has to change. In some cases I've experienced, it has been necessary to significantly rework a flawed design whilst maintaining support for the old design, and additionally to create an upgrade path from one to the other. Agile environments, whilst less exposed, are not immune to this kind of problem: any development process can suffer from missed requirements which render an existing design redundant. In this older post I referenced a couple of examples where a 'breaking the model' approach during elaboration avoided expensive rework later; however, this is not infallible, and as your customer base increases, so does the risk of missing an important use case.

Beyond raw design flaws, I have found myself thinking of some amusing alternatives that resonate more closely with me. Three scenarios in particular spring to mind when I recount personal experiences of issues where design changes were required, or at least considered, that would have been significantly more expensive later than if they'd been thought of up front.

The cost of intrinsic testability curve

Testability is one. As I wrote about in this post, forgetting to build intrinsic testability characteristics into a software design can be costly. In my experience, if testability is not built into the original software design then it is extremely difficult to prioritise a redesign purely on the basis of adding those characteristics retrospectively, and justifying such a redesign only becomes harder as development progresses. I describe in this post the challenge that I faced in trying to justify getting testability features added to a product retrospectively. So I'd suggest the cost of adding intrinsic testability curve probably looks something like this:-
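
To make 'intrinsic testability' a little more concrete, here is a small and purely illustrative sketch in Python. The scenario, function and names are hypothetical, not taken from the post above or from any real product; it simply shows one such characteristic, controllability, in the form of a data retention purge whose notion of 'now' is injected so that a test can pin the clock rather than wait for real time to pass. Designing this in up front is cheap; retrofitting it once callers depend on the implicit system clock is exactly the sort of change that becomes hard to justify later.

    from datetime import datetime, timedelta
    from typing import Callable, Dict, List

    # Hypothetical sketch: a retention purge with an injectable clock.
    # Controllability (the caller supplies 'now') and observability (the
    # function reports what it purged) are designed in from the start.
    def purge_expired(records: Dict[str, datetime],
                      retention: timedelta,
                      now: Callable[[], datetime] = datetime.utcnow) -> List[str]:
        cutoff = now() - retention
        expired = [key for key, created in records.items() if created < cutoff]
        for key in expired:
            del records[key]
        return expired  # observable result that a test can check directly

    # A test can pin the clock to a known instant instead of waiting for
    # real time to pass or fiddling with the system clock.
    fixed_now = lambda: datetime(2015, 1, 20)
    records = {"old": datetime(2014, 1, 1), "new": datetime(2015, 1, 19)}
    assert purge_expired(records, timedelta(days=30), now=fixed_now) == ["old"]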

The cost of changing something in your public API curve

API design is another classic. While fixing bugs that might affect user interfaces can be low impact, changing the design of APIs which someone has coded against can cause a lot of frustration. As a customer of other vendors' APIs we have suffered recently, having to repeatedly rework the code of our solutions due to breaking changes in minor versions of those APIs. Speaking from bitter experience, if you have made breaking changes to your API then you're probably not your customers' favourite vendor right now.
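
To illustrate the kind of rework that lands on the consumer, here is a small hypothetical sketch in Python. The API, version numbers and field names are invented for the example rather than describing any particular vendor, but the shape of the problem is the familiar one: a 'minor' release renames something that clients have coded against, and every consumer ends up writing defensive adapter code just to keep an existing integration working.

    # Hypothetical: our client code was written against version 1.3 of a
    # vendor API, which returned a response like {"sizeBytes": 1024}.
    def get_archive_size_v1_3(response: dict) -> int:
        return response["sizeBytes"]

    # A 1.4 'minor' release renamed the field to "size_bytes", so the
    # consumer is forced into an adapter that copes with both shapes.
    def get_archive_size(response: dict) -> int:
        if "size_bytes" in response:
            return response["size_bytes"]
        return response["sizeBytes"]  # fall back for servers still on 1.3

    print(get_archive_size({"sizeBytes": 1024}))   # 1024 (old server)
    print(get_archive_size({"size_bytes": 2048}))  # 2048 (new server)

Multiply that adapter by every integration point that changed, and by every minor release, and the cost curve above starts to look depressingly steep.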

The cost of fixing bugs the customer doesn't care about curve

Of course, lest we forget, there is always the chance that the stuff you thought was important is less so to the customer. The cost curve for fixing bugs in a feature drops significantly if you realise that no-one is interested in the fix. Some features seem to be particular favourites with product owners or marketing teams but receive little production use. Testers are obliged to raise issues against them; in practice, however, once the software is released these issues are unlikely to be prioritised by the customer.

Please note that these curves are entirely frivolous and have no basis in empirical evidence. That said, I'd put money on them being more representative than the original in the organisations where I've worked. And given that they'll appeal to many testers' own opinions, perhaps through the power of confirmation bias they may just be showing up in a keynote near you a few years from now.

Friday, 19 December 2014

New Beginnings, Old Desk

This week it was announced publicly that RainStor has been acquired by Teradata. For anyone not familiar with the name, Teradata is a US-based company and one of the largest data warehouse companies in the world, with over 10,000 employees - a marked difference from the roughly 50 employees of RainStor.

Having worked for RainStor for 8 years (I celebrated my eighth anniversary at the start of December) this is a change that will have a huge impact on me and my colleagues. At the same time I am confident that, for the most part, the changes will be positive.

8 Years a startup

RainStor has been a fantastic small company to be a part of. Through the whole time that I've been lucky enough to be on board we've managed to maintain the vibrant intensity of a startup company. During this sustained period of intense activity, the team ethic has been amazing. It almost became predictable that the 'teamwork' subject in retrospective meetings would be full of happy yellow post-it notes praising the great attitude shared amongst the team.

Being a perennial startup has its disadvantages though. A small company inevitably has to 'chase the ball' in terms of the market, with the result that the focus of testing can be spread too thinly across the capabilities and topologies supported to meet each customer's needs. I'm optimistic that a company with a wider portfolio of products will allow us to focus on fewer topologies and really target our core strengths.

There have been ups and downs in fortune over the years. The credit crunch was a troubled time for organisations relying on funding, and the fact that we got through it was testament to the strong belief in the product and the team. It seemed scant consolation though for the colleagues who lost their jobs. Counter that with the jubilant times when we won our first deals with household name companies the likes of which we'd previously only dreamed of, and you are reminded that those working in a small software company can typically expect to face a turbulent spectrum of emotions.

So what now?

I've every reason to be positive about the future of RainStor. From my conversations with the new owners they seem to have an excellent approach to the needs of research and development teams. The efforts that have been undertaken to provide an open and flexible development environment give a lot of confidence. Combine this with an ethic of trying to minimise the impact that the acquisition has on our development processes and I'm optimistic that the approaches that we've evolved over time to manage testing of our big data product in an agile way will be protected and encouraged.

The resources available to me in running a testing operation are way beyond what we could previously access. As well as access to a greater scale of testing hardware, I'm also going to relish the idea of interacting with others who have a wealth of experience in testing and supporting big data products. I know that my new company have been in the business for a long time and will have a well-developed set of tools and techniques for testing data products that I'm looking forward to pinching, sorry, learning from.

What about me?

Whilst I've known about the change for some time, it is only since this week's public announcement that I've realised how long it has been since I worked for a large company. I've spent well over 14 years in small companies, or working as a contractor, so the change is going to take some getting used to. I love the uncertainty and autonomy of working for small companies, where every problem is shared and you never know what you'll have to take on next. As a smaller part of a large product portfolio I imagine we'll sacrifice some of that 'edge of the seat' excitement for more stability and cleaner boundaries.

A change such as this, particularly at this time of year, inclines one towards retrospection over the events leading up to it, and the possibilities for the future. I'm both proud of what I've achieved at RainStor the small company, and regretful of the areas where I know I could have achieved more. Nothing will bring back those missed opportunities; however, the fact that, with just a handful of individuals, we've written, tested, documented and supported a big data database to the point of it being used in many of the largest telecommunications and banking companies in the world leaves me with very few regrets. I hope that as we grow into our new home I can take the chance to work with my colleagues in delivering more of the amazing work that they are capable of.

So that's it: my days of testing in a startup are well and truly over for the foreseeable future. I obviously don't know for certain what's to come. One thing I am confident in is that we couldn't have found a better home. As well as having a great attitude to development, my new company also have an excellent ethical reputation, which at least mitigates some of the uncertainty of not personally knowing who is running the company. I am already enjoying building relationships with some excellent people from our new owners.

I imagine that few folks reading this will be ending the year with quite as much upheaval, but I'm sure some are looking forward to new jobs or new opportunities, or experiencing sadness at leaving cherished teams. Whatever your position, thanks for reading this post, and thanks to those who've followed this blog through the last few years; I'm sure I'll have some interesting new experiences to write about over the coming months. I hope that, if you take some moments in the coming days to look at the past years and what is to come, you have as many reasons to be proud and excited as I do.


Wednesday, 10 December 2014

Not Talking About Testing, and Talking About Not Testing

I recently spent 3 very pleasurable days at the EuroSTAR Conference in Dublin. I was genuinely impressed with the range and quality of talks and activities on offer. For me the conference had taken great strides forward since I last attended in 2011, and congratulations have to go to Paul Gerrard and the track committee for putting together an excellent conference.

As with all conferences, while the tracks were really interesting, I found that I personally took away more from the social and informal interactions that took place around the event. One of the subjects that came up for discussion more than once was the fact that, for many of the permanently employed folks attending, our roles had changed significantly in recent years. For some of us our remits had extended beyond testing to include other activities such as technical support, development or documentation. For others the level of responsibility had shifted more towards management and decision making, and away from testing activities that involve interfacing directly with our products.

The new challenges presented by these new aspects of our roles dominated much of the conversation over the pub and dinner table; for example, Alex Schladebeck in this post mentions exactly such a conversation between Alex, Rob Lambert, Kristoffer Nordström and me. Subjects discussed during these interactions included:

  • How to motivate testers in flat organisational hierarchies.
  • How to advance the testing effort and introduce new technologies and techniques, sometimes in the face of demand to deliver on new features.
  • Making difficult decisions when there are no clear right answers.
  • Balancing the demands of testing when you have other responsibilities.
  • How to ensure that areas of responsibility are tackled and customers' needs met without putting too much pressure on individuals or teams. In my case this can include how to fit maintenance releases for existing products into the agile sprint cycle.
  • How to approach the responsibilities of managerial roles in organisations with self managing scrum teams.
  • Exerting influence and exhibiting leadership whilst avoiding issues of ego and 'pulling rank'.

It was clear in conversations that, whilst testing was an area that we had studied, talked about, debated and researched, these areas of 'not testing' were ones where we were very much exploring our own paths. Traditional test management approaches seem ill suited to the new demands of management in the primarily agile organisations in which we worked.

The problem with not testing

Whilst testing is a challenging activity, for some of us 'not testing' was presenting just as many problems. One of the things that I know was concerning a number of us was balancing up-to-date testing and product knowledge with performing a range of other duties. As an individual's level of responsibility increases, the time spent actively testing the product itself inevitably diminishes, and it is easy to lose some of the detailed knowledge that comes from interacting with the software.

One thing that I personally have found definitely does not work is kidding myself that I can carry on regardless and take on testing activities as I always have. Learning to accept that I'm not able to pick up a significant amount of hands-on testing whilst facing a wealth of other responsibilities has been a hard lesson at times. It is a sad truth that in recent years, when I have assigned myself sprint items to test, these have ended up being the stories most at risk in that sprint, because I have not been able to devote enough time to test them appropriately. Whilst I enjoy testing new features, the reality I have had to accept is that the demands of my other responsibilities mean that others in the team are better placed to perform that testing than I am.

Dealing with not testing

I won't pretend to be entirely winning the battle of not testing; however, I find that most of the other activities that I do perform contribute to my general product knowledge and can balance and complement the time spent not testing:

  • Attending elaboration and design meetings for as many features as possible
  • Reviewing acceptance criteria for user stories at the start, and the confidence levels against them at the end
  • Having regular discussions with testers on test approaches taken and reviewing charters

Additionally, some of the areas that I have become responsible for help to give different perspectives on the product:

  • Discussing the format and content of technical documentation reveals how we are presenting our product to different users
  • Scheduling update releases and compiling release notes helps me to maintain visibility of new features added across scrum teams
  • Time spent on support activities helps give an insight into how customers use the product, how they perceive it (and any workarounds that folks are using)

These all help; however, I think that as a company and product grow it is inevitable that individuals will be unable to maintain an in-depth understanding of all areas of the product. Something that I have had to get accustomed to is deferring to others whose expertise in specific areas of product functionality is much deeper than I can expect to maintain across the whole product.

The future

I came away with the feeling that I'm part of a generation who are having to provide their own answers to questions on managing within agile companies. We are seeing a wave of managers who haven't had agile imposed on their existing role structures; instead we are growing into managerial roles within newly maturing agile organisations which don't necessarily have a well-defined structure of management on which to base our goals and expectations. We are attempting to provide leadership both to those who have come from more formal cultures and to those who have only ever known work in agile teams, and who have very different expectations as a result. Consequently there is no reference structure of pre-defined roles on which to base our expected career paths. Some, like Rob, have taken on the mantle of managing development. Others, like myself, have the benefit of colleagues at the same level who are focused on programming activities; however, this does mean learning to work in a matrix management setup where the lines of responsibility are not always clear, given the mixed functions of the team units.

It was clear from the conversations at EuroSTAR that for many of us our perspective on testing, and on test management, was changing. Whether this was simply the result of individuals with similar roles, at similar points in their careers, gravitating towards each other, or the sign of something more fundamental, only time will tell. The conversations that I was involved in at the event were only a microcosm of the wider discussions and debates over testers and testing that were going on amongst the conference attendees.

If anything can be taken from the range of different management roles and responsibilities that the individuals I know have evolved to adopt, it is this: there is no clearly defined model for test management in modern agile companies. It is really up to the testers in those organisations to champion the testing effort in a way that works for them, and to build a testing infrastructure that fits the unique cultures of their respective organisations.
