Wednesday, 4 February 2015

The Evolution of Tools

In my last post I mentioned some exciting conversations in the pipeline with the testing team of our new owners. I had one such conversation last week, in a meeting where some of their team demonstrated a range of testing tools that they had developed in-house and made available for us to use.

Over the course of a one-hour web meeting, two very pleasant chaps from San Diego talked us through three of the tools that they had developed in-house for database testing tasks. Particularly interesting for me was the fact that, for a couple of these tools, we had equivalent in-house developed tools of our own, and I was keen to compare the similarities and differences between theirs and ours.

  • SQL Generation

    I spent a chunk of my time last year looking into random SQL data generation. As a good excuse to teach myself Ruby at the same time, I developed a prototype SQL generation utility that could produce a range of SQL syntaxes based on a model defined in input files. I was therefore particularly interested in the SQL generation tools that the team had developed. As the San Diego team talked through their utility it was clear that, predictably, their tool was at a much higher level of maturity than mine. At the same time, I started to notice some very familiar structures in the configuration files. The way that functions were defined, the parameterisation of specific variables and the definition of the data schema all bore a close resemblance to those in the prototype that I had worked on (see the sketch after this list). I recognised structures which reflected what I had done myself to solve the problems that I'd faced in attempting to generate correct, relevant SQL queries.

  • Test harness

    As we moved on to discussing a more generic test harness, the parallels with our own testing harness were even more apparent. The way that workflows, parallel scheduling and multi-machine tasks were defined all had direct parallels in our in-house harnesses. There were some differences in the mechanisms used, however the underlying structural elements were all present and the nature of the solutions overlapped extensively. The location and means of definition of test data, metadata and scheduling mechanisms across servers were similar solutions to challenges that we'd tackled as our own harness evolved in line with our product over the past few years. I found myself standing at the screen, gesturing to my colleagues at the elements of the harness structure which mapped directly to our own.
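
The config-driven approach is easier to picture with a concrete example. The sketch below is a minimal and entirely hypothetical Python illustration of the kind of model both utilities shared - a schema definition, a set of parameterised functions with the column types they accept, and a generator that walks the model to produce type-correct queries. My own prototype was written in Ruby, neither tool's actual configuration is reproduced here, and every name in the sketch is invented.

    import random

    # Hypothetical model of the kind of configuration described above: a schema
    # definition plus a set of parameterised SQL functions. All names are
    # invented for illustration and taken from neither tool.
    SCHEMA = {
        "orders":    {"id": "INT", "customer_id": "INT", "total": "DECIMAL", "placed_on": "DATE"},
        "customers": {"id": "INT", "name": "VARCHAR", "region": "VARCHAR"},
    }

    # Parameterised function definitions: each SQL function is listed with the
    # column types it may legally be applied to.
    FUNCTIONS = {
        "SUM":   ["INT", "DECIMAL"],
        "AVG":   ["INT", "DECIMAL"],
        "COUNT": ["INT", "DECIMAL", "VARCHAR", "DATE"],
        "UPPER": ["VARCHAR"],
    }

    def random_select(rng):
        """Generate one syntactically and type-correct SELECT against the model."""
        table = rng.choice(sorted(SCHEMA))
        columns = SCHEMA[table]

        # Pick a column, then a function whose accepted types include that
        # column's type, so the query is semantically valid as well as parseable.
        column, col_type = rng.choice(sorted(columns.items()))
        func = rng.choice([f for f, types in sorted(FUNCTIONS.items()) if col_type in types])

        select_item = "{0}({1})".format(func, column)
        group_by = ""
        if func != "UPPER":  # the aggregate functions get a GROUP BY on another column
            other = rng.choice([c for c in sorted(columns) if c != column])
            select_item = "{0}, {1}".format(other, select_item)
            group_by = " GROUP BY {0}".format(other)

        return "SELECT {0} FROM {1}{2};".format(select_item, table, group_by)

    if __name__ == "__main__":
        rng = random.Random(42)  # seeded so that any failing query can be reproduced
        for _ in range(5):
            print(random_select(rng))

Seeding the generator, as in the last few lines, reflects a practical point common to any random generation approach: a generated query is only useful as a test case if the sequence that produced it can be reproduced.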

Convergent evolution

My conference talk at EuroSTAR 2011 is testament to the fact that I am fond of applying evolutionary principles to the development of tools and processes. The fact that these very similar tools had been developed by independent teams in separate organisations reminded me of the idea of homoplasy - a process sometimes known as convergent evolution, in which separate species evolve highly similar characteristics, not present in a common ancestor, to fill the same evolutionary roles.

For example, many Australian marsupials bear a remarkable resemblance to placental mammals from other parts of the world, despite having evolved completely independently.

There are many other examples, such as the similarities between cacti and euphorbias, or between mantidflies and mantises, whereby evolution arrives at the same or similar solutions to environmental problems.

Much as with evolution in nature, tools and processes evolve based on needs that arise within organisations. Working in similar product markets, it is understandable that our tools have developed independently to meet the same needs. What struck me when comparing my tools with those being demonstrated was the extent of the similarities - there were many familiar structures that had clearly been implemented to address the same specific testing challenges that I or my team had previously encountered.

Time wasted?

The examples here are representative of what I have seen elsewhere - I imagine that it must be a common phenomenon. The 'convergent evolution' of tools must be going on throughout development organisations around the world. Taking SQL generation as an example, I know from my research on the subject that the teams behind Microsoft SQL Server, DB2 and MySQL have all developed similar utilities.

One obvious question arises here: is this time wasted? If parallel efforts are going into creating equivalent tools, would it not make sense to collaborate on them? Surely it would be more efficient for these teams to combine their resources and create common tools rather than duplicating effort? Or perhaps that's a rather naive perspective.

  • Tools are valuable assets. Microsoft, for one, have never released their RAGS tool outside of their organisation. I suppose the theory here is that releasing the tool would hand an advantage to others developing competitive products who haven't invested the time in developing such tools themselves. A good tool can provide a competitive edge in just the same way as a well designed feature set or a talented team. For an open standard such as SQL, Microsoft have little incentive to release tools that would help others develop competing products. By way of an interesting comparison, for their own ODBC standard Microsoft have made tools freely available - perhaps the improved adoption of their own standard provides sufficient incentive to give tools away in that situation.

  • Joint ventures are prone to failure. As part of my higher education I studied joint ventures between organisations, with a focus on the high failure rate of this kind of collaborative project. Based on the examples we looked at, they rarely deliver to the satisfaction of both contributors. Common reasons for failure include an inability to relinquish full control of the project, and an imbalance between the level of input and the benefit gained by each contributing party. Given the relatively small and well encapsulated nature of tool development, it makes more sense to take it on in-house, maintaining ownership and avoiding the pitfalls of joint development.

  • Ownership and control are powerful motivators. Another option for avoiding parallel effort on tools is for one organisation to develop and maintain a tool commercially for others to licence and use. Whilst SQL generation may not have sufficient commercial application to be attractive here, I've seen many scenarios where teams develop their own tools even when there are good commercial alternatives. One incentive for such an approach can be that only a simpler toolset is required, so the cost of expensive commercial options is not justified. Another likely reason is that having control of your tools is a compelling advantage; it certainly is for me. One of the main reasons for developing our own tools has been the knowledge that we can quickly add new features or interfaces as the need arises, rather than waiting on a commercial supplier with conflicting priorities.

A positive outcome

I came away from the session feeling rather buoyant. On one hand I was pleased that in-house tools development received such a strong focus in my new organisation. The team that we spoke to were clearly very proud of the tools that they had developed, and rightly so. A strong culture of developing appropriate tools to assist your testing, rather than trying to shoe-horn your testing to fit standard testing tools, is an approach that I have believed in for some time. Whilst the approaches taken by the two teams were somewhat different, it was clear that we shared a common belief that the ability and flexibility to develop your own tools is a powerful testing asset.

More importantly, I felt proud of what my team and I had achieved with our tools. With a fraction of the manpower of our new owners we had evolved tools which compared favourably with those of a much larger and better resourced organisation. Whilst there was clearly enough overlap that moving to some of the new tools would make sense over continued development of our own, in some cases the strength of our own tools meant that this was not a foregone conclusion. As I wrote about in my post on the Facebook Effect, we're never quite sure how our work stands up in the market, given the positive angle that most folks take when discussing their work on social media. Given this chance of a more honest and open comparison, I was justifiably pleased with how well the work of my team stood up.


Tuesday, 20 January 2015

Variations On A Curve

On joining a new company (see my previous post) one of the most interesting activities for me is learning about the differences between their approaches and our own. Working in one organisation, particularly for a long period, leaves you vulnerable to the institutionalisation of your existing testing practices and ways of thinking. No matter how much we interact with a wider community, our learning will inevitably be interpreted relative to our own thinking, and the presence of the Facebook Effect can inhibit our openness to learning in public forums. Merging with another company within the walls of your own office provides a unique opportunity to investigate the differences between how you and another organisation approach testing. As part of the acquired company, I'll be honest and say there is an inclination towards guardedness about our own processes and how these are perceived by the new, larger company. The reverse does not apply: as part of an entirely new organisation I'm free, if not obliged, to learn as much as I can about their culture and processes.

In the early stages I've not yet had the opportunity to speak to many of the testers, though I have some exciting conversations pending on the tools and resources that we can access. One of the things I have done is browse through a lot of the testing material that is available in online training and documentation. Whilst reading through some of this large body of training material I found this old gem, or something looking very much like it: the familiar curve showing the cost of fixing a defect rising exponentially through each phase from requirements to production.

Now, I'm under no illusion that those responsible for testing place any credence in this at all; the curve appeared in some externally sourced content rather than anything prepared in-house. It is, however, interesting how these things can propagate and persist away from public scrutiny.

I remember seeing the curve many years ago in some promotional material. Working in a staged waterfall process at the time, the image appealed to me. My biggest bugbears at the time were that testers rarely got complete and unchanging requirements, and that we weren't involved early enough in the development process. The curve therefore fed my confirmation bias: it seemed convincing because I wanted to believe it. It is hardly surprising, therefore, that the curve enjoyed such popularity among software testers and is still in circulation today.

Some hidden value?

Since I first encountered it, the curve has been widely criticised as having limited and very specific applicability. Given that it originated in the 1970s this is hardly surprising. It has been validly argued that XP and Agile practices have changed the relationships between specification, coding and testing so as to significantly flatten the curve; Scott Ambler gives good coverage of this in this essay. In fact, the model is now sufficiently redundant that the global head of testing at a major investment bank received some criticism for using the curve as reference material in a talk at a testing conference.

I'm not going to dwell on the limitations of the curve here; that ground has been well covered. Suffice it to say that there are many scenarios in which a defect introduced in the design can be resolved quickly and cheaply through development and testing activities, and even in a production system. The increasing success of 'testing in live' approaches in SaaS implementations is testament to this. The closer, more concurrent working relationship between coding and testing also reduces the likelihood and impact of exponential cost increases between these two activities.

Whilst seriously flawed, there is an important message implicit in the curve which I think actually suffers from being undermined by the problems with the original model. The greatest flaw for me is that it targets defects. I believe that defects are a poor target for a model designed to highlight the increasing cost of change as software matures. Defects are wide ranging in scope and not necessarily so tightly coupled to the design that they can't be easily resolved later; Michael Bolton does an excellent job of providing counter-examples to the curve here. There are, however, other characteristics of software which are tied more tightly to the intrinsic architecture, such that changing them becomes more costly with increasing commitment to a specific software design.

If we consider not defects per se, but rather any property the changing of which necessitates a change to the core application design, then we would expect an increasing cost to be associated with changing that design as we progress through development and release activities. Commitments are made to the existing design - code, documentation, test harnesses, customer workflows - all of which carry a cost if the design later has to change. In some cases I've experienced, it has been necessary to significantly rework a flawed design whilst maintaining support for the old design, and additionally to create an upgrade path from one to the other. Agile environments, whilst less exposed, are not immune to this kind of problem; any development process can suffer from missed requirements which render an existing design redundant. In this older post I referenced a couple of examples where a 'breaking the model' approach during elaboration avoided expensive rework later, however this is not infallible, and as your customer base grows, so does the risk of missing an important use case.

Beyond raw design flaws, I have found myself thinking of some amusing alternatives that resonate more closely with my own experience. Three scenarios in particular spring to mind when I recall issues where design changes were required, or at least considered, that would have been significantly cheaper had they been thought of up front.

The cost of intrinsic testability curve

Testability is one. As I wrote about in this post, forgetting to include intrinsic testability characteristics in a software design can be costly. In my experience, if testability is not built into the original software design then it is extremely difficult to prioritise a redesign purely on the basis of adding those characteristics retrospectively, and justifying such a redesign becomes progressively harder as development proceeds; I describe in this post the challenge that I faced in trying to get testability features added to a product after the fact. So I'd suggest that the cost of adding intrinsic testability rises along a curve at least as steep as the original.
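
By way of a concrete, entirely hypothetical illustration of the kind of characteristic I mean, the Python sketch below shows a single testability seam - an injectable clock - which costs almost nothing when designed in, and is painful to retrofit once every caller and every test depends on the hard-coded alternative. The class and names are invented for the example.

    from datetime import datetime, timedelta

    # A hypothetical example of one intrinsic testability characteristic:
    # the current time is injected rather than read directly inside the logic.
    # Designed in up front this costs one constructor parameter; retrofitted
    # later it means changing every caller and every time-dependent test.
    class RetentionPolicy(object):
        def __init__(self, max_age_days, clock=datetime.utcnow):
            self.max_age = timedelta(days=max_age_days)
            self.clock = clock  # injectable seam: tests can supply a fixed 'now'

        def is_expired(self, record_timestamp):
            return self.clock() - record_timestamp > self.max_age

    # In a test we can pin 'now' and check the boundary exactly, with no sleeps
    # and no fiddling with the system clock.
    fixed_now = datetime(2015, 2, 4, 12, 0, 0)
    policy = RetentionPolicy(max_age_days=30, clock=lambda: fixed_now)
    assert policy.is_expired(fixed_now - timedelta(days=31))
    assert not policy.is_expired(fixed_now - timedelta(days=29))

Retrofitting even that one parameter across a mature codebase is exactly the kind of change that becomes harder to justify the later it is proposed.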

The cost of changing something in your public API curve

API design is another classic. While fixing bugs that affect only user interfaces can be low impact, changing the design of an API that someone has coded against can cause a lot of frustration. As a customer of other vendors' APIs, we have suffered recently from having to repeatedly rework the code of our solutions due to breaking changes in minor versions of those APIs. Speaking from bitter experience, if you have made breaking changes to your API then you're probably not your customers' favourite vendor right now.
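
To make that frustration concrete, here is a small and entirely hypothetical Python sketch - the vendor classes and method names are invented, not drawn from any API we actually use - of how a notionally minor release can break client code written against the previous signature.

    # A hypothetical sketch of a breaking change in a 'minor' release; the
    # vendor classes and method names are invented for illustration.
    class VendorApiV1_2(object):
        """The version our integration code was originally written against."""
        def export_records(self, start_date, end_date, fmt="csv"):
            return "{0} export from {1} to {2}".format(fmt, start_date, end_date)

    class VendorApiV1_3(object):
        """A notionally minor release that renames and reorders the parameters."""
        def export_records(self, date_range, output_format):
            return "{0} export for {1}".format(output_format, date_range)

    def fetch_archive(api):
        # Client code written against v1.2. Against v1.3 this raises a TypeError
        # because the 'fmt' keyword no longer exists.
        return api.export_records("2015-01-01", "2015-01-31", fmt="csv")

    if __name__ == "__main__":
        print(fetch_archive(VendorApiV1_2()))      # works as it always has
        try:
            fetch_archive(VendorApiV1_3())         # the 'minor' upgrade breaks us
        except TypeError as error:
            print("v1.3 broke our integration: {0}".format(error))

Every release like this forces the customer to rework and retest their own integration code, which is precisely the rising cost this curve is meant to capture.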

The cost of fixing bugs the customer doesn't care about curve

Of course, lest we forget, there is always the chance that the stuff you thought was important matters less to the customer. The cost curve for fixing bugs in a feature drops significantly if you realise that no-one is interested in the fix. Some features seem to be particular favourites of product owners or marketing teams yet receive little production use. Testers are obliged to raise issues with them; in practice, however, once the software is released those issues are unlikely to be prioritised by the customer.

Please note that these curves are entirely frivolous and have no basis in empirical evidence. That said, I'd put money on them being more representative than the original in the organisations where I've worked. And given that they'll appeal to many testers' own opinions, perhaps through the power of confirmation bias they may just be showing up in a keynote near you a few years from now.


Friday, 19 December 2014

New Beginnings, Old Desk

This week it was announced publicly that RainStor has been acquired by Teradata. For anyone not familiar with the name, Teradata is a US-based company and one of the largest data warehouse companies in the world, with over 10,000 employees - a marked contrast to the roughly 50 employees of RainStor.

Having worked for RainStor for 8 years (I celebrated my eighth anniversary at the start of December), this is a change that will have a huge impact on me and my colleagues. At the same time I am confident that, for the most part, the changes will be positive.

8 Years a startup

RainStor has been a fantastic small company to be a part of. Through the whole time that I've been lucky enough to be on board we've managed to maintain the vibrant intensity of a startup company. During this sustained period of intense activity, the team ethic has been amazing. It almost became predictable that the 'teamwork' subject in retrospective meetings would be full of happy yellow post-it notes praising the great attitude shared amongst the team.

Being a perennial startup has its disadvantages though. A small company inevitably has to 'chase the ball' in terms of the market, with the result that the focus of testing can be spread too thinly across the capabilities and topologies supported to meet each customer's needs. I'm optimistic that being part of a company with a wider portfolio of products will allow us to focus on fewer topologies and really target our core strengths.

There have been ups and downs in fortune over the years. The credit crunch was a troubled time for organisations relying on funding, and the fact that we got through it was testament to the strong belief in the product and the team. It seemed scant consolation, though, for the colleagues who lost their jobs. Counter that with the jubilant times when we won our first deals with household-name companies the likes of which we'd previously only dreamed of, and you are reminded that those working in a small software company can typically expect to face a turbulent spectrum of emotions.

So what now?

I've every reason to be positive about the future of RainStor. From my conversations with the new owners they seem to have an excellent approach to the needs of research and development teams. The efforts that have been undertaken to provide an open and flexible development environment give a lot of confidence. Combine this with an ethic of trying to minimise the impact that the acquisition has on our development processes and I'm optimistic that the approaches that we've evolved over time to manage testing of our big data product in an agile way will be protected and encouraged.

The resources available to me in running a testing operation are way beyond what we could previously access. As well as access to testing hardware at a far greater scale, I'm also going to relish interacting with others who have a wealth of experience in testing and supporting big data products. I know that my new company have been in the business for a long time and will have a well developed set of tools and techniques for testing data products that I'm looking forward to pinching, sorry, learning from.

What about me?

Whilst I've known about the change for some time, it is only since this week's public announcement that I've realised how long it has been since I worked for a large company. I've spent well over 14 years in small companies or working as a contractor, so the change is going to take some getting used to. I love the uncertainty and autonomy of working for small companies, where every problem is shared and you never know what you'll have to take on next. As a smaller part of a large product portfolio I imagine we'll sacrifice some of that 'edge of the seat' excitement for more stability and cleaner boundaries.

A change such as this, particularly at this time of year, inclines one towards retrospection over the events leading up to it, and the possibilities for the future. I'm both proud of what I've achieved at RainStor the small company, and regretful about the areas where I know I could have achieved more. Nothing will bring back those missed opportunities, however the fact that, with just a handful of individuals, we've written, tested, documented and supported a big data database to the point of it being used in many of the largest telecommunications and banking companies in the world leaves me with very few regrets. I hope that as we grow into our new home I can take the chance to work with my colleagues in delivering more of the amazing work that they are capable of.

So that's it: my days of testing in a startup are well and truly over for the foreseeable future. I obviously don't know for certain what's to come. One thing I am confident of is that we couldn't have found a better home. As well as having a great attitude to development, my new company also have an excellent ethical reputation, which at least mitigates some of the uncertainty of not personally knowing who is running the company. I am already enjoying building relationships with some excellent people from our new owners.

I imagine that few folks reading this will be ending the year with quite as much upheaval, but I'm sure some of you are looking forward to new jobs or new opportunities, or experiencing sadness at leaving cherished teams. Whatever your position, thanks for reading this post, and thanks to those who've followed this blog over the last few years; I'm sure I'll have some interesting new experiences to write about over the coming months. I hope that, if you take some moments in the coming days to look back at the past year and forward to what is to come, you have as many reasons to be proud and excited as I do.

https://www.flickr.com/photos/stignygaard/2151056741

