Sunday 9 December 2012

Lessons Learned Testing in a Startup

This month I celebrated my sixth anniversary with my company. Whilst five years is a more traditional milestone, for me six feels like a more fundamental step as it takes me into a new stage of longevity, the six to ten year group. It seems like only yesterday that I was sat at my first Christmas party with twenty strangers wondering whether my gamble would pay off and the company would be successful. At this year's Christmas party I sat with many of the same colleagues, whom I would now consider friends, discussing how some of the biggest companies in the world are implementing and using our software. It has been an amazing journey so far and, whilst continued success is still not guaranteed, I have a lot more confidence in the long term success of the company now than I did when I joined. It seems, then, an opportune time to look back over the last few years at how we've grown from the early stages of a startup, through the first customer adoptions, and into the next phase of evolution as a company. In this post I highlight some lessons that I have picked up along the way. All are relevant to testing, although many have more general applicability for, as with many startup members, I've worn a few hats and been involved in many aspects of the growth of the company. I've included here both things that, with hindsight, I wish I had given greater consideration to, and things that I'm very pleased I made the effort to get right from the start:-

 

[Image: sapling]

 

Get your infrastructure right up front

A temptation in startup companies is to postpone infrastructure work until it is really needed, instead focusing on adding features and getting those early customers. It is easy to defer building a robust infrastructure, particularly on the testing side, by convincing yourself that you'll have more time to sort such things out once you've got some customers to pay the bills and please the investors. In my experience you only get busier as your customer base grows, and new priorities emerge on a daily basis. With this in mind it pays to get the infrastructure right to support you through the initial phases of customer adoption at least:-

  • Build appropriate versioning into your build system to allow for multiple releases (e.g. labelling builds with the svn revision number alone only works while you have a single branch); one possible labelling scheme is sketched after this list
  • Ensure that you build support for release branching and multiple supported software releases into any testing and bug management systems
  • Ensure that you keep your tests in the code control system along with the code to help with multiple version support
  • Consider the need for extending support to multiple operating systems and try to use generic technologies (Java, compiled code, Perl, generic shell) in your test harnesses rather than OS specific ones (e.g. Linux bash)
  • Ensure that your tools and processes can be scaled to multiple agile teams from the single team that you are likely to start with
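To illustrate the first point, here is a minimal sketch of branch-aware build labelling. It assumes a Subversion working copy, as we used at the time; the product name and the label format are illustrative conventions, not the scheme we actually used.

```python
# A sketch of branch-aware build labelling for a Subversion working copy.
# The label format (product-branch-revision) is an illustrative convention.
import subprocess

def svn_field(name):
    """Read a single field (e.g. 'Revision' or 'URL') from 'svn info'."""
    info = subprocess.check_output(["svn", "info"], text=True)
    for line in info.splitlines():
        if line.startswith(name + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(f"field {name!r} not found in svn info output")

def build_label(product="myproduct"):
    # Derive the branch name from the repository URL so that builds from
    # release branches are distinguishable from trunk builds.
    url = svn_field("URL")
    branch = "trunk" if url.endswith("/trunk") else url.rsplit("/", 1)[-1]
    return f"{product}-{branch}-r{svn_field('Revision')}"

if __name__ == "__main__":
    print(build_label())  # e.g. myproduct-release-2.1-r4711
```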

 

Don't build up a backlog of manual tests

As with ignoring infrastructure, another temptation in early startups is to forego the development of automated testing structures to concentrate on getting functionality out the door, instead relying on manual and/or developer led testing. Whilst this may be a practical approach with a small feature set, it soon becomes less so as the product grows and your manual testing is rightly focused around the new features.  Getting a set of automated checks in place around key feature points to detect changes in the existing feature set will pay dividends once the first customers are engaged and you want to deliver the second wave of features quickly to meet their demands.

Also, as the company grows and you want to bring more testers on, an existing suite of tests can help to act as a specification of the existing behaviour of the system. Without this, and in the absence of thorough documentation, the only specification that testers will have available is the feature set of the product itself, an inherently risky situation. I personally have encountered the situation when joining a maturing startup where there was no test automation and no product documentation and it was unclear exactly what some features were doing. The approach being taken for testing these areas was to run manual checks that they did the same as last time those checks were run.
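As a simple illustration of checks that pin down existing behaviour, here is a minimal sketch using pytest; apply_discount is a hypothetical stand-in for one of the product's key feature points, not real product code.

```python
# A sketch of automated checks that pin down existing behaviour, using
# pytest; apply_discount is a hypothetical stand-in for real product code.
import pytest

def apply_discount(price, customer_tier):
    # Placeholder for the real feature under test.
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(price * (1 - rates[customer_tier]), 2)

@pytest.mark.parametrize("price,tier,expected", [
    (100.00, "standard", 100.00),
    (100.00, "silver", 95.00),
    (100.00, "gold", 90.00),
])
def test_existing_discount_behaviour(price, tier, expected):
    # Each expectation documents current behaviour, so the suite doubles
    # as a specification for testers who join the company later.
    assert apply_discount(price, tier) == expected
```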

 

Prepare for growth in the team

When starting out, the testing contingent of your company is probably going to be small, possibly just one tester (or none at all). This will inevitably change with growth. A great approach that I have found to plan for future growth is to put a set of principles in place on key areas, such as your automation strategy, for new members to refer to. This will allow individuals to ensure that they are working consistently with other testers in the company, even when their activities are being performed in isolation.

It is also a good idea to document your bespoke tools and processes thoroughly. You and your early colleagues will learn your tools inside out as they are developed, but for new starters who don't have this experience your tools can be a confusing area. This is something that I wish I had done better at my company, as documenting tools retrospectively is difficult to prioritise and achieve. It is a shame when you've put in the effort to develop powerful and flexible tools to automate testing against the system if future members of the team won't know how best to use them.
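One cheap way to keep at least the usage documentation from going stale is to build it into the tools themselves. A minimal sketch, assuming a Python command line tool; the tool name, options and suite names shown are hypothetical:

```python
# A sketch of a self-documenting bespoke test tool: usage information
# lives in the tool itself and is available to any new starter via --help.
import argparse

def main():
    parser = argparse.ArgumentParser(
        prog="run-regression",
        description="Run a regression suite against a build. New starters "
                    "can discover usage via --help rather than word of mouth.")
    parser.add_argument("build_label",
                        help="build to test, e.g. myproduct-trunk-r4711")
    parser.add_argument("--suite", default="smoke",
                        choices=["smoke", "full", "scalability"],
                        help="which suite to run (default: smoke)")
    args = parser.parse_args()
    print(f"Running {args.suite} suite against {args.build_label}")

if __name__ == "__main__":
    main()
```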

 

Prepare for growth in the product

When starting out with a product development the focus is likely to be on moderately sized uses or implementations. This is natural, as larger scale adoption seems a long way off when starting out and larger customers often want evidence of successful smaller scale use before committing. If you are initially successful then demand for increased capacity can come very quickly and catch you out if the product does not scale. Testers can help avoid this situation by identifying and raising scalability issues. In the early days at RainStor a lot of the focus was on querying a few large data partitions for small scale use cases. I was concerned with the lack of testing on multiple partitions, so I raised the issue in the scrum and performed some exploratory testing around join query scalability, exposing serious scaling issues. Getting these addressed helped to improve the performance of querying larger sets of results, which proved timely when our first large scale customers came on board later that year.
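The kind of exploratory probe involved can be as simple as timing the same query as the number of partitions grows and flagging anything worse than linear. A minimal sketch; run_query is a hypothetical harness hook, not a real RainStor interface:

```python
# A sketch of an exploratory scalability probe: time the same query as the
# number of partitions grows and flag worse-than-linear growth. run_query
# is a hypothetical stand-in for a real test harness hook.
import time

def run_query(partition_count):
    # Placeholder: issue the same join query against data split across
    # 'partition_count' partitions.
    time.sleep(0.01 * partition_count)  # stand-in for real query work

def probe(max_partitions=64):
    baseline = None
    n = 1
    while n <= max_partitions:
        start = time.perf_counter()
        run_query(n)
        elapsed = time.perf_counter() - start
        if baseline is None:
            baseline = elapsed
        flag = "  <-- worse than linear" if elapsed > baseline * n else ""
        print(f"partitions={n:3d}  elapsed={elapsed:.3f}s{flag}")
        n *= 2

if __name__ == "__main__":
    probe()
```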

The development teams can also help here by ensuring that the early feature designs are structured to allow for scalability in the future. If this is not done then you can end up having to rework entire feature sets when the size of your implementation grows. I've experienced cases where we've ended up with two feature sets that do exactly the same thing: the original one which we developed for early customers, and a new version for larger implementations to overcome scalability blocks in the earlier design. This is confusing for the customer and increases risk and testing effort.

 

Prepare to Patch

Yes, we are software testers and admitting that you have had to patch the software is like admitting failure, but for most early developments it is a fact of life. One of the main challenges when testing in a startup is that you don't always know what your customers are going to do with the system. If your requirements gathering is working, and agile methods certainly help with this, then you'll know what they want to do. What you don't know, however, is the wide range of sometimes seemingly irrational things that people will expect your system to do. These will often only emerge in the context of active use, yet will merit hasty resolution. Whether running an online SaaS type system or a more traditional installed application such as the one I work on, I think a very good rule of thumb is to have the mechanism in place to upgrade to your second release version before you go live with your first.
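What that mechanism looks like will vary, but even a very small versioned upgrade path, written before the first release ships, beats improvising one under pressure. A minimal sketch; the version numbers and migration steps are hypothetical:

```python
# A sketch of a versioned upgrade path; each entry maps an installed
# version to its successor and the migration step that gets you there.
MIGRATIONS = {
    "1.0": ("1.1", lambda: print("1.0 -> 1.1: fix config schema")),
    "1.1": ("1.2", lambda: print("1.1 -> 1.2: rebuild indexes")),
}

def upgrade(installed, target):
    """Apply migration steps in order until the target version is reached."""
    while installed != target:
        if installed not in MIGRATIONS:
            raise RuntimeError(f"no upgrade path from {installed} to {target}")
        next_version, step = MIGRATIONS[installed]
        step()
        installed = next_version
    print(f"now at version {installed}")

if __name__ == "__main__":
    upgrade("1.0", "1.2")
```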

 

Consider Future Stakeholders

Startups are characterised by having a few key personnel who carry a lot of expertise and often wear multiple hats. It is not uncommon to find senior developers and architects doubling up to take on sales channel activities or implementation work in small startups. As a result, in the early stages, these roles require little support and assistance in installing and using the system. If the company enjoys successful growth, however, there will typically follow an influx of specialist individuals to fill those roles, who will be operating without the background of having developed the system. Assuming that they will seamlessly comprehend all of the idiosyncrasies of the product in the same way is unrealistic. I think a great idea is to include future stakeholder roles in your testing personas as a way of preparing your product for the changes in personnel that will inevitably come as the company grows.
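Even a lightweight encoding of those roles can keep them visible when planning test sessions. A minimal sketch; the roles and attributes listed are illustrative assumptions, not a real persona set:

```python
# A sketch of encoding future stakeholder roles as testing personas;
# the roles and attributes are illustrative assumptions.
PERSONAS = [
    {"role": "founding architect", "knows_product_internals": True},
    {"role": "new support engineer", "knows_product_internals": False},
    {"role": "partner sales engineer", "knows_product_internals": False},
]

def exploratory_charters(personas):
    # Prioritise sessions for personas without a development background,
    # since they will hit undocumented idiosyncrasies first.
    for p in personas:
        if not p["knows_product_internals"]:
            yield (f"Install and configure the product as a {p['role']}, "
                   "using only the documentation")

for charter in exploratory_charters(PERSONAS):
    print(charter)
```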

 

Design temporary things to be permanent

With a small and reactive company chasing a market you cannot predict the directions you will be pulled in the future. Don't assume that the 'temporary' structure you put in place won't still be in place in 5 years' time. Some of the test data that I created to test ODBC connectivity 6 years ago is still run today as a nightly test and used extensively for exploratory testing. Given that you never know which of the items that you implement will get revisited at some point in the future, it pays to design things to last. I learned this lesson the hard way from working on integrating a supplier's product two years ago. I deferred putting in place an appropriate mechanism for merging their releases into our continuous integration. This happens so infrequently that it is hard to prioritise revisiting it, however when we do have to do it, it is a cause of error and unnecessary manual effort. As we found in this case, the overall cost in time from repeatedly working with a poorly structured utility is inevitably greater than the time that we would have spent had we invested in getting it right from the start. Every time I have to follow this process I regret not spending more time on it when our focus was on that area.
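Had we scripted the import from day one, even a small repeatable routine along these lines would have removed most of the manual error. The archive name, vendor directory layout and use of svn here are illustrative assumptions, not our actual setup:

```python
# A sketch of a repeatable supplier-release import into version control;
# paths, archive names and the vendor branch layout are hypothetical.
import hashlib
import subprocess
import tarfile

def import_supplier_drop(archive, vendor_dir="vendor/supplier-current"):
    # Record a checksum so it is always clear which drop was integrated.
    with open(archive, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with tarfile.open(archive) as tar:
        tar.extractall(vendor_dir)
    subprocess.check_call(["svn", "add", "--force", vendor_dir])
    subprocess.check_call(["svn", "commit", vendor_dir, "-m",
                           f"Import supplier drop {archive} (sha256 {digest})"])

if __name__ == "__main__":
    import_supplier_drop("supplier-2.4.1.tar.gz")
```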

 

These are just a few of the lessons that I've picked up from working specifically with a startup. If I could go back and give myself some advice, the points I mention here would certainly have helped me to avoid some of the pitfalls that I have faced. In another six years I may well be writing about tips for taking testing through the next phase of development from a small company to a market leader, or I may have moved on and have a new set of lessons from another context. In the meantime here's to the last six years and I hope you find some useful tips in the points presented here.

halperinko said...

Preparing for growth can be done almost for free, but many small and even larger organizations ignore that.
Using free recording tools such as InstantDemo or even qTrace during presentations can save repeating the same presentations when the team grows.
We normally keep a "HowTo" directory too, with short textual explanations of the main activities,
easy to be maintained by anyone.

Setting up a free ALM in advance saves further work of seeking out requirements and test documentation.
It does not mean you need to elaborate every test; just a test idea header can be enough (with/without the purpose of the test).
