Friday, 19 December 2014

New Beginnings, Old Desk

This week it was announced publicly that RainStor has been acquired by Teradata. For anyone not familiar with the name, Teradata is a US-based company and one of the largest data warehouse companies in the world, with over 10,000 employees - a marked contrast to RainStor's roughly 50.

Having worked for RainStor for eight years (I celebrated my eighth anniversary at the start of December), I know this is a change that will have a huge impact on me and my colleagues. At the same time I am confident that, for the most part, the changes will be positive.

8 Years a Startup

RainStor has been a fantastic small company to be a part of. Through the whole time that I've been lucky enough to be on board we've managed to maintain the vibrant intensity of a startup company. During this sustained period of intense activity, the team ethic has been amazing. It almost became predictable that the 'teamwork' subject in retrospective meetings would be full of happy yellow post-it notes praising the great attitude shared amongst the team.

Being a perennial startup has its disadvantages though. A small company inevitably has to 'chase the ball' in terms of market, with the result that the focus of testing can be spread too thinly across the capabilities and topologies supported to meet each customer's needs. I'm optimistic that a company with a wider portfolio of products will allow us to focus on fewer topologies and really target our core strengths.

There have been ups and downs in fortune over the years. The credit crunch was a troubled time for organisations relying on funding, and the fact that we got through it was a testament to the strong belief in the product and the team. It seemed scant consolation though for the colleagues who lost their jobs. Counter that with the jubilant times when we won our first deals with household name companies the likes of which we'd previously only dreamed of, and you are reminded of the turbulent spectrum of emotions that typifies working in a small software company.

So what now?

I've every reason to be positive about the future of RainStor. From my conversations with the new owners they seem to have an excellent approach to the needs of research and development teams. The efforts that have been undertaken to provide an open and flexible development environment give a lot of confidence. Combine this with an ethic of trying to minimise the impact that the acquisition has on our development processes and I'm optimistic that the approaches that we've evolved over time to manage testing of our big data product in an agile way will be protected and encouraged.

The resources available to me in running a testing operation are way beyond what we could previously access. As well as access to a greater scale of testing hardware, I'm also going to relish interacting with others who have a wealth of experience in testing and supporting big data products. I know that my new company have been in the business for a long time and will have a well-developed set of tools and techniques for testing data products that I'm looking forward to learning from.

What about me?

Whilst I've known about the change for some time, it is only since this week's public announcement that I've realised how long it has been since I worked for a large company. I've spent well over 14 years in small companies, or working as a contractor, so the change is going to take some getting used to. I love the uncertainty and autonomy of working for small companies, where every problem is shared and you never know what you'll have to take on next. As a smaller part of a large product portfolio I imagine we'll sacrifice some of that 'edge of the seat' excitement for more stability and cleaner boundaries.

A change such as this, particularly at this time of year, inclines one towards retrospection over the events leading up to it, and the possibilities for the future. I'm both proud of what I've achieved at RainStor the small company, and regretful of the areas where I know I could have achieved more. Nothing will bring back those missed opportunities; however, the fact that, with just a handful of individuals, we've written, tested, documented and supported a big data database to the point of it being used in many of the largest telecommunications and banking companies in the world leaves me with very few regrets. I hope that as we grow into our new home I can take the chance to work with my colleagues in delivering more of the amazing work that they are capable of.

So that's it: my days of testing in a startup are well and truly over for the foreseeable future. I obviously don't know for certain what's to come. One thing I am confident in is that we couldn't have found a better home. As well as having a great attitude to development, my new company also have an excellent ethical reputation, which at least mitigates some of the uncertainty of not personally knowing who is running the company. I am already enjoying building relationships with some excellent people from our new owners.

I imagine that few folks reading this will be ending the year with quite as much upheaval, but I'm sure some are looking forward to new jobs or new opportunities, or experiencing sadness at leaving cherished teams. Whatever your position, thanks for reading this post, and thanks to those who've followed this blog through the last few years. I'm sure I'll have some interesting new experiences to write about over the coming months. I hope that, if you take some moments in the coming days to look at the past years and what is to come, you have as many reasons to be proud and excited as I do.

Image: https://www.flickr.com/photos/stignygaard/2151056741

Wednesday, 10 December 2014

Not Talking About Testing, and Talking About Not Testing

I recently spent three very pleasurable days at the EuroSTAR conference in Dublin. I was genuinely impressed with the range and quality of talks and activities on offer. For me the conference had taken great strides forward since I last attended in 2011, and congratulations have to go to Paul Gerrard and the track committee for putting together an excellent event.

As with all conferences, while the tracks were really interesting, I found that I personally took away more from the social and informal interactions around the event. One of the subjects that came up for discussion more than once was the fact that, for many of the permanently employed folks attending, our roles had changed significantly in recent years. For some of us, our remits had extended beyond testing to include other activities such as technical support, development or documentation. For others, the level of responsibility had shifted more towards management and decision making, and away from testing activities that involve interfacing directly with our products.

The new challenges presented by these new aspects of our roles dominated much of the conversation over the pub and dinner table; for example, Alex Schladebeck mentions in this post exactly such a conversation between Alex, Rob Lambert, Kristoffer Nordström and me. Subjects discussed during these interactions included:

  • How to motivate testers in flat organisational hierarchies.
  • How to advance the testing effort and introduce new technologies and techniques, sometimes in the face of demands to deliver new features.
  • Making difficult decisions when there are no clear right answers.
  • Balancing the demands of testing when you have other responsibilities.
  • How to ensure that areas of responsibility are tackled and customers' needs met without putting too much pressure on individuals or teams. In my case this can include how to fit maintenance releases for existing products into the agile sprint cycle.
  • How to approach the responsibilities of managerial roles in organisations with self-managing scrum teams.
  • Exerting influence and exhibiting leadership whilst avoiding issues of ego and 'pulling rank'.

It was clear in conversations that, whilst testing was an area that we had studied, talked about, debated and researched, these areas of 'not testing' were ones where we were very much exploring our own paths. Traditional test management approaches seemed ill-suited to the new demands of management in the primarily agile organisations in which we worked.

The problem with not testing

Whilst testing is a challenging activity, for some of us 'not testing' was presenting just as many problems. One of the things that I know was concerning a number of us was balancing up-to-date testing and product knowledge with performing a range of other duties. As an individual's level of responsibility increases, the time spent actively testing the product itself inevitably diminishes, and it is easy to lose some of the detailed knowledge that comes from interacting with the software.

One thing that I personally have found definitely does not work is kidding myself that I can carry on regardless and take on testing activities as I always have. Learning to accept that I'm not able to pick up a significant amount of hands-on testing whilst facing a wealth of other responsibilities has been a hard lesson at times. It is a sad truth that in recent years, when I have assigned myself sprint items to test, these have ended up being the stories most at risk in that sprint, because I have not been able to devote enough time to testing them appropriately. Whilst I enjoy testing new features, the reality that I have had to accept is that the demands of my other responsibilities mean that others in the team are better placed to perform that testing than I am.

Dealing with not testing

I won't pretend to be entirely winning the battle of not testing; however, I find that most of the other activities that I perform contribute to my general product knowledge and can balance and complement the time spent not testing:

  • Attending elaboration and design meetings for as many features as possible
  • Reviewing user stories' acceptance criteria at the start, and the confidence levels against those criteria at the end
  • Having regular discussions with testers on test approaches taken and reviewing charters

Additionally, some of the areas that I have become responsible for help to give different perspectives on the product:

  • Discussing the format and content of technical documentation reveals how we are presenting our product to different users
  • Scheduling update releases and compiling release notes helps me to maintain visibility of new features added across scrum teams
  • Time spent on support activities helps give an insight into how customers use the product, how they perceive it (and any workarounds that folks are using)

These all help; however, I think that as a company and product grow it is inevitable that individuals will be unable to maintain an in-depth understanding of all areas of the product. Something that I have had to get accustomed to is deferring to others whose expertise in specific areas of product functionality is much deeper than I can expect to maintain across the whole product.

The future

I came away with the feeling that I'm part of a generation who are having to provide their own answers to questions on managing within agile companies. We are not a wave of individuals who have had agile imposed on our existing role structures. Instead we are growing into managerial roles within newly maturing agile organisations which don't necessarily have a well-defined structure of management on which to base our goals and expectations. We are attempting to provide leadership both to those who have come from more formal cultures and to those who have only ever known work in agile teams, and who have very different expectations as a result. Consequently there is no reference structure of pre-defined roles on which to base our expected career paths. Some, like Rob, have taken on the mantle of managing development. Others, like myself, have the benefit of colleagues at the same level who are focused on programming activities; however, this does mean learning to work in a matrix management setup where the lines of responsibility are not always clear, given the mixed functions of the team units.

It was clear from the conversations at EuroSTAR that for many of us our perspective on testing, and on test management, was changing. Whether this was simply the result of individuals with similar roles, at similar points in their careers, gravitating towards each other, or the sign of something more fundamental, only time will tell. The conversations that I was involved in at the event were only a microcosm of the wider discussions and debates over testers and testing that were going on amongst the conference attendees.

If anything can be taken from the range of management roles and responsibilities that the individuals I know have evolved to adopt, it is that there is no clearly defined model for test management in modern companies. It is really up to the testers in those organisations to champion the testing effort in a way that works for them, and to build a testing infrastructure that fits with the unique cultures of their respective organisations.

Thursday, 20 November 2014

In The Spirit of Windows

As I wrote in my post on The FaceBook effect, as you progress in your career you build up more examples of times when you got it wrong, or when you held opinions that changed over time to the extent that you later come to disagree with your former self. Perhaps the most fundamental example of this for me was my position on programmer/tester collaboration, which I wrote about in this post. Another good example comes from an earlier role, testing a client/server marketing data analysis system, when my position on the need for explicit requirements was very different...

Demanding the Impossible

At the time, the process that I was working under was a staged, waterfall-style process. We didn't follow any specific process model, but it was compared on occasion to RUP. There was a database which contained formal requirements written by the product managers, and a subset of these was chosen for each long-term (6-9 month) release project.

Whilst the focus of most of the development, and all of the testing, was on the main engine, a parallel development had been underway for some time to deliver a customer framework for housing the client components and managing the customer's data rules, objects and models. This had been done in a much more informal, interactive manner than the core features, with the result that requirements had been added to the database thick and fast as the product managers thought of them. These were much less rigidly specified up front than was usual in the company, with behaviour established instead through ongoing conversations between the programmers and the product manager.

Enter the testers

After a long period of development, it was decided to perform a first release of the new framework. At this point the test team were brought into the project. What we found was a significant accumulation of requirements in the database: some had been delivered as written, many had changed significantly in the implementation, and for many it was unclear whether they had been delivered at all. To clear up the situation the product owners, testers and architects sat down for a couple of lengthy meetings to work through this backlog and establish the current position.

One requirement in particular caused the most discussion, and the most consternation for the testers. I don't have the exact wording, but the requirement stated something like:

“The security behaviour within the system will be consistent with the security of the Windows operating system”

We pored over this one, questioning it repeatedly in the meetings. What did it mean specifically, and how could we test it? What were the exact characteristics of the security model that we were looking to replicate? How could we map the behaviour of an operating system to a client/server object management framework? In somewhat exasperated response, the Development Manager tried to sum up the requirement in his own words:

“It should be done in the spirit of Windows”

This caused even more consternation among the testers. Behind closed doors we ranted about the state of the requirements and the difficulties we faced. How could we test it when it was so open to interpretation? How could you write explicit test cases against a requirement 'in the spirit of' something? How did you know whether a specific behaviour was right or wrong? We complained about, and somewhat ridiculed, the expectation that we were to test 'in the spirit of' something.

A rose by any other name

Looking back on that time, I can see that most of the requirements were closer to user stories than to the formal requirements that we were accustomed to writing our test cases against. They were not intended to explicitly define the exact behaviour, but to act more as placeholders for conversations between the product owners and the relevant development team members on how the value was to be delivered. These were small summaries of target value which were open to interpretation in their implementation and required discussion to establish the detailed acceptance criteria. The problem was that, within an agile context, the expectation would be that this conversation is held between the '3 Amigos' of Product Owner, Programmer and Tester. Unfortunately, in the process that we were working under, the 'Third Amigo', the tester, had been missed from the conversation, with the result that the testers only had the requirement as written to refer to, or so we felt.

The Spirit of an Oracle

Let us examine the requirement that so vexed me and the other testers at the time - that the user security model should work in the 'spirit of Windows'. As a formal requirement, yes, it was ambiguous; however, as a statement of user value there was a lot of information here for testers to make use of. The requirement instantly provides a clear test oracle, that of the Windows file system security model, from which we could establish some user expectations:

  • Windows security on files is hierarchically inherited, such that objects can inherit their permissions from higher-level containers
  • Inherited permissions can be overridden, such that an object within a container has distinct security settings from its parent container
  • Permissions are applied based on the credentials of the user that is logged into Windows
  • Permissions on objects may be applied either to individuals or at a group level
  • The ability to perform actions is based on roles assigned to the user
  • Allow permissions are additive, so a user will have the highest level of permissions applied to any group that they are a member of
  • Deny permissions on any object override Allow permissions

And so on.
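
To make that oracle concrete, here is a minimal sketch in Python of the kind of permission resolution those expectations describe. It is purely illustrative: the names (Node, effective and so on) are my own inventions rather than anything from Windows or from the product we were testing, and real NTFS semantics have subtleties that this ignores.

    # A toy model of Windows-style security expectations (all names hypothetical).
    class Node:
        """An object in a hierarchy of containers, like a file in nested folders."""
        def __init__(self, name, parent=None, inherits=True):
            self.name = name
            self.parent = parent
            self.inherits = inherits  # set False to give an object distinct settings
            self.allow = {}           # group name -> permissions explicitly allowed
            self.deny = {}            # group name -> permissions explicitly denied

        def effective(self, groups):
            """Resolve a user's effective permissions from their group memberships."""
            allows, denies = set(), set()
            node = self
            while node is not None:
                for group in groups:
                    allows |= node.allow.get(group, set())  # Allow entries are additive
                    denies |= node.deny.get(group, set())
                node = node.parent if node.inherits else None  # inherit up the hierarchy
            return allows - denies  # any Deny overrides an Allow

    # A container whose permissions are inherited by an object within it,
    # plus an explicit Deny applied to the object itself:
    folder = Node("campaign data")
    folder.allow["analysts"] = {"read", "write"}
    report = Node("q4 report", parent=folder)
    report.deny["contractors"] = {"write"}

    print(report.effective({"analysts"}))                 # read and write, via inheritance
    print(report.effective({"analysts", "contractors"}))  # read only - Deny beats Allow

Nothing in this sketch needed to be specified up front in a requirements document; the behaviour was already defined, by example, in an application sitting on every user's desk.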

The important concept that we failed to recognise here is that it didn't really matter what the exact behaviour was. There was no need for explicitly defined behaviour up front in this scenario. The value was in the software providing a user experience as familiar and consistent with Windows as possible, whilst also providing an appropriate security structure for the specific product elements.

It is true that the ambiguity of the requirement made it more difficult to design explicit test cases and expected results in advance of accessing the software. What we were able to do was examine another application which possessed the characteristics of our target system to establish expectations and compare actual behaviour against them. As Kaner points out in this authoritative post on the subject, and Bolton explains eloquently through this fictitious conversation, oracles are heuristic in nature in that they help guide us in making decisions. Through the presence of such a clear testing oracle we were able to explore the system and question the behaviour of any area through comparison with an external system. If there were behaviours that were inconsistent with our expectations, based on the Windows system, then we were able to discuss the behaviour directly with the product owners, who sat in close proximity to the testers. This required judgement, and sometimes compromise, given that the objects managed in our system and the relationships between them were inherently different from Windows files and directories. As with all heuristic devices, our oracle was fallible and required the judgement of the testers to decide whether inconsistencies corresponded to issues or acceptable deviations. In many ways it was a very forward-thinking setup; it just wasn't what we were accustomed to, and the late introduction of the testers into the process excluded us from important discussions over which of the above behaviours we needed to deliver, which in turn limited our judgement in relation to the oracle system. This, combined with our unfamiliarity with this way of working, resulted in our resistance to the approach taken.
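
To give a flavour of what that comparison amounted to in practice, a heuristic oracle check reports discrepancies for a tester to judge rather than issuing pass/fail verdicts. Again, this is only a sketch with invented names, not the process we actually followed:

    # Surface differences between the oracle's prediction and observed behaviour.
    def compare_with_oracle(expected, observed):
        """Return discrepancies as questions to raise, not automatic failures."""
        return {
            "oracle grants but product refuses": expected - observed,
            "product grants but oracle refuses": observed - expected,
        }

    # e.g. the Windows-based oracle predicts read and write for an analyst,
    # but the product only grants read:
    for finding, perms in compare_with_oracle({"read", "write"}, {"read"}).items():
        if perms:
            print("Discuss with the product owner -", finding, ":", sorted(perms))

Whether such a discrepancy was a defect or an acceptable deviation from the oracle was exactly the kind of judgement call described above.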

The spirit of testing

I find great personal value in examining situations from the past to see how my opinions have changed over time. Not least, this provides some perspective on my current thinking around problems and can act as a reminder that my position on an issue may not stay constant as my experience grows. In the years since my 'spirit of Windows' incident I've grown more pragmatic about the need in testing for rigidly specified requirements. In particular, experience has demonstrated that specifications are themselves fallible, yet are treated as if they should be unambiguous and exhaustive instead of being treated as simply another type of oracle, to be used with judgement in making decisions. This can have damaging consequences - I have seen incorrect behaviour implemented on the basis of a specification when there was an excellent test oracle available, one that was never referenced during testing because there was no perceived need to do so.

The availability of a clear testing oracle provides an excellent basis for exploring a system under active development, one where documentation is minimised in favour of asking questions and discussing design decisions throughout the development process, such as when working with agile user stories. What the example above clearly highlights is the importance of early tester engagement in that process if the testers are to understand the value in a feature, the decisions that go into its design and, crucially, the characteristics of the test oracle that we are looking to replicate.

References

Image: https://www.flickr.com/photos/firemind/30189049
