Monday, 8 September 2014

The Facebook Effect

I recently celebrated the birth of my 4th child. Whilst my wife was recovering from the birth I enjoyed the opportunity to take my older children to school and to speak to friends and other parents wanting to pass on their congratulations and wishes. On one such day I was chatting with a friend of my wife's and the conversation strayed into an area she felt particularly passionate about, one which also struck a chord with me in both a personal and a professional capacity. The friend was telling me that she was so excited to hear news of the birth that she had logged onto Facebook for the first time in months to check my wife's status. She explained that she had previously stopped using Facebook as she felt that it compelled her to present a false image of her life. Whilst I occasionally use Facebook, I was inclined to agree with her that there is a lot of pressure on social media to present a very positive image of yourself and your life. Indeed this pressure is such that many people seem more focused on staging personal occasions to take photographs and post details to social media than on actually enjoying the occasion themselves.

But What's this got to do with Testing?

Whatever your position on social media, you're probably wondering why I'm recounting this conversation in a testing post. The reason the conversation struck a chord with me on a professional level is that I think there is a similar pressure in professional communities and social media groups, with those associated with Software Testing being particularly prone for reasons I'll go into.

With the advent of professional social media in the last decade, professionals now enjoy far greater interaction with others in their field than was ever possible before. In general I think that this is a hugely positive development. It allows us to share opinions and discuss ideas, and accelerates the distribution of new methods and techniques through the industry. Social media channels such as Twitter, LinkedIn and discussion forums also provide less experienced members of the community with far greater access to experienced individuals with a wealth of knowledge and expertise than was possible when I started testing. More importantly, social media allows us to be vocal, to criticise activities which could damage our profession, and to find other individuals who share the same concerns. The recent rallying behind James Christie's anti ISO 29119 talk would simply not have been possible without the social media channels that allowed like-minded individuals to find a collective voice in the resulting online petition (I don't suggest that you go immediately and sign the petition; I suggest that you read all of the information that you can and make up your own mind. I'd be surprised if you decided not to sign the petition). Social media has the power to give a collective voice where many individual voices in isolation would not be heard.

On the flipside of these positive aspects, social media carries an associated pressure - let's call it the 'Facebook effect' - where those contributing within a professional community feel the need to present a very positive image of what they are doing. It is easy to view one's own work in a negative light by comparison and engender feelings of professional paranoia as a consequence. This is not something that is specific to testing, and the phenomenon has been highlighted by those writing on other industries, such as this post highlighting problems in the marketing community:

For the many who make their living in or around the social media industry, the pressure to be or at least appear to be an expert, the best, or just a player is reaching a boiling point.

The message is clear. Advancing in the modern professional world is as much about climbing the social media ladder as the corporate one, and in order to do that we need to present the right image.

Living up to the image

Based on the article above, and the references at the end of this post, it is clear that the negative side of social media affects other industries, so what is it about testing that makes us especially conscious of how we present ourselves?

Before putting this post together I approached a number of testers to ask whether they had ever experienced or observed the symptoms of the Facebook effect in their testing careers. I received some very interesting and heartfelt responses.

Most accepted the need to present a sanitised, positive image on social media:

You don't want to wash your dirty linen in public

And the need for constant awareness that everything posted publicly was open to scrutiny:

I know that anything I say is open to public 'ridicule' or open for challenge

Some went further and admitted that they had experienced occasions where they felt paranoid or intimidated as a result of interactions with testing-based social media. One tester I spoke to highlighted the problem of opening dialogues requesting help or input from others and being made to feel inferior:

when you make an opening to someone in our industry asking their thoughts or opinions and they seem to automatically assume that this means you are a lesser being who has not figured it all out already

I don't think that either I or the person who wrote that believes assumptions of this type are always the case, but I've certainly experienced the same thing. A few years ago, as an experienced tester finding my feet in the agile world, I found myself frustrated by the 'you are doing it wrong' responses to questions I posted in agile testing lists. I don't want a sanctimonious lecture when asking questions about my problems, I want open help that acknowledges that if I'm asking for assistance in one area it doesn't mean I don't know what I am doing in others.

Why is testing so affected?

I think there are a number of factors that contribute to testing being particularly prone to the 'Facebook effect'.

  • What are we admitting?

    I obviously can't comment on how it is for other professions, but I think for testing the pressure of presenting a positive image is particularly prevalent due to the implications of any negative statements for the perception of our work or organisations. Any admission of fault in our testing is implicitly an admission of the risk of faults in our products, or worse, of risks to our customers' data. Whilst we may be prepared to 'blame the tester' in the light of problems encountered, we don't want to do this proactively by openly admitting mistakes. Some testers also have the additional pressure of competitor companies who can pick up on revelations of mistakes to their advantage. As a tester working for a product company with a close competitor told me:

We are watching them on social media as I am assuming they are watching us. So I do need to be guarded to protect the company (in which) I am employed.

  • What are we selling?

    Some of the most active participants in any professional community will be consultancy companies or individuals, and testing is no different. These folks have both a professional obligation not to criticise their clients and a marketing compulsion to present the work they were involved in as successful, so as to demonstrate value for money to other prospective customers. The result is a tendency towards very positive case studies from our most vocal community members on any engagements, and an avoidance of presenting the more negative elements in order to protect business interests.

  • Where are we from?

    Testing is a new profession. I know that some folks have been doing it for a long time, but it just doesn't have the heritage of law, medicine or accountancy that provides stability and a structure of consistency across the industry. Whilst this does result in a dynamic and exciting industry in which to work, it also means that IT workers operate in a volatile environment where new methodologies compete for supremacy. Attempts to standardise the industry may appear an attractive response to this, offering a safety net of conformity in a turbulent sea of innovation; however, the so far flawed attempts to do so are rightly some of the greatest points of contention and the source of the most heated debate in the world of testing today. The result is an industry where new ideas are frequent and it can be hard to tell the game-changing innovations from the snake oil. Is it really possible to exhaustively test a system based on a model using MBT? Are ATDD tools a revolutionary link between testing and the business, or really a clumsy pseudocode resulting in inflexible automation? In such an industry it is naturally hard to know whether you have taken the right approaches, and easy to feel intimidated by others' proclamations of success.

  • Where are we going?

    For some, an online presence is part and parcel of looking for advancement opportunities. LinkedIn is particularly geared towards this end. Therefore presenting only the most successful elements of your work is a prudent approach if you want to land the next big job. Similarly for companies who are recruiting: if you want to attract talented individuals then presenting the image of a successful and competent testing operation is important.

Facing Your Fears

One of the problems that we face individually when interacting with professional social media is that the same 'rose-tinted' filtering that is applied to the information we read about other testers and their organisations is not applied to our own working lives. We see our own jobs 'warts and all' which, during the leaner times, can lead to professional paranoia. This is certainly something that I have experienced in the past when things have not been going as well as I would like in my own work. I found that this was more of a problem earlier in my career, and it has lessened as I have gained experience which provides perspective on my work in relation to others. The Facebook effect does still rear its head during periods of sustained pressure when I have little chance to work on testing process improvements, as I have experienced this year.

The manner in which we deal with these emotions will inevitably depend on the individual. I think that, whilst it is easy to fall into a pattern of negativity, there are responses that show a positive attitude and that can help to avoid the negative feelings that can otherwise haunt us.

  • Look to the experienced

    Ironically it seems to be the most experienced members of a profession that are most willing to admit mistakes. This could be because many of those mistakes were made earlier in their careers and can be freely discussed now. It could be that having a catalogue of successful projects under your belt furnishes folks with the confidence to be more open about their less successful ones. It could also be that the more experienced folks appreciate the value of discussing mistakes to help a profession to grow and build confidence in its younger members. These are all things that I can relate to, and as my experience grows I find an increasing number of previously held opinions and former decisions that I can now refer to personally, and share with others, as examples of my mistakes.

  • Get face to face

    I wrote a while ago about an exchange that I did with another company to discuss our relative testing approaches. I have since repeated this exercise with other companies and have another two exchange visits planned for later this year. The exchanges are done on the basis of mutual respect and confidentiality, and therefore provide an excellent opportunity to be open about the issues that we face. There is an element of security about being face to face, particularly within the safe environment of the workplace, which allows for open conversations even with visitors that you have known for only a short time.

  • Consultancy

    I don't rely extensively on external consultancy, however I have found it useful to engage the services of some excellent individuals to help with particular elements of my testing and training. In addition to the very useful 'scheduled' elements of the engagement, almost as useful is having an expert with a range of experiences available to talk to in a private environment. As I mention above, consultants should maintain appropriate confidentiality, and they will also have a wealth of experience of different organisations to call on when discussing your own situation. A consultant who has had the benefit of a 'behind closed doors' perspective on other companies can provide a far more balanced view of people's relative strengths, and can put your own problems into a more realistic context as a result. There are few more encouraging occasions for a test leader in an organisation than being told that your work compares favourably with that of other organisations (even if they aren't at liberty to tell you who these are).

  • Closed groups

    I was fortunate to be involved in a project recently that involved being a member of a closed email list. I found this to be a liberating experience. The group discussed many issues that affect testers and test managers openly and without fear of our words being misinterpreted by our organisations or others in the community. There were disagreements on a number of the subjects, and I personally found it much easier to discuss contentious issues with reference to my own position in the closed group environment. The problem with discussing internal issues in an open forum is obviously the risk that your candid talk is seen by the wrong eyes; a closed group avoids this problem and allows for open and candid discussion with sympathetic peers. In fact I obtained some really interesting input from exactly that group prior to writing this post.

  • Trusted connections

    I am lucky to have some fantastic testers on my private email address list who I can turn to in times of uncertainty or simply to bounce ideas off before putting them into the public domain. For example I recently had some questions around Web testing. This is not something that I've looked at for some time, having been focussed on big data systems. I received some invaluable guidance from a couple of people in my contacts list without any stigma around my 'novice' questions, as the individuals I spoke to know me and respect the testing knowledge that I have in other areas. Their advice allowed me to provide an educated view back to my business and make better decisions on our approach as a result. As with the closed group, I approached a number of personal contacts for their experiences and opinions to contribute to writing this post.

Don't worry be happy

When my brother left his last job in water treatment engineering, his colleagues gave him one piece of parting advice.

Lose the nagging self doubt - you are great at your job

So it could well be that I suffer from some familial predilection for self-criticism. You may not suffer from the same. Whether you do or not, I think that when using social media as a community we should maintain awareness of how others will feel when reading our input. We should try to remember that just because others ask for help doesn't mean that they don't know what they are doing. We should consider admitting failures as well as successes, so others can learn from our mistakes and gain confidence and solace in making their own.

If interacting with social media personally leaves you with a taste of professional paranoia, I recommend reading this excellent short post from Seth Godin, and reminding yourself that the simple fact that you are looking outside your work to the wider professional community to improve your testing probably means you're doing a fine job.

Other links

Image: sourced from Twitter, @michaelmurphy https://twitter.com/michaelmurphy/status/492648065619492864

Monday, 14 July 2014

A Map for Testability

Here Be Dragons

Chris Simms (@kinofrost) asked me on Twitter last week whether I'd ever written anything about raising awareness of testability using a mind map. Apparently Chris had a vague recollection of me mentioning this. It is certainly something that I have done, however I couldn't remember where I had discussed it. I've not posted about it before, which is surprising as it is a good example of using a technique in context to address a testing challenge. As I mentioned in my Conference List, I have a testability target for this year, so it feels like an opportune moment to write about the idea of raising awareness of testability and an approach to this that I found effective.

Promoting Testability

As I wrote in my post Putting your Testability Socks On, there are a wealth of benefits to building testability into your software. Given this, it is somewhat surprising that many folks working in software don't consider the idea of testability. In environments where this is the case, getting testability changes incorporated into the product is a frustrating task, as these are inevitably perceived as lower priority than more marketable features. As Michael Bolton stated in his recent post, testers should be able to ask for testability in the products they are testing. The challenge comes in promoting the need for testability, particularly in products where it has not been considered during early development. This is a responsibility which will, in all likelihood, fall on the tester.

A great way that I found, almost by accident, to introduce the idea of testability in my company was to run a group session for the whole department on the subject. I say by accident as I'd initially prepared the talk for a UKTMF quarterly meeting, and took the opportunity to run a session on the subject internally at a company off-site meeting by way of a rehearsal for that talk. The internal presentation was well received. It prompted some excellent discussions and really helped to introduce awareness of the concept of software testability across the development team.

The Way Through the Woods

Even with a good understanding of testability in the organisation it is not always plain sailing. As I mentioned in my previous post, developments that proceed without the involvement of testers are most at risk of lacking the core qualities of testability. It is hard to know how to tackle situations, such as the one I was facing, where a lack of testability qualities is actually presenting risks to the software. The job title says 'software tester', so as long as we have software we can test, right?

On that occasion I took a somewhat unconventional approach to raise my concerns with the management team and present the problems faced in attempting to test the software. I created a mind map. Anyone who has read To Mind Map or not to Mind Map will know that I don't tend to use mind maps to present information to others. In this case I generated the map for personal use to break down a complex problem, and the result turned out to be an appropriate format for demonstrating to others the areas of the system that were at risk due to testability problems.

The top level structure of the map was oriented around the various interfaces or modes of operation of the software features. This orientation was a critical element in the map's effectiveness as it naturally focussed the map around the different types of testability problem that we were experiencing. The top level groupings included command line tools, background service processes, installation/static configuration and dynamic management operations such as adding or removing servers.

  • The installation/static configuration areas suffered from controllability problems, as they were difficult to automate and harness
  • The asynchronous processes suffered from a lack of controllability and visibility around knowing which operations were running at any given time
  • The dynamic management operations lacked simplicity and stability due to inconsistent workflows depending on the configuration

One of the key benefits of mind maps, as I presented in my previous post on the subject, is to allow you to break down complexity. After creating the map I personally had a much clearer understanding of the specific issues that affected our ability to test. Armed with this knowledge I was in a much better position to explain my concerns to the product owners, so the original purpose of the map had been served.

Presenting the Right Image

As I said in my previous post on mind maps, I don't tend to use them to present information to others, but if they are to be used for this purpose then they need to be developed with this in mind. In this case I felt that the map provided a useful means to assist in developing a common understanding between the interested parties, and so I tailored my personal map into a format suitable for sharing. I used two distinct sets of the standard Xmind icons: one to represent the current state of the feature groups in terms of existing test knowledge and harnessing, and the second representing the testability status of that area.

Mind Map Key

The iconography in the map provided a really clear representation of the problem areas.

Mind Map

Driving the conversation around the map helped to prompt some difficult decisions about where to prioritise both testing and coding efforts. I won't claim that all of the testability problems were resolved as a result. What I did achieve was to provide clear information on the status of the product and the limitations imposed on the information we could obtain from our testing efforts.

Highlighting the testability limitations of a system in such a way opens up the possibility of getting work scheduled to address these shortfalls. It is difficult to prioritise testability work without an understanding amongst the decision makers of the impact of these limitations on the testing and development in general.

In an agile context such as mine, legacy testability issues can be added to the backlog as user stories. These may not get to the top of the priority list, but until they do there will at least be an appreciation that the testing of a product or feature will be limited in comparison to other areas. What's more, it is far more effective to reference explicit backlog items, rather than looser desirable characteristics, when trying to get testability work prioritised.
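
As a hypothetical illustration of how such a backlog item might read (the operation named is invented, not one of my actual stories):

As a tester, I want to be able to trigger each background maintenance operation explicitly, so that I can verify its behaviour deterministically rather than waiting for the background schedule.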

Flexibility

Hopefully this post has prompted some ideas on how to raise awareness of testability, both proactively and in light of problems that inhibit your testing. Beyond this, I think the key lesson here is about coming up with the most appropriate way to present information to the business. In this case, for me, a mind map worked well. In all likelihood a system diagram would have been just as effective. Some of the customers that I work with in Japan use infographic-style diagrams to great effect to represent the location of problems within a system in a format which works across language boundaries - something similar could also have been very effective here.

Testing is all about presenting information and raising awareness. The scenarios that we face and the nature of the information that we need to convey will change, and it pays to have a range of options at your disposal to present information in a manner that you feel will best get it across. There's absolutely no reason why we should restrict these skills to representing issues that affect the business or end user. We should equally be using our techniques to represent issues that affect us as testers, and testability is one area where there should be no need to suffer in silence.

References

Both my previous post Putting your Testability Socks On and Michael Bolton's recent post on asking for testability contain good starting references for further research on testability.

Monday, 23 June 2014

The Conference List

Conference List Notebook

I'm really pleased to be presenting a talk at EuroSTAR again this year. Having spoken before, I know this is a great opportunity and with Paul Gerrard as conference chair I'm sure it will be a fantastic event in Dublin.

There are many benefits to speaking at a conference, the most obvious being the opportunity to attend a high-profile testing event without having to pay for a ticket. There is also a lot to gain from discussing your work with your peers, as I discussed in my post Sparing the Time.

There are some less obvious benefits too. These are useful to be aware of, particularly for permanent employees such as myself who are not looking to achieve any marketing value for a product or service from attending. One particularly subtle positive for me results from the shift in perception that arises at the thought of presenting my work to others, and from my response of looking at my work more critically.

Getting your house in order

One of the hidden benefits for me in speaking at a conference comes from the extra effort that I put in to completing my in-house background projects leading up to a speaking engagement. In order to attend and present to other testers I need to be coming from a position of confidence in the work that I'm doing. Whilst it should be the case that I have confidence in the testing that we do at all times, it is also the case that for inherently self-critical individuals like me, things are rarely exactly as I want them. I always have projects in the pipeline that are aimed at improving the way that we work and filling the gaps that I see in our testing approach. Some of these may be background improvements to our processes and tools, some may be areas of testing that I think need attention. Whatever the situation, a speaking deadline provides an excellent incentive to get my house in order and progress those areas that I feel need improvement before I can discuss them with others.

What's on my Conference List?

Here are a few of the things that are on my list to try to do before EuroSTAR this November:

  • Team adoption of stochastic SQL generation

    Last year I set myself the task of creating a tool capable of 'randomly' generating SQL queries based on a data and syntax model designed by the tester. In order to do this I spent some time teaching myself Ruby, as I find that including the learning of a new skill helps to maintain enthusiasm for any personal project. I'll save the details for another post (though there is a rough sketch of the idea at the end of this list), but I'm just at the stage now where I'd like to get more people involved and enthused about this, with the aim of making it part of our standard testing activities. I'm kicking this off this week with an introductory session with the query team.

  • Customised Ganglia monitoring on all test machines

    Ganglia (ganglia.sourceforge.net) cluster monitoring has become a core element in our soak and scale testing activities. It supports both generic monitoring of operating system resources and custom monitoring of metrics relevant to our software operation. In our case this includes, amongst other things, the memory of our processes, the disk space utilised in key areas, and the numbers of tasks in our processing and pending work queues (a sketch of feeding such a metric into Ganglia follows this list). Of course we need to be careful of the Observer Effect here: the collection of metrics on the behaviour and performance of processes and resources inevitably impacts that which it is monitoring, and from discussions with our customers I know that the monitoring can start to impact application performance as you increase metrics and machines. Ganglia has proved very useful in pinpointing resource problems, and I intend to get it installed and running on all of our other test servers in the next few weeks.

  • Improved testability in background services.

    As I wrote about in my post Putting your Testability Socks On, we did introduce some testability issues into parts of the system a while ago. Whilst many of these have since been tackled, there are some background processes which still exhibit issues with controllability and observability that I hope to resolve. Splitting a maintenance process down into a series of individually callable operations will allow us to explicitly trigger each of the operations managed by the process (see the final sketch after this list). This should provide more control for tests involving those operations, and also prevent the introduction of non-deterministic behaviour into other tests which are currently volatile. Needless to say we will also have the full process running for many other tests.

  • Replace our bug tracking system

    Our bug tracking system was adequate when we were a single-team company with few customers. Now that we have a larger engineering department with multiple teams, many customers and, most importantly, multiple supported release branches, the system is struggling and I want to replace it with something more suitable. (Of course we could move towards not using a tracking system at all - Gojko Adzic's old post Bug Statistics are a Waste of Time is a good starting point for anyone wanting to go down that road.)
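
To give a flavour of the stochastic SQL generation idea, here is a minimal Ruby sketch. It is not my actual tool - the tables, columns, predicate templates and weightings are all invented for illustration - but it shows the basic approach of a tester-defined model from which queries are generated at random.

    # A minimal sketch of model-based random SQL generation. The model
    # describes tables, their columns and some candidate predicate
    # templates - all invented for illustration.
    require 'securerandom'

    MODEL = {
      'orders'    => { columns: %w[id customer_id total placed_at],
                       predicates: ['total > %d', 'customer_id = %d'] },
      'customers' => { columns: %w[id name region],
                       predicates: ["region = '%s'"] }
    }

    def random_query(model)
      # Pick a random table, then a random non-empty subset of its columns.
      table, spec = model.to_a.sample
      cols = spec[:columns].sample(rand(1..spec[:columns].size)).join(', ')
      sql  = "SELECT #{cols} FROM #{table}"
      if rand < 0.7 # most, but not all, queries get a WHERE clause
        pred = spec[:predicates].sample
        arg  = pred.include?('%d') ? rand(1..1000) : SecureRandom.hex(3)
        sql += " WHERE #{format(pred, arg)}"
      end
      sql
    end

    5.times { puts random_query(MODEL) }

The real value comes from running large volumes of such generated queries against the system and checking the results against an oracle, which is where the tester's design of the model matters far more than this toy example suggests.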
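
For the Ganglia item, this is roughly how a custom application metric can be injected using gmetric, Ganglia's command-line tool for publishing ad hoc metrics. The metric name and the file from which the queue depth is read are hypothetical; in reality the value would come from wherever your application exposes it.

    # Sketch: push a custom queue-depth metric into Ganglia via gmetric.
    # The source of the value (a status file) is invented for illustration.
    def report_metric(name, value, type: 'uint32', units: 'count')
      system('gmetric',
             '--name',  name,
             '--value', value.to_s,
             '--type',  type,
             '--units', units)
    end

    pending = File.read('/var/run/myapp/pending_tasks').to_i # hypothetical
    report_metric('pending_work_queue', pending)

Run periodically from cron or a small daemon on each test server, something like this is enough to get application-level metrics plotted alongside the standard operating system resource graphs.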
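
Finally, for the background services item, the controllability change amounts to something like the following pattern: instead of one opaque scheduled loop, each maintenance operation is individually callable, so a test can trigger exactly the step it cares about. The operation names here are hypothetical, not our actual ones.

    # Sketch: a maintenance service whose operations can be triggered
    # individually (for deterministic tests) or run together (as in
    # normal background operation). Operation names are invented.
    class MaintenanceService
      OPERATIONS = [:merge_segments, :expire_data, :compact_indexes]

      def run_operation(name)
        raise ArgumentError, "unknown operation #{name}" unless OPERATIONS.include?(name)
        send(name)
      end

      def run_all
        OPERATIONS.each { |op| send(op) }
      end

      private

      def merge_segments
        puts 'merging segments'
      end

      def expire_data
        puts 'expiring aged data'
      end

      def compact_indexes
        puts 'compacting indexes'
      end
    end

    # A test can now trigger a single step deterministically:
    MaintenanceService.new.run_operation(:expire_data)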

What's your Conference List?

In my experience it is often the background projects that individuals work on that provide the greatest advances in the way we work. The user stories or developments that we are working on rarely improve our work processes directly. It is the ideas that arise laterally out of that work that are often the best, yet these need cultivating by motivated individuals and can languish uncompleted without some targets to deliver.

Of course, it does not have to be conferences that we establish as arbitrary targets to deliver some longer term goals. Presenting on testing at internal company meetings, meeting other testers at testing meetups, recruiting new team members or even personal family milestones (my wife and I have baby number 4 due in July) can act as a target for completing long term tasks. Even in an open culture it is often hard to prioritise personal or background goals in the face of more immediate business needs. Establishing a personal deadline helps give me the incentive to deliver, particularly when that target involves presenting on your testing in front of a room full of testing experts. If you are struggling to deliver your background projects, try looking at your calendar and putting some targets in place around events that you have coming up - it might just provide the push you need to turn that great idea into reality, and may even provide some material for your next talk.
