Thursday, 20 November 2014

In The Spirit of Windows

As I wrote in my post on The FaceBook Effect, as you progress in your career you build up more examples of times when you got it wrong, or held opinions that changed so much over time that you later come to disagree with your former self. Perhaps the most fundamental example of this for me was my position on programmer/tester collaboration, which I wrote about in this post. Another good example comes from an earlier role, testing a client/server marketing data analysis system, when my position on the need for explicit requirements was very different...

Demanding the Impossible

At the time I was working under a staged, waterfall-style process. We didn't follow any specific process model, though it was compared on occasion to RUP. There was a database of formal requirements written by the product managers, and a subset of these was chosen for each long-term (6-9 month) release project.

Whilst the focus of most of the development, and all of the testing, was on the main engine, a parallel development had been underway for some time to deliver a customer framework for housing the client components and managing the customers' data rules, objects and models. This had been done in a much more informal, interactive manner than the core features, with the result that requirements had been added to the database thick and fast as the product managers thought of them. These were much less rigidly specified up front than was usual in the company, with behaviour established instead through ongoing conversations between the programmers and the product manager.

Enter the testers

After a long period of development, it was decided to perform a first release of the new framework. At this point the test team were brought into the project. What we found was a significant accumulation of requirements in the database: some had been delivered as written, many had changed significantly in the implementation, and for many it was unclear whether they had been delivered at all. To clear up the situation the product owners, testers and architects sat down for a couple of lengthy meetings to work through this backlog and establish the current position.

One requirement in particular caused the most discussion, and the most consternation for the testers. I don't have the exact wording but the requirement stated something like

“The security behaviour within the system will be consistent with the security of the windows operating system”

We pored over this one, questioning it repeatedly in the meetings. What did it mean specifically, and how could we test it? What were the exact characteristics of the security model that we were looking to replicate? How could we map the behaviour of an operating system onto a client/server object management framework? In somewhat exasperated response the Development Manager tried to sum up the requirement in his own words:

“It should be done in the spirit of windows”

This caused even more consternation among the testers. Behind closed doors we ranted about the state of the requirements and the difficulties we faced. How could we test it when it was so open to interpretation? How could you write explicit test cases against a requirement "in the spirit of" something? How did you know whether a specific behaviour was right or wrong? We complained about, and somewhat ridiculed, the expectation that we were to test "in the spirit of" something.

A rose by any other name

Looking back on that time, I can see that most of the requirements were closer to user stories than to the formal requirements we were accustomed to writing our test cases against. They were not intended to explicitly define the exact behaviour, but to act more as placeholders for conversations between the product owners and the relevant development team members on how the value was to be delivered. These were small summaries of target value, open to interpretation in their implementation and requiring discussion to establish the detailed acceptance criteria. The problem was that, in an agile context, the expectation would be that this conversation be held between the '3 Amigos' of Product Owner, Programmer and Tester. Unfortunately, in the process we were working under, the 'Third Amigo', the tester, had been missed from the conversation, with the result that the testers only had the requirement as written to refer to, or so we felt.

The Spirit of an Oracle

Let us examine the requirement that so vexed me and the other testers at the time: that the user security model should work in the 'spirit of Windows'. As a formal requirement it was certainly ambiguous, but as a statement of user value there was a lot of information here for testers to make use of. The requirement instantly provides a clear test oracle, the Windows file system security model, from which we could establish some user expectations (sketched in code after the list below).

  • Windows security on files is hierarchical, such that objects inherit their permissions from higher-level containers
  • Inherited permissions can be overridden, such that an object within a container can have security settings distinct from those of its parent container
  • Permissions are applied based on the credentials of the user logged into Windows
  • Permissions on objects may be applied either to individuals or at a group level
  • The ability to perform actions is based on roles assigned to the user
  • Allow permissions are additive, so a user will have the highest level of permission granted to any group that they are a member of
  • Deny permissions on an object override Allow permissions

And so on.
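None of this was spelled out for our product, but the expectations above are concrete enough to model. The sketch below is a minimal illustration, assuming a simplified object/container model; the class and names are invented for the purposes of this post, not taken from our product or from the real Windows ACL implementation. It captures the resolution rules we inferred from the oracle: inheritance from containers, overrides, additive allows across a user's groups, and deny taking precedence over allow.

    # Minimal sketch of the Windows-style permission rules we used as an oracle.
    # Illustrative only - the names and structure are invented, not taken from
    # the product under test or from the real Windows ACL implementation.

    class SecurableObject:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent      # container the object inherits from
            self.allow = {}           # principal -> set of permissions allowed
            self.deny = {}            # principal -> set of permissions denied
            self.inherits = True      # switch off to override the parent entirely

        def effective_permissions(self, user, groups):
            """Resolve permissions for a user who belongs to the given groups."""
            principals = {user} | set(groups)
            allowed, denied = set(), set()
            node = self
            while node is not None:
                for p in principals:
                    allowed |= node.allow.get(p, set())   # allows are additive
                    denied |= node.deny.get(p, set())     # any deny is collected
                node = node.parent if node.inherits else None
            return allowed - denied                       # deny overrides allow

    # Example: a report inherits from its folder, but one group is denied 'write'.
    folder = SecurableObject("Campaign Reports")
    folder.allow["analysts"] = {"read", "write"}
    report = SecurableObject("Q3 Report", parent=folder)
    report.deny["contractors"] = {"write"}

    print(report.effective_permissions("alice", ["analysts"]))               # read and write, inherited
    print(report.effective_permissions("bob", ["analysts", "contractors"]))  # read only, deny wins

Even a toy model like this generated concrete questions to take back to the product owners: should a deny inherited from a container override an allow granted directly on an object? What happens to the parent's settings when inheritance is switched off?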

The important concept that we failed to recognise here is that it didn't really matter what the exact behaviour was. There was no need for explicitly defined behaviour up front in this scenario. The value was in the software providing as familiar and consistent a user experience as possible relative to Windows, whilst also providing an appropriate security structure for the specific elements of the product.

It is true that the ambiguity of the requirement made it more difficult to design explicit test cases and expected results before getting access to the software. What we were able to do was examine another application which possessed the characteristics of our target system to establish expectations, and compare actual behaviour against it. As Kaner points out in this authoritative post on the subject, and Bolton explains eloquently through this fictitious conversation, oracles are heuristic in nature in that they help guide us in making decisions. Through the presence of such a clear testing oracle we were able to explore the system and question the behaviour of any one area through comparison with an external system. If there were behaviours that were inconsistent with our expectation, based on the Windows system, then we were able to discuss them directly with the product owners, who sat in close proximity to the testers. This required judgement, and sometimes compromise, given that the objects managed in our system and the relationships between them were inherently different from Windows files and directories. As with all heuristic devices, our oracle was fallible and required the judgement of the testers to decide whether inconsistencies corresponded to issues or acceptable deviations. In many ways it was a very forward-thinking setup; it just wasn't what we were accustomed to, and the late introduction of the testers into the process excluded us from important discussions over which of the above behaviours we needed to deliver, which in turn limited our judgement in relation to the oracle system. This, combined with our unfamiliarity with this way of working, resulted in our resistance to the approach taken.
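To make the idea concrete, the comparison can be thought of along the lines of the sketch below. This is only an illustration of a heuristic oracle, not the harness we actually used; the scenarios and the two query callables are assumptions standing in for however you are able to observe each system.

    # Illustrative sketch only: using an external system as a heuristic oracle.
    # The two query callables stand in for however we observed each system
    # (inspecting the product's permissions, checking a Windows folder by hand);
    # they are placeholders, not a real API.

    def compare_against_oracle(scenarios, query_oracle, query_product):
        """Collect mismatches for human judgement rather than auto-failing them."""
        findings = []
        for scenario in scenarios:
            expected = query_oracle(scenario)    # what the oracle system does
            actual = query_product(scenario)     # what the system under test does
            if expected != actual:
                findings.append({
                    "scenario": scenario,
                    "oracle_says": expected,
                    "product_says": actual,
                    # The oracle is fallible: a mismatch is a prompt for a
                    # conversation with the product owner, not automatically a bug.
                    "verdict": "needs discussion",
                })
        return findings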

The spirit of testing

I find great personal value in examining situations from the past to see how my opinions have changed over time. Not least, this provides some perspective on my current thinking around any problem, and acts as a reminder that my position on an issue may not stay constant as my experience grows. In the years since my 'spirit of Windows' incident I've grown more pragmatic about the need in testing for rigidly specified requirements. In particular, experience has demonstrated that specifications are themselves fallible, yet are often treated as if they were unambiguous and exhaustive instead of as simply another type of oracle, to be used with judgement in making decisions. This can have damaging consequences: I have seen incorrect behaviour implemented on the basis of a specification when there was an excellent test oracle available that was never referenced during testing, as there was no perceived need to do so.

The availability of a clear testing oracle provides an excellent basis for exploring a system that is being actively developed with minimal documentation, where the focus is on asking questions and discussing design decisions throughout development, such as when working with agile user stories. What the example above clearly highlights is the importance of early tester engagement in this process if the testers are to understand the value in the feature, the decisions that go into the design and, crucially, the characteristics of the test oracle that we are looking to replicate.

References

Image: https://www.flickr.com/photos/firemind/30189049

Tuesday, 21 October 2014

The Workaround Denier

Your software is a problem, to someone. It may be uncomfortable to accept, but for somebody who uses your software there will be behaviours in it that inhibit their ability to complete the task they are trying to achieve. Most of the time people don't notice these things, as they fall within the accepted limitations of computer technology (the word processor doesn't type the words as I think of them; the online shop doesn't let me try the goods before ordering). In some cases the limitation falls outside accepted technological inadequacy, with the most likely result being a mildly disgruntled user reluctantly changing their behaviour or expectations. Where the difference is more profound, but not sufficient for them to move to another product, the person involved may attempt to work around the limitation. The result in such situations is that software features can be used in ways they have not been designed or tested for.

As a tester I find myself slipping towards hypocrisy on the subject of workarounds. Whilst I am happy to consider any (legal) workaround at my disposal to achieve my goals with the software of others, when testing my own software my inclination is to reject any use of the system outside the scope of operation for which it has been designed and documented. I think this position of 'workaround denial' will be familiar to many testers. Any use of the product that sits outside our understanding of how it will be used is tantamount to cheating on the part of the user. How are we expected to test for situations that we didn't even know were desirable or practicable uses of the product?

An example last week served as a great contradiction to the validity of such a position, demonstrating just how important some workarounds are to our customers. It also reminded me of some of the interesting, and sometimes amusing, workarounds that I have encountered both in software that I use and in software that I help to produce.

The software I use

I am constantly having to find ways around things when one of the many software programs that I use on a daily basis doesn't quite do what I want. In some cases the results of trying to work around my problems are a revelation. As I wrote about in this post on great flexible free tools, the features that you want might be right under your nose. In other cases the actions taken to achieve my goals are rather more convoluted and probably sit well outside the original intention of the development team when designing the software.

  • The support tracking system that I implemented for our support teams does not allow our own implementation consultants to raise or track tickets on behalf of their clients. It will assume that any response from our company domain is a support agent response and forward it to the address of the owner of the ticket. To avoid problems of leakage we restrict the incoming mails to the support mailbox on the Exchange account, but this does limit the possibility of including our own implementation consultants as 'proxy customers' to help in progressing investigations into customer problems.
  • The bug tracking system that I currently use has poor support for branching. Given the nature of our products and implementations we are maintaining a number of release branches of the software at any one time and sometimes need to apply a fix back to a previous version on which the customer is experiencing a problem. With no inherent support for branching in our tracker I've tried a number of workarounds, including at times directly updating the backend database, all with limited success.
  • As I wrote about in this post, I was intending to replace the above tracking system with an alternative this year. Interestingly, progress on this project has stalled on exactly the same limitation: poor branching support in the tool we had initially targeted to move to, Jira. The advice within the Jira community suggested that people were resorting to some less than optimal workarounds to tackle this omission.
  • Outlook as an email client has serious workflow limitations for the way that I want to manage a large volume of email threads. I've found the need to write a series of custom macros to support the volume and nature of emails that I need to process on a daily basis. These include a popup for adding custom follow-up tags, so I can see not only that a follow-up is required but also a brief note of what I need to do; a macro to pull this flag onto more recent messages in the conversation so that it displays on the collapsed conversation; and the ability to move one or more mails from my inbox to the directory in which previous mails in that conversation are stored.
  • The new car stereo that I purchased has a USB interface that allows you to plug in a USB stick of mp3 files to play. On first attempting to use this I found that all of the tracks in each album were playing in alphabetical order rather than the order in which the songs appeared on the album. A Google search revealed that the tracks were actually being played in the order that they had been added to the FAT32 file system on the stick. Renaming the files using a neat piece of free software and re-copying them to the stick resolved the issue (a rough sketch of this kind of renaming follows the list). Reading various forums, it appears that this was a common problem, but the existence of workarounds using other tools to sort the files was apparently sufficient for the manufacturer not to feel the need to change the behaviour.
  • The continuous integration tool Jenkins has a behaviour whereby it will 'helpfully' combine identical queued jobs for you. This has proved enough of a problem for enough people that it prompted the creation of a plug-in to add a random parameter to Jenkins jobs to prevent this from happening, which we have installed.
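For what it's worth, the stereo workaround is simple enough to sketch. The snippet below is illustrative only; the paths and track listing are invented. Copying the files with zero-padded track-number prefixes, in track order, means the alphabetical order and the FAT32 insertion order both match the album.

    # Rough sketch of the renaming workaround for the car stereo. The paths and
    # the track listing are invented for illustration.
    import shutil
    from pathlib import Path

    album_dir = Path("~/Music/Some Album").expanduser()
    stick_dir = Path("/media/usb-stick/Some Album")
    track_order = [
        "Opening Song.mp3",
        "Second Track.mp3",
        "Closing Song.mp3",
    ]

    stick_dir.mkdir(parents=True, exist_ok=True)
    for number, name in enumerate(track_order, start=1):
        source = album_dir / name
        target = stick_dir / f"{number:02d} - {name}"
        shutil.copy2(source, target)   # written in track order, so FAT order matches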

The software I test

Given that my current product is a very generic data storage system with API and command line interfaces, it is natural that folks working on implementations will look to navigate around any problems using the tools at their disposal. Even on previous systems I've encountered some interesting attempts by users to get around the things that impede them in their work.

  • On a financial point-of-sale system that I used to work on, it was a requirement that the salesperson completed a fresh questionnaire with the customer each time a product was proposed. Results of previous questionnaires were inaccessible. Much of the information would not have changed, and many agents tried to circumvent the disabling of previous questionnaires to save time on filling in new ones. The result was an arms race between the salespeople trying to find ways to reuse old proposal information and the programmers trying to lock them out.
  • On a marketing system I worked on we supported the ability to create and store marketing campaigns. One customer used this same feature to allow all of their users to create and store customer lists. This unique usage resulted in a much higher level of concurrent use than the tool was designed for.
  • The compression algorithms and query optimisations of my current system require that data be imported in batches, ideally a million records or more, and be sorted for easy elimination of irrelevant data when querying. We have had some early implementations where the end customer or partner has put an infrastructure in place to achieve this, only for field implementation teams to try to reduce latency by changing the default settings on their systems to import data in much smaller batches of just a few records.
  • One of our customers had an issue with a script of ours that added some environment variables for our software session. It was conflicting with one of their own variables for their software, so they edited our script.
  • One of our customers uses a post installation script to dynamically alter the configuration of query nodes within a cluster from the defaults.
  • In the recent example that prompted this post, a customer using our standalone server edition across multiple servers did not have the concept of a machine cluster in their implementation. Instead they performed all administration operations on all servers, relying on one succeeding and the others failing fast. In a recent version we made changes aimed at improving this area, such that each server would wait until it could perform the operation successfully rather than failing. Unfortunately the customer was relying on failing fast rather than waiting and succeeding, so this had a big impact on them and on the workaround they had implemented (a simplified sketch of the two behaviours follows this list).
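To make that last example concrete, here is a simplified simulation of the two behaviours; the function names are invented for illustration and are not our product's API. Under the old behaviour every server attempts the operation, one takes the lock and succeeds, and the rest return immediately. Under the new behaviour each server waits for the lock and performs the work in turn, which is exactly what broke the customer's 'run it everywhere' workaround.

    # Simplified simulation of the customer's 'fail fast' workaround - invented
    # names for illustration, not our product's actual interface.
    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    lock = threading.Lock()   # stands in for the cluster-wide operation lock

    def admin_op_fail_fast(server):
        """Old behaviour: if another server holds the lock, fail immediately."""
        if not lock.acquire(blocking=False):
            return f"{server}: failed fast"
        try:
            time.sleep(0.5)                 # pretend to do the expiry work
            return f"{server}: succeeded"
        finally:
            lock.release()

    def admin_op_wait(server):
        """New behaviour: wait for the lock, then do the work anyway."""
        with lock:
            time.sleep(0.5)                 # every server now works in turn
            return f"{server}: succeeded (after waiting)"

    servers = ["server-a", "server-b", "server-c"]
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        # The customer ran the operation on every server, relying on one success
        # and quick failures elsewhere. Swap in admin_op_wait and the batch
        # serialises instead, so their nightly job takes several times longer.
        for result in pool.map(admin_op_fail_fast, servers):
            print(result)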

My former inclination as a tester encountering such workarounds was to adopt the stance of 'workaround denier', dismissive of any non-standard uses of my software. More recently, thanks partly to circumstance and partly to the very positive attitude of my colleagues, I've grown to appreciate that being aware of, and catering for, known workarounds is actually beneficial to the team. It is far easier to cater for these edge uses during development than to add support later. In my experience this is one area where the flawed concept of the increasing cost of fixing may actually hold true, given that we are discussing having to support customer workflows that were not considered in the original design, rather than problems with an existing design that might be cheaply rectified late in development.

What are we breaking?

Accepting the presence of workarounds and slightly 'off the map' uses of software raises an interesting question of software philosophy. If we change the software such that it breaks a customer workaround, is this a problem? I don't believe there is a simple answer. On one hand our only formal commitment is to the software as delivered and documented. From the customer's perspective, however, their successful use of the software includes the workaround, and their expectation is therefore to be able to maintain that status. They have had to implement the workaround to overcome limitations in the existing feature set and could be justifiably annoyed if the product changes to prevent it. Moving away from the position of workaround denial allows us to predict changes that, whilst possibly justifiable, still have the potential to damage relationships once the software reaches the customer.

Some tool vendors, particularly in the open source world, have actively embraced the concept of the user workaround to the extent of encouraging communities of plug-in developers who can extend the base feature set to meet the unique demands of themselves and subsets of users. In some ways this simplifies the problem in that the tested interface is the API that is exposed to develop against.

(While this does help to mitigate the problem of unknown workarounds, it does result in a commitment to the plug-in API that will cause a much greater level of consternation in the community should it change in the future. An example that impacted me personally was when Microsoft withdrew support for much of the Skype Desktop API. At the time I was using a tool (the fantastic Growl for Windows) to manage my alerts more flexibly than was possible with the native functionality. The tool relied upon the API and therefore no longer works. Skype haven't added the corresponding behaviour into their own product, my user experience has been impacted, and I tend to make less use of Skype as a result.)

Discovering the workarounds

The main problem for the software tester when it comes to customer workarounds is knowing that they exist. It is sometimes very surprising what users will put up with in software without saying anything, as long as they can somehow get to where they want to be. The existence of a workaround is the result of someone tackling their own, or somebody else's, problem, and it may be that they don't even inform the software company that they are doing this. It can take a problem occurring for the presence of the workaround to be discovered.

  • For the sales agents on the point-of-sale system trying to unlock old proposals, we used to get support calls when they had got stuck, having exposed their old proposal data but then being unable to edit it or save any changes.
  • The customer of the marketing campaign system reported issues relating to concurrency problems in saving their campaign lists.
  • For the team that edited the environment script, we only discovered the change when they upgraded to a later version and the upgrade hit problems with their edits and threw errors. Again this came in via the support desk.
  • For the team who reduced the size and latency of their imports, we only discovered the problem when they reported that the query performance was getting steadily worse.
  • For the recent customer who was taking a 'fail fast' approach to their multi-server operations, again the problem came in via the support desk exhibiting as a performance issue with their nightly expiry processes.

So the existence of a workaround is often only discovered when things go wrong. In my organisation the primary channel is the technical support team, and in running that team I get excellent visibility of the issues being raised by customers and of any workarounds they have put in place.

For other organisations there may be additional channels through which information on customer workarounds can be gleaned. As well as being a useful general source of tester information on how your software is perceived, public and private forums are also the places where people will share their frustrations and workarounds. I've already mentioned that I used a public forum to discover a solution to the problem with my car stereo. My colleague also discovered the 'random parameter' add-in for Jenkins on a public forum, and it was public threads that we looked to in order to identify workarounds for the lack of branch support in Jira.

Prevention is Better than Cure

Responding to problems is one thing; anticipating them is better. Doing so requires an understanding of customer goals and frustrations that sit outside the defined software behaviour. I believe the signs are usually there in advance if you look for them in the right places: perhaps a question from an implementation consultant working on a customer site about why a feature was designed in a certain way, an unsatisfied change request on the product backlog, or a post on a user forum. If we make the effort to understand where customers are getting frustrated, then we can test the scenarios in which they might try to get around the problem themselves, be on the lookout for the potential consequences, and establish how they might get themselves into trouble if they do. Some testers in my team actively help out with support and so gain good visibility of problems. In order to encourage a wider knowledge of customer headaches throughout the test team we have started to run regular feedback sessions in which a support agent discusses recent issues and we discuss any underlying limitations that could have led to them.

Of course, ideally we would have no need for workarounds at all; the software should simply work in the way that the users want. Sadly it is rarely possible to satisfy everyone's expectations. The responsibility to prioritise changes that remove the need for specific workarounds probably doesn't fall directly on testers. In light of this, is it something we need to maintain awareness of? As I've stated, my inclination was to dismiss workarounds as something that should not concern testing; after all, we have enough challenges testing the core functionality. It is tempting to adopt the stance that the testing focus should be solely on the specified features, that we need to limit the scope of what is tested, and that the documented feature set used 'as designed' should be our sole concern. This is a strong argument, but I think it belies the true nature of what testing is there to achieve. Our role is to identify things that could impact quality, or the value the product provides to some stakeholder, to use Weinberg's definition. From this perspective the customer value in such scenarios comes from having the workarounds in place that allow them to achieve their goals. Rather than dismissing workarounds, I'm coming around to the idea that software testers would better serve the business if we maintained an understanding of those currently in place, and raised awareness of any changes that may impact them. In our latest sprint, for example, we introduced some integrity checking that we knew would conflict with the post-installation script mentioned above, so as part of elaborating that work the team identified this as a risk and put in place an option to disable the check. This is exactly the kind of pragmatism that I think we need to embrace. If we are concerning ourselves with the quality of our product, rather than adherence to documented requirements, it appears to me to be the right thing to do.

Image: https://www.flickr.com/photos/jenlen/14263834826

Monday, 8 September 2014

The FaceBook Effect

I recently celebrated the birth of my 4th child. Whilst my wife was recovering from the birth I enjoyed the opportunity to take my older children to school and to speak to friends and other parents wanting to pass on their congratulations and wishes. On one such day I was chatting with a friend of my wife's and the conversation strayed into an area she felt particularly passionate about, one which also struck a chord with me in both a personal and a professional capacity. The friend told me that she had been so excited to hear news of the birth that she had logged onto Facebook for the first time in months to check my wife's status. She explained that she had previously stopped using Facebook as she felt that it compelled her to present a false image of her life. Whilst I occasionally use Facebook, I was inclined to agree with her that there is a lot of pressure on social media to present a very positive image of yourself and your life. Indeed, this pressure is such that many people seem more focused on staging personal occasions to take photographs and post the details to social media than on actually enjoying the occasion themselves.

But What's this got to do with Testing?

Whatever your position on social media, you're probably wondering why I'm recounting this conversation in a testing blog post. The reason the conversation struck a chord with me on a professional level is that I think there is a similar pressure in professional communities and social media groups, with those associated with software testing being particularly prone to it, for reasons I'll go into.

With the advent of professional social media in the last decade, professionals now enjoy far greater interaction with others in their field than was ever possible before. In general I think this is a hugely positive development. It allows us to share opinions and discuss ideas, and it accelerates the distribution of new methods and techniques through the industry. Social media channels such as Twitter, LinkedIn and discussion forums also give less experienced members of the community far greater access to experienced individuals with a wealth of knowledge and expertise than was ever possible when I started testing. More importantly, social media allows us to be vocal, to criticise activities which could damage our profession, and to find other individuals who share the same concerns. The recent rallying behind James Christie's anti-ISO 29119 talk would simply not have been possible without the social media channels that allowed like-minded individuals to find a collective voice in the resulting online petition (I don't suggest that you go immediately and sign the petition; I suggest that you read all of the information that you can and make up your own mind. I'd be surprised if you decided not to sign the petition). Social media has the power to give a collective voice where many individual voices in isolation would not be heard.

On the flip side of these positives, social media carries an associated pressure - let's call it the 'Facebook effect' - where those contributing within a professional community feel the need to present a very positive image of what they are doing. It is easy to view your own work in a negative light by comparison and, as a consequence, to develop feelings of professional paranoia. This is not specific to testing, and the phenomenon has been highlighted by writers in other industries, such as this post highlighting problems in the marketing community.

For the many who make their living in or around the social media industry, the pressure to be or at least appear to be an expert, the best, or just a player is reaching a boiling point.

The message is clear. Advancing in the modern professional world is as much about climbing the social media ladder as the corporate one, and in order to do that we need to present the right image.

Living up to the image

Based on the article above, and the references at the end of this post, it is clear that the negative side of social media affects other industries too, so what is it about testing that I think makes us particularly conscious of how we present ourselves?

Before putting this post together I approached a number of testers to ask whether they had ever experienced or observed the symptoms of the Facebook effect in their testing careers. I received some very interesting and heartfelt responses.

Most accepted the need to present a sanitised, positive image on social media

You don't want to wash your dirty linen in public

And the need for constant awareness that everything posted publicly was open to scrutiny

I know that anything I say is open to public 'ridicule' or open for challenge

Some went further and admitted that there had been occasions where they felt paranoid or intimidated as a result of interactions with testing-based social media. One tester I spoke to highlighted the problem of opening a dialogue requesting help or input from others and being made to feel inferior

when you make an opening to someone in our industry asking their thoughts or opinions and they seem to automatically assume that this means you are a lesser being who has not figured it all out already

I don't think that either I or the person who wrote that believe such assumptions are always made, but I've certainly experienced the same thing. A few years ago, as an experienced tester finding my feet in the agile world, I found myself frustrated by the 'you are doing it wrong' responses to questions I posted in agile testing lists. I don't want a sanctimonious lecture when asking questions about my problems; I want some open help that acknowledges that asking for assistance in one area doesn't mean I don't know what I am doing in others.

Why is testing so affected?

I think there are a number of factors that contribute to testing being particularly prone to the 'Facebook effect'.

  • what are we admitting?

    I obviously can't comment on how it is for other professions, but I think for testing the pressure to present a positive image is particularly prevalent due to the implications of any negative statements for the perception of our work or our organisations. Any admission of fault in our testing is implicitly an admission of the risk of faults in our products or, worse, of risks to our customers' data. Whilst we may be prepared to 'blame the tester' in the light of problems encountered, we do not want to do this proactively by openly admitting mistakes. Some testers also have the additional pressure of competitor companies who can pick up on revelations of mistakes and use them to their advantage. As a tester working for a product company with a close competitor told me:

We are watching them on social media as I am assuming they are watching us. So I do need to be guarded to protect the company (in which) I am employed.
  • what are we selling?

    Some of the most active participants in any professional community are consultancy companies or individuals, and testing is no different. These folks have both a professional obligation not to criticise their clients and a marketing compulsion to present the work they were involved in as successful, so as to demonstrate value for money to prospective customers. The result is a tendency towards very positive case studies from our most vocal community members on any engagement, and an avoidance of presenting the more negative elements, to protect business interests.

  • where are we from?

    Testing is a new profession. I know that some folks have been doing it for a long time, but it just doesn't have the heritage of law, medicine or accountancy that provides stability and a structure of consistency across the industry. While this makes for a dynamic and exciting industry in which to work, it also means that IT workers operate in a volatile environment where new methodologies compete for supremacy. Attempts to standardise the industry may appear an attractive response to this, offering a safety net of conformity in a turbulent sea of innovation; however, the so far flawed attempts to do so are rightly some of the greatest points of contention and generate the most heated debate in the world of testing today. The result is an industry where new ideas are frequent and it can be hard to tell the game-changing innovations from the snake oil. Is it really possible to exhaustively test a system based on a model using MBT? Are ATDD tools a revolutionary link between testing and the business, or really clumsy pseudocode resulting in inflexible automation? In such an industry it is naturally hard to know whether you have taken the right approaches, and easy to feel intimidated by others' proclamations of success.

  • where are we going?

    For some, an online presence is part and parcel of looking for advancement opportunities. LinkedIn is particularly geared towards this end. Presenting only the most successful elements of your work is therefore a prudent approach if you want to land the next big job. Similarly for companies who are recruiting: if you want to attract talented individuals then presenting the image of a successful and competent testing operation is important.

Facing Your Fears

One of the problems that we face individually when interacting with professional social media is that the same 'rose-tinted' filtering applied to the information we read about other testers and their organisations is not applied to our own working lives. We see our own jobs 'warts and all' which, during the leaner times, can lead to professional paranoia. This is certainly something that I have experienced in the past when things have not been going as well as I would like in my own work. I found that it was more of a problem earlier in my career and has lessened as I have gained experience, which provides perspective on my work in relation to others. The FaceBook Effect does still rear its head during periods of sustained pressure when I have little chance to work on testing process improvements, as I have experienced this year.

The manner in which we deal with these emotions will inevitably depend on the individual. I think that, whilst it is easy to fall into a pattern of negativity, there are responses that show a positive attitude and that can help to avoid the negative feelings that might otherwise haunt us.

  • look to the experienced

    Ironically it seems to be the most experienced members of a profession that are most willing to admit mistakes. This could be because many of those mistakes were made earlier in their careers and can be freely discussed now. It could be that having a catalogue of successful projects under your belt furnishes folks with the confidence to be more open about their less successful ones. It could also be that the more experienced folks appreciate the value of discussing mistakes to help a profession to grow and build confidence in its younger members. These are all things that I can relate to and as my experience grows I find an increasing number of previously held opinions and former decisions that I can now refer to personally, and share with others, as examples of my mistakes.

  • get face to face

    I wrote a while ago about an exchange that I did with another company to discuss our relative testing approaches. I have since repeated this exercise with other companies and have another two exchange visits planned for later this year. The exchanges are done on the basis of mutual respect and confidentiality, and therefore provide an excellent opportunity to be open about the issues that we face. There is an element of security about being face to face, particularly within the safe environment of the workplace, which allows for open conversations even with visitors you have only known for a short time.

  • consultancy

    I don't rely extensively on external consultancy; however, I have found it useful to engage the services of some excellent individuals to help with particular elements of my testing and training. In addition to the very useful 'scheduled' elements of the engagement, almost as useful is having an expert with a range of experiences available to talk to in a private environment. As I mention above, consultants should maintain appropriate confidentiality, and they will also have a wealth of experience of different organisations to call on when discussing your own situation. Having had the benefit of a 'behind closed doors' perspective on other companies gives them a far more balanced view of people's relative strengths, and can put your own problems into a more realistic context as a result. There are few more encouraging occasions for a test leader in an organisation than being told that your work compares favourably with other organisations (even if they aren't at liberty to tell you who these are).

  • closed groups

    I was fortunate to be involved in a project recently that involved being a member of a closed email list. I found this to be a liberating experience. The group discussed many issues that affect testers and test managers openly and without fear of our words being misinterpreted by our organisations or others in the community. There were disagreements on a number of the subjects, and I personally found it much easier to discuss contentious issues with reference to my own position in the closed group environment. The problem with discussing internal issues in an open forum is obviously the risk that your candid talk is seen by the wrong eyes; a closed group avoids this problem and allows for open and candid discussion with sympathetic peers. In fact I obtained some really interesting input from exactly that group prior to writing this post.

  • trusted connections

    I am lucky to have some fantastic testers on my private email address list who I can turn to in times of uncertainty or simply to bounce ideas off before putting them into the public domain. For example I recently had some questions around Web testing. This is not something that I've looked at for some time, having been focussed on big data systems. I received some invaluable guidance from a couple of people in my contacts list without any stigma around my 'novice' questions, as the individuals I spoke to know me and respect the testing knowledge that I have in other areas. Their advice allowed me to provide an educated view back to my business and make better decisions on our approach as a result. As with the closed group, I approached a number of personal contacts for their experiences and opinions to contribute to writing this post.

Don't worry be happy

When my brother left his last job in water treatment engineering his colleagues gave him one piece of parting advice.

Lose the nagging self doubt - you are great at your job

So it could well be that I suffer from some familial predilection for being self-critical. You may not suffer from the same. Whether you do or not, I think that when using social media as a community we should maintain awareness of how others will feel when reading our input. We should try to remember that just because others ask for help doesn't mean they don't know what they are doing. We should consider admitting failures as well as successes, so that others can learn from our mistakes and take confidence and solace when making their own.

If interacting with social media personally leaves you with a taste of professional paranoia, I recommend reading this excellent short post from Seth Godin, and remind yourself that the simple fact that you are looking outside your own work to the wider professional community to improve your testing probably means you're doing a fine job.

Other links

Image: Sourced from twitter @michaelmurphy https://twitter.com/michaelmurphy/status/492648065619492864
