Tuesday, 19 June 2018


How Not To Get Into Test Automation

I’m going to have a bit of a moan. I’m not inclined to rant, however this has been a bugbear of mine for a while and it came up again recently. I’m hoping that sharing my thoughts will both get them off my chest and, at the same time, provide some useful career advice for budding testers.

Here’s an example of a conversation I recently had with a candidate for a testing job:

Me - So what would you ideally like to be doing in a new role?

Candidate - Well I’ve got lots of experience of manual testing so I’d like to use my experience there, and ideally I’d also like the opportunity to get into automation.

Me - have you done anything so far to develop automation tooling skills?

Candidate - No, the opportunity hasn’t come up in my previous roles so ideally I’d get a role with a company that’s going to help me develop in this area.

Me - Sigh.

I have lost count of the number of times I’ve had this conversation and I’ve never hired a candidate off the back of such a discussion. I’m sorry to these candidates - but if I am hiring with automation in mind then the last person I’m going to hire is someone who claims to want to develop skills in that area but has taken no steps in acquiring those skills themselves.

If I want someone to create automation tools then the characteristics that I'm looking for are typically:

  • They are capable of writing code to automate tests - this doesn’t have to involve knowing a specific language - just that they can understand coding structures
  • They are capable of understanding the benefits and limitations of automating test execution
  • They are capable of self learning

If you are not able to demonstrate these skills then it's naïve to expect an employer to land you with a juicy test role where you get trained in test automation.

To be absolutely clear, I'm not in any way stating that personal development is the sole responsibility of the employee and any new skills need to be progressed in their own time. What I am saying is that the chances that I would choose to invest time and money into building the skills of an individual increase significantly if that person can demonstrate the ability to learn and improve themselves. Opportunities for self progression don't often land unexpectedly, they tend to gravitate towards those people who appear to deserve and merit the opportunity.

Teach yourself

Across my previous testing roles I've created many test tools and harnesses:

  • I created a multi-server database test scheduling harness in Linux shell after teaching myself shell scripting
  • After a manager rejected my proposal to automate ODBC tests, I did it anyway, demonstrated the value and got justification to build on and maintain these tools
  • I taught myself Java in my evenings and created two test harnesses for acceptance and load testing the JDBC interface
  • I learned Perl to output our test build results in a readable HTML report
  • I learned Python to create a simple test harness to parse JSON to test a REST interface
  • I learned Ruby to create a random SQL query generator as a side project
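
None of these harnesses needed to be elaborate to be useful. As a rough illustration of the kind of thing the Python one did (the function, field names and response shape here are invented for the example, not taken from the original harness), a minimal JSON response checker might look like this:

```python
import json

def check_response(raw_body, expected_status, expected_fields):
    """Parse a raw JSON response body and report any mismatches.

    Returns a list of failure messages; an empty list means the check passed."""
    failures = []
    try:
        payload = json.loads(raw_body)
    except ValueError as err:  # json.JSONDecodeError subclasses ValueError
        return ["response is not valid JSON: %s" % err]
    if payload.get("status") != expected_status:
        failures.append("status: expected %r, got %r"
                        % (expected_status, payload.get("status")))
    for field in expected_fields:
        if field not in payload:
            failures.append("missing field: %r" % field)
    return failures

# A canned response body, standing in for what a REST endpoint might return
body = '{"status": "ok", "id": 42, "name": "widget"}'
print(check_response(body, "ok", ["id", "name"]))   # → []
print(check_response(body, "ok", ["id", "owner"]))  # → ["missing field: 'owner'"]
```

A few dozen lines like this, plus an evening or two on the basics of the language, is genuinely all it takes to get started.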

In every single instance I learned the language off my own bat in order to add value in what I felt was the best way. Sometimes this started with some help from developers. Other times this involved sitting up late on Friday nights starting from "Hello World" with a beer and a few web pages to get me started on the basics of the language. Now I’m not advocating that everyone spend their Friday nights working, but if you’re driven to do something and are learning new skills it genuinely doesn’t feel like work (the beer helps with that!).

The intention here isn’t to show off - after all most of these harnesses were picked up, maintained and often improved by other members of my team who developed themselves and their skills in the process. Not only that but they then went on to create their own tools - not because I asked them to but because they wanted to and because they could. And the reason that they could was because I’d hired people with a demonstrable ability to self-learn to improve their skills whilst adding value.

The technology doesn’t matter...

When hiring testers I’m not interested in whether they know a specific language or tool. I am, however, absolutely focussed on finding people that have shown the ability to drive their own improvement and make opportunities for themselves. This doesn’t necessarily have to involve tool creation, however the people who I have hired for this have shown clear evidence of researching and intelligently creating or introducing tools to improve the testing in their companies.

I remember once being particularly excited about a CV that landed on my desk. The tester in question had clearly self-learned new technologies in order to demonstrate value and suggest a number of improvements in previous roles that improved testing and saved time/money. Naturally I hired this person and she went on to become one of the best testing hires I’ve ever made. That CV became my go-to example when talking to recruiters about what a good tester CV really looks like (a great way to discuss tester recruitment - I call it "recruitment by example").

... and technologies alone aren’t enough

On the flip side, I’m also very sceptical of hiring individuals who have been trained in one technology, such as Selenium, and shown no flexibility in looking outside that at other tools or approaches. Testing ain’t programming, people, and whilst coding with one technology is useful, it’s not what makes a great tester.

One of the best testers I’ve hired has no coding skills but has delivered great automation. She used her strong testing knowledge and relationship building skills to motivate developers to help her in introducing automated acceptance testing. The same tester also introduced Session Based Exploratory Testing to cope with a cutting of testing headcount in the business. I’d rather have a tester with no coding skills who has effected change in this way over someone that’s worked exclusively on Selenium automation for 5 years.

Don't just sit there

So that’s it - rant over. I hope that if you’re a tester who is looking for work then you take some of this on board. Don't expect prospective employers to do the work for you. Be hungry, be proactive, be a pioneer of tools and approaches in your organisation, and ultimately be prepared to take the first few steps on whatever path you wish to pursue yourself...

...oh, and if you live within commutable distance of Gloucestershire UK let me know - I’m always happy to speak to you as I'm frequently looking for proactive and passionate testers to join my team.

Photos by Fabrizio Verrecchia and Etienne Boulanger on Unsplash

Monday, 4 June 2018


The Wave of Risk Perception

I had an interesting chat with a former member of my team recently. We were in the pub and she was sharing with me her woes in a new role. The problem, as she explained it, was that the members of the leadership team in her company were pushing back against her efforts to devote development time to improving existing features, robustness and stability of the product. They were solely interested in new features. The issue seemed to be that they didn't have an appreciation of the levels of risk involved in their existing product. To help her to understand the reasons for their position I drew up a diagram that I've used in a few talks to explain the perception of risk. The conversation reminded me that I'd not written about this, so here goes...

Destined to be Different

It is not a coincidence that in many situations people close to software development, particularly testers, have a very different perception of risk from some leadership roles. One of my favourite historical blog posts is this one on why testers and business leaders inevitably differ in their perception of risk. I recommend reading that one for a deeper exploration, but to summarise here:

  • Humans are susceptible to the 'Availability Heuristic' that leads us to attribute higher risk to outcomes which we can recall experiences or stories around. Therefore roles with greater visibility of issues that have occurred through a process, or of software problems in general, will attribute higher risk to a software release than those who have not
  • Research such as this paper has additionally shown that managers who don't have a detailed understanding of negative outcomes will aggregate multiple separate negative possibilities into a single 'risk of failure', to which they then attribute a lower level of risk than is appropriate due to lack of availability.
  • The 'Affect Heuristic' means that we naturally attribute lower risk to things that we associate a strong positive outcome to, and vice-versa, so a manager focusing emotionally on the benefits of a software release for customers will typically attribute lower risk to the release than those roles with a focus on the challenges of technical delivery

These three factors combined give good cause to expect leadership roles to have a much lower perception of risk around a software delivery than testers and other development roles.
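
The aggregation effect in particular is easy to underestimate. As a simple illustration (the numbers are invented for the example), suppose a release has five independent failure modes and each is judged individually at a 10% chance of occurring. Folding them into one 'risk of failure' at roughly 10% badly understates the true exposure:

```python
# Five independent failure modes, each judged at a 10% chance of occurring
# (illustrative numbers only)
p_each = 0.10
n_modes = 5

# Probability that at least one of the five failure modes occurs
p_at_least_one = 1 - (1 - p_each) ** n_modes
print(round(p_at_least_one, 2))  # → 0.41
```

So someone who sees each failure mode separately will, quite correctly, perceive roughly four times the risk of someone who has mentally collapsed them into a single 10% 'risk of failure'.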

The Wave of Perceived Risk

Understanding differences in risk perception is all well and good, however it is only in understanding and recognising the potential outcomes of this disparity that the theory becomes practically useful. What would we expect to see as a result of this somewhat inevitable difference in perspectives?

When creating a talk around risk, and the reasons that I came up with the 'risk questionnaire', I used the following diagram as a means to represent the scenario that can arise when there is a strong disparity of risk perception between testers and management. The idea here was to represent perceived risk over time for a tester and a product role on a new development, and to explain the shape of testing that we would expect to see as a result.

Initially, when commencing development, what we expect to see is a large difference in their perceived levels of risk around the same situation. The perception of risk in management is low, and decisions on approach and expected pace of development are driven based on this. The tester is aware of the lack of rigour in place and will have an elevated perception of risk, driven by their experience of the many issues that could surface.

Inevitably one of these potential problems goes bang and the manager is suddenly and painfully aware of a specific risk.

Additional effort is justified to tackle 'the problem'. This is likely to be in the form of a process improvement or the creation of some new tests, however the scope of this work will in all likelihood be limited to covering the issue that emerged, possibly in excess of an appropriate level given the tester's elevated perception of risk.

The manager is satisfied and returns to having a lower perception of the risk situation, however knowledge of the problem does give some increased awareness of the chances of hitting problems. The tester, on the other hand, knows that only one of the many areas of potential concern has been mitigated, and their confidence is only slightly increased due to their maintained perception of the other potential negative outcomes in their process.

Inevitably another issue crops up in another area, with similar consequences.

Over time this cycle of encountering issues and improving testing reaches a point where, whilst not necessarily aligned, the perceptions of risk of tester and manager are closer, and in balance with a consensus around the level of testing needed and the resulting chance of hitting issues.

The testing is likely to be formed around pockets of high (potentially excessive) coverage where risks have manifested themselves, with lighter coverage in other areas.

Surfing the Wave

I was slightly nervous when I first presented this at a conference, however I've presented this a few times now and had many testers confirming that this looks very much like the way that testing evolved in their organisation. At the last talk one audience member said it was the best way of representing this situation that he had seen.

The same person asked an interesting question - given the choice would you reduce the amplitude or the frequency of the curve? To rephrase - is the way to tackle the problem to reduce the number of issues, or the difference in perception, to start with? For me the two are separate but go hand in hand, and it is necessary to address the second to justify focussing on the first. Working towards a more common perception of risk between roles will not in itself reduce the problems encountered. Without addressing this disparity, however, it can be hard for testers to establish and justify the level of rigour around development required to do so. Such a change, for a manager with a low perception of risk, would appear to be an increase in development time and cost for little benefit. If we want backing to improve testing, without just bouncing from issue to issue to justify point improvements, then helping to establish a more consistent perception of risk is clearly a key area to focus on.

The good news here is that, from our understanding of the causes of the difference in perception, we have a great starting point in understanding how this might be achieved. I identified at the start of this post three biases that impact our perception of risk, which we can act to influence.

  • Tell stories to convey risk information. Human risk perception simply isn't driven by the logical brain processes that deal with facts and figures. Risk perception is influenced far more through 'availability' of stories and experiences that convey the potential for negative outcomes. My personal experience supports this - one excellent example involved a conversation I once had with a CEO around improving data security that I was expecting to be a challenge; however, he had just recently read an article on a data breach from a similar company which had made headline news, and he was therefore very open to addressing the risks in that area.

  • Don't aggregate all conversations on issues into 'bugs'. Discussing different potential problem areas separately will help to establish a more realistic understanding of the multiple negative outcomes that could arise in our development. Discussing these collectively as 'bugs', by contrast, could result in an under-appreciation of the different risks in those not close to the details of the development.

  • Include testers early in value conversations - Testers are often excluded from early conversations around developments, and are only brought in when the solutions are scoped. Yet it is in those early conversations that the real value of the development is established and discussed. It follows then that one positive way for testers to avoid the curve, by influencing their own risk perception, is to get involved from the start and understand the value that each development could deliver.

It is worth remembering here that our focus needs to be on establishing a realistic and consistent perception of risk across roles, which involves more than just increasing awareness of risk in management. Despite their superpowers to find issues and save projects, testers are only human and just as susceptible to biases in their risk perception as everyone else, and the tester's default position is as likely to involve over-estimation of risk as management roles are to involve under-estimation.

A key element of the testing role is to understand the biases that can influence the perception of risks in our own roles as well as others'. Understanding these can provide the key to having genuinely influential conversations with business leaders that effectively convey risks in a way that is meaningful to them, and hopefully avoid the painful process of progressing up the wave that comes when different roles start from very different perspectives.

https://www.pexels.com/photo/action-beach-fun-leisure-416676/

Monday, 23 April 2018


An Emotional Journey

Of all roles in software development, the Product Owner is one that I find is most at risk of positive biases around their software. When creating a product there's a natural tendency to be overly optimistic around the positive reception that it will get from its target user community. I am in the process of developing an innovative new engagement and productivity product in my company and naturally am very excited and optimistic around the benefits it will give. As my colleagues and I started out in this endeavour we wanted to make sure we included a consideration of potential negative feelings in our development process - and our UX (User eXperience for the uninitiated) specialist came up with a great way of doing this...

Getting into UX

One of the things that I've found most rewarding in moving to Web and mobile work after many years of being in a world of APIs and command lines, is learning more about UX. I maintained an interest in UX during my years working on big data, but the absence of significant front end development work left this as a secondary concern to those of accuracy, scale and performance. The last two years working on Web and mobile technology has given me the opportunity to make up some ground on the UX learning curve - a process which has been accelerated thanks to the enthusiasm of our in-house UX team.

When we decided to create a product that attempted to bring together the worlds of data and employee engagement, the importance of establishing the right emotional connection with users was paramount to ensure the team experience of using the product was empowering rather than intimidating. In chatting with the UX specialist working on the product, who I'll call Phoebe, we discussed the need to identify the emotions that we wanted to promote in using the product. On the flip-side we also talked about the emotions that we did not want to promote, and how useful it would be to identify some of these up front so that we could design with these in mind.

An Emotional Journey

Phoebe had the bright idea of running an 'emotional user journey' workshop to help flush out both positive and negative emotions that could arise at key points through the process of using our product. This was something neither of us had done before but it seemed like a great fit for what we were trying to achieve.

The starting point was getting the right mix of people in the room. We pulled together a combination of Development and Commercial roles as well as some of the senior client services and Product Ownership people from our successful bespoke engagement and data programmes.

  • Phoebe started by presenting the different user personas that we had created for the product, explaining the personalities, pet hates and goals of each one.
  • She then progressed to map out the primary elements of the flow of product behaviour that we had identified as our core journey.
  • At each stage she placed some leading 'question' cards with questions to make the attendees think about the emotions that the people and teams using the product might feel at each stage.
  • Phoebe then split the attendees and invited individuals to consider the journey from either a very positive and optimistic, or a very negative and cynical position.
  • These two sets of individuals added emotions to the journey at the key points - one colour of card for positive and one for negative emotions.
  • At points where the cards were concentrated, we placed further cards to highlight the ideal emotions that we would want to promote to help avoid any potential emotional pitfalls that had emerged.

What was fascinating about the session was that, as the emotional cards were added to the wall, what emerged were a small number of critical 'fulcrum' points which had the possibility of engendering very strong positive emotions, but also risked very negative ones. Some areas that we had assumed would promote a positive response around visibility and openness actually had a high level of risk of people feeling exposed and monitored or judged. Additionally, a strong set of potential emotions emerged around the product as a whole and how people might feel if it was put in front of them. What we realised was that, without our perspective on the potential benefits, there was a risk of suspicion around new business technology and its potential for 'big brother' monitoring that we needed to consider and mitigate with our product features and messaging.

Emotional Take-Aways

The workshop provided some invaluable insights into our target product from the perspective of the people who would be using it. Identifying the main points that carried the greatest emotional risk allowed us to focus on those areas to ensure that they encouraged the responses we were looking for. Through the development of the initial features we tailored our approaches to specifically include steps to encourage open and democratic behaviours and discourage command-and-control, autocratic ones. Our awareness of the general risks around the perception of new products was also empowering in ensuring that we provide the right support and messaging to back up the benefits that our product can provide.

Our company brand and sales and marketing collateral all carry the message that our software and services are for everyone in a company, not just managers. With the help of the insight that came out of the emotional journey session, we're ensuring that this is a message that is reinforced rather than put at risk as we build the product features.

References

We didn't use it in the session, but here's an interesting post by Chris Mears on using Robert Plutchik’s emotion wheel to measure emotions through user journeys that I would consider using if repeating the exercise: https://theuxreview.co.uk/driving-more-valuable-customer-journeys-with-emotion-mapping-part-1/

Photo by Kasuma from Pexels
