Tuesday, 27 September 2016

The Risk Questionnaire

What does it mean that something is "tested"? The phrase "tested" troubles me in a similar way to "done", in that it implies that some status has been achieved, some binary switch flicked, such that a piece of work has crossed a threshold it had not crossed before.

Of course, there is no magic switch that renders the feature "tested" or the user story "done"; we simply reach a point where the activity performed against that item gives us sufficient confidence in it to move on to other things. My biggest problem with "tested" is that it can mean very different things to different people, often within the same organisation. When two people in the same organisation refer to something as tested, each can have a very different understanding of what is implied, particularly regarding the exposure to risk involved.

A common understanding

Working for a long time in the same organisation affords one the luxury of time to reach a common understanding, across the various business and technical roles, of what "tested" means. To the mature team, tested hopefully means that testing has been performed to a level the development team has established the business is happy with, in terms of the time and cost of carrying out the testing and the level of bug fixing that is likely to follow. This level will differ from company to company, with a critical driver being the level of acceptable business risk. I suggested in my post "The Tester, The Business and Risk Perception" that businesses, like individuals, will likely operate at a determined level of risk, and that the overall activities within that business will gravitate towards that level irrespective of the actions of individual testers.

In my experience an understanding evolves, as a team or department matures, around what the expectation on testing is and how much time is to be spent on it. If a team spends too much time on testing then it is questioned by management; if the team are pushed to do too little testing then they push back, or quality suffers and the customer feedback prompts review. Eventually, in response to these inputs, an equilibrium is reached. But what of new companies or teams? How do we identify an appropriate testing balance between rigour and pace when a team is either newly created or adopting a new approach such as agile? Is it possible to fast-track this learning process and calibrate ourselves and our activities more quickly around the risk profile of the business?

Finding an equilibrium

I was recently asking myself just such a question when facing the challenge of establishing a test strategy for my new company. The company had grown quickly and had also recently adopted agile methods, and so inevitably had not yet found its equilibrium in terms of an acceptable level of risk and testing. This caused some confusion across the development teams, as there wasn't a consistent understanding of the level that was expected of them in their development and testing activities.

It would have been easy to come in and immediately push to increase the level of testing, however I felt that it was important first to establish an understanding amongst the business leaders of the general attitude to acceptable risk. The approach that I took yielded some very interesting results:-

Learning from IFAs

Independent financial advisers (IFAs) need to understand risk. When an individual in the UK visits a financial adviser they may well be asked to complete something like this questionnaire:- https://www.dynamicplanner.com/home/investor/what-is-risk-profiling/assess-attitude-to-risk/

The use of these questionnaires is common practice amongst financial advisers and institutions to assess the risk appetite of their clients and to recommend a pension/investment strategy accordingly. The principle is simple: the customer is presented with a series of statements associated with their attitude to finances, with the answers provided through a set of standard options reflecting different risk levels.

Example questions might look like these:

Assume you had an initial investment portfolio worth £100,000. If, due to market conditions, your portfolio fell would you:

  • a) Sell all of the investments. You do not intend to take risks.
  • b) Sell to cut your losses and reinvest into more secure investment sectors.
  • c) Hold the investment and sell nothing, expecting performance to improve.
  • d) Invest more funds to lower your average investment price.

If you could increase your chances of a return by taking a higher risk, would you:

  • a) Take more risk with all of my money
  • b) Take more risk with half of my money
  • c) Take more risk with a quarter of my money
  • d) Not take the risk

The resulting range of answers provides the adviser with an overall impression of the individual's risk level, and they can build an investment plan accordingly. I recently went to an IFA to consolidate my pensions and took just such a questionnaire, coming out at a rather dull 6/10 (apparently the vast majority of people sit between 4 and 7).

I felt that this idea of a questionnaire was a simple and useful technique which could validly be applied to assessing the risk appetite of individuals across a development company. I wanted to establish a starting position that would allow me to progress the conversation around an acceptable level of business risk, and to establish whether or not there was consistency of opinion across the company in our approach to risk, and this looked like a good option.

The questionnaire

I created a risk questionnaire that we could ask members of the leadership team to complete. The audience was selected to be a range across development and client services, plus the CEO. For the questionnaire I focussed on four primary areas:-

  1. Business Risk
  2. Development Risk
  3. Perceived Status
  4. Time Spent on Testing Activities

1. Business Risk

The business risk statements were focussed on the interaction between the software process and the customers. These were open to all to answer, but primarily targeted at the non-development leadership roles. The aim here was to establish an understanding of the priorities in terms of customer deliverables and the conflicting priorities of time, cost and rigour.

Rate the following statements on a scale of 1 - 5, from 1 = Strongly agree to 5 = Strongly disagree

  1. On time delivery is more important than taking longer to deliver a higher quality product

  2. I am happy to accept the need for later effort in maintaining a product if we can deliver that product at a lower up-front cost

  3. Our customers would knowingly accept a reduced level of rigour in development compared to other products in order to keep the cost of the software down

  4. Putting the software in front of customers and responding to the issues they encounter is a cost effective way to prioritise fixing software problems

  5. The cost of fixing issues in production software is now reduced to the point that this is an economically viable approach

  6. Our product context is one in which we can adopt a relatively low level of rigour compared to other business facing software development organisations

  7. I would be reluctant to see an entire sprint given over entirely to testing and bug fixing unless this was driven by issues encountered by the customer

In reviewing the results we identified that some of these questions were open to interpretation. For example, with question 4 it was not clear whether this referred to giving customers early visibility prior to release, through demos or betas, or whether it meant pushing early to production and letting them find the issues in live use (I had intended the latter). Having similar questions worded in slightly different ways, such as question 5, helped to identify whether there was any misunderstanding there; however, if doing a similar exercise again I would look more carefully for possible ambiguity.
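One cheap consistency check along these lines is to compare each respondent's answers to the pair of similarly-worded questions: a large per-person difference between questions 4 and 5 would suggest that one of them was being read differently, rather than a genuine difference of opinion. A minimal sketch (the respondents and scores here are invented for illustration):

```python
# Hypothetical 1-5 Likert scores for two similarly-worded questions,
# indexed by respondent (all data invented for illustration).
q4 = {"A": 4, "B": 2, "C": 5, "D": 4}
q5 = {"A": 4, "B": 5, "C": 5, "D": 3}

# A big gap on a pair of near-duplicate questions flags likely ambiguity
# in the wording rather than a real difference in risk appetite.
for person in q4:
    gap = abs(q4[person] - q5[person])
    if gap >= 2:
        print(f"{person}: answers differ by {gap} - question may be ambiguous")
```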

2. Development Risk

Development risk questions focussed on the risks inherent in our development activity. The idea was to get an understanding of the expectation around the development process and the feeling towards developer only testing. Again these were open to all to answer, but did not shy away from slightly more involved development terminology and concepts.

Rate the following statements on a scale of 1 - 5, from 1 = Strongly agree to 5 = Strongly disagree

  1. The effective application of developer/unit testing can eliminate the need for further devoted testing activity

  2. Appropriate software design can eliminate the need for devoted performance and stability testing

  3. Adding further development skills in our agile teams provides more value in our context than devoted testers

  4. The testing of our products does not require specialist testing knowledge and could be performed by individuals with limited training in software testing

  5. I would be reluctant to schedule specific testing tasks on a team’s backlog without any associated development

In creating the questions I did try to avoid leading questions around any known problems or perceived shortfalls in the current status. At the same time, clearly the questions needed to be suitable for the nature of the business - questions suited to a long release-cycle big data database product would not have been appropriate here.

3. Perceived Status

This was a really interesting one. I pitched two very straightforward questions around company status.

  1. With 1 being lowest and 5 highest, rate where you think the company currently stands in its typical level of rigour in software quality and testing.

  2. With 1 being lowest and 5 highest, rate where you think the company should stand in its typical level of rigour in software quality and testing.

The first of these doesn't in itself reveal anything about the level of acceptable risk; however, what it does do is put into context the answers that do. Knowing how people feel about the current status, and how this compares to where they want to be, gives a strong indication of whether changes to increase development/testing rigour will be accepted and supported.
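The value of pairing the two questions can be sketched as a simple gap calculation: a consistently positive "should be" minus "currently is" gap suggests that a push for more rigour will be supported. The respondent names and scores below are invented for illustration:

```python
# Hypothetical responses: (current rigour, desired rigour), each on a 1-5 scale.
perception = {
    "Respondent A": (2, 4),
    "Respondent B": (3, 4),
    "Respondent C": (2, 5),
}

# Positive gap = this person wants more rigour than they see today.
gaps = {name: desired - current
        for name, (current, desired) in perception.items()}

support = sum(1 for g in gaps.values() if g > 0)
print(f"{support}/{len(gaps)} respondents want more rigour than they perceive today")
```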

4. Testing Commitment

This section didn't work quite so well. I was aiming to get an understanding of how much time people felt should be spent on testing activities as part of a typical development.

Rate the following in terms of time; 1 = Less than 1 hour; 2 = 2 - 4 hours; 3 = 0.5 - 1 days ; 4 = 1-2 days; 5 = more than 2 days

  1. How much time out of a typical user story development taking 1 person-week in total should be given over to the creation of unit tests?

  2. How much time out of a typical user story development taking 1 person-week in total should be given over to automated acceptance testing?

  3. How much time out of a typical user story development taking 1 person-week in total should be given over to human exploratory testing?

One respondent didn't answer these as he felt that, even in the same organisation, different work items were too varied to support a 'typical' answer. My feeling was that there was a fairly typical shape to the developments undertaken that we could use to calibrate teams around how much testing to do; however, I could see his point and understood the reluctance to answer here.

Another issue here was that I provided timescales for user stories anchored by my experience of development of complex data systems. In this context "typical" user stories would be much shorter than 1 week in duration and therefore there was a contradiction built into the questions. Nevertheless the answers were informative and provided useful information to help in constructing a strategy.

Presenting the Figures

All testers are wary of metrics. It is a natural suspicion borne of seeing too many occasions of glowing bug statistics masking serious quality problems. Presenting the figures in a visually informative and digestible way was key to getting the benefit of the analysis here. I used Tableau Public to create some useful visualisations, the most effective being a simple min/max/average figure for each question. This format allowed me to highlight not only the average response but also the range of responses on each question.
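I built the visualisations in Tableau rather than in code, but the underlying aggregation is trivial and worth showing; a minimal sketch, with question labels and scores invented for illustration:

```python
from statistics import mean

# Hypothetical 1-5 Likert responses per question, one score per respondent
# (labels and values invented for illustration).
responses = {
    "On-time delivery over quality":        [2, 3, 1, 2, 4],
    "Customers find issues in production":  [4, 5, 2, 4, 5],
    "Reluctant to give a sprint to testing": [1, 2, 2, 5, 3],
}

for question, scores in responses.items():
    # min/max exposes the range of opinion; the average shows the overall lean.
    print(f"{question}: min={min(scores)} max={max(scores)} "
          f"avg={mean(scores):.1f} range={max(scores) - min(scores)}")
```

A large range on a question is just as informative as its average: it is exactly the lack-of-consensus signal discussed below.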

(I've altered people's names and the scores of the responses here for obvious reasons, however I have tried to keep the outputs representative of the range of results and patterns that I encountered.)

Business Risk:

With the business risk it was the ranges that were most interesting. Some questions yielded a wide range of opinion across respondents, whereas others were much more focussed. Clearly in some areas specific individuals were prepared to consider a higher-risk approach than others, something that hadn't necessarily been highlighted previously and was possibly the cause of some uncertainty and pressure within the business. What was apparent in the real results was a general desire to reduce the risk levels in development and an acceptance of the need to increase rigour.

Development Risk

Most interesting on the development risk front was that, as I've shown here, there was 100% consensus on the need for specialist testing skills, even though the organisation's strategy to that point had been not to have specialist testers. Whilst testing skills don't necessarily require testers, the phrase "specialist testing skills" does imply a level of testing focus beyond a team consisting solely of developers.

Company Perception

The "Company Perception" results demonstrated most clearly the desire to increase the level of rigour in development, with the desired level of testing sitting clearly above what was perceived to be the current status, in a similar way to the results shown here.

Starting from the Right Place

As I wrote in my post "The Living Test Strategy", a test strategy in iterative software development is not based around documents, but embodied in the individuals that make up the development teams, and the responsibilities that those individuals are given. The role of defining test strategy is then not in writing documents, but in communicating to those teams and individuals what their responsibilities and expectations are. Some fundamental decisions need to be made in order to establish a starting framework for these responsibilities. Questions such as:-

  • Are we going to have specialist testers?
  • If so will this be in every team?
  • What level of acceptance test automation is appropriate for the risk appetite of the business?

need to be answered to establish the initial testing responsibilities and hiring needs of the team.

Using a risk questionnaire to profile the business leadership has given me invaluable insight into the risk appetite of the organisation on stepping into a new company. In addition to giving an overall understanding of where the company sits in its acceptance of risk, the approach has also highlighted where there is a lack of consensus over testing issues, which might need further investigation as to why.

As an experiment I would definitely regard it as a success. In my context, as someone stepping into a new agile organisation and looking at test strategy, it is particularly useful. I can see other uses as well, though. Whether your goal is to understand your new company, or to simply review where your current company stands, or even to expose conflicting opinions on testing and risk across the business, a risk questionnaire may just be the tool to help you achieve it.


A good guide to financial risk profiling and the requirements of it can be found here: https://www.ftadviser.com/2015/06/18/training/adviser-guides/guide-to-risk-profiling-k4JJTswPCCWAPyEsMwEZsN/article.html

Another good example of a risk profiling questionnaire that you can take online: https://www.standardlife.co.uk/c1/guides-and-calculators/assess-your-attitude-to-risk.page

Thursday, 21 July 2016

Making Luck

This week I've accepted a permanent role at River, the company that I've been consulting with since February. The title of the role is immaterial - what matters is that I'll be helping both Product Owners and Testers to develop themselves and their functions within the company.

I'd like to share the story of how I came to be working at River. I apologise if it comes across as self-indulgent - I think there are some valuable lessons there, for those focussing on their careers, in demonstrating the value of the effort that we put into our professional development over time.

An Eventful Day

The events of my day on 18th January ran something like this:-

  • 8:00am - at train station as normal ready for work
  • 8:05 am - receive calendar invitation on my phone for "all company" meeting that morning. Predict redundancies.
  • 8:30am - get to work.
  • 10:00am - discover am being made redundant along with a number of colleagues
  • 11:15 am - in pub
  • 1:00pm - in Cafe eating disappointing sausage sandwich in pathetic attempt to soak up copious amounts of beer before going home
  • Afternoon with family sobering up
  • 7:00pm - At Cheltenham Geek Nights, a development community event run in Cheltenham by Tom Howlett (@diaryofscrum). I'd become aware of it through communicating with Tom previously, when he approached me to do a talk on Testing Big Data in Agile. The speaker on this occasion was David Evans. I had already been looking forward to taking the opportunity to catch up with David, having benefited from his expertise in the past and maintained an amicable relationship since. After the events of the day I was particularly interested in his insight on the roles and team structures he was seeing develop through his work consulting in the fields of Agile Testing and Specification by Example.
  • 9:30pm - back in pub
  • 11:00pm - taxi home. Speaking to Tom and David that evening was the perfect antidote to my bad news and I ended the day on an optimistic note.
  • 5:00am - awake with hangover and worrying about future

During the next few days I was genuinely overwhelmed with the number of people who got in touch with me. Some contacted me just to wish me good luck, or to ask me what had happened. Others provided suggestions of possible roles or options available to me, and some even came in with direct offers of work. I can't overstate how valuable this communication was to me. Whether or not I followed up on every suggestion, the fact that so many people got in touch was hugely reassuring in an uncertain time and to those folks who got in touch - I can't thank you enough.

One of the options that I did follow up on was when Tom suggested a possible contract Product Owner role at a company he was consulting at. It turned out that they were looking to create a team of Product Owners to help alleviate some of the challenges that they were facing in their agile development adoption. This was exactly the kind of option that I was looking for - something that would allow me to get back to work quickly, in a very different organisation, and provide some time for me to decide on my next long-term move.

Landed on Your Feet

Over the ensuing weeks and months I've found that the nature of the challenges presented in a very different work context has been refreshing. The experience of working with such an amazing, close-knit team at RainStor provides a constant reminder of how effective teams can be if you:

  • Structure them with the right combinations of skills and experience levels
  • Allow them sufficient stability of members and technologies to confidently establish and meet release forecasts
  • Provide sufficient trust combined with flexibility over the delivered solution to fully engage the team in the creative process

Having this knowledge is a powerful tool in looking where to focus to help add value in a new company.

When I've told people about my new position in a local organisation, a number have used a phrase like

'You've landed on your feet'.

This is an interesting one, as it implies a strong element of luck. Of course I would agree that the timing was somewhat fortuitous, however let's look at some of the events that led up to the situation:-

  • The contract role was suggested to me by Tom based on a conversation at a community event that I attended
  • The reason that I knew about the event was that this blog, and talks I'd done in the Agile/Testing communities, had prompted Tom to ask me to speak at the event in the past
  • The reason that I specifically attended the event that day was because there were people there who I already knew from the software community
  • The reason that Tom felt confident in suggesting the role to me was because he had read my work, spoken to me personally, engaged me to do a talk and was confident in my abilities

This option, and other suggestions or offers that folks so kindly extended, made me really thankful for the time that I'd taken to invest in being an active member of my professional community.

The Slight Edge

I recently started reading The Slight Edge by Jeff Olson. Whilst I'm not big on self-help books, this one had been recommended to me and the concept interested me. One of the principles that the author presents in his book is the idea that it is not one-off Herculean effort but repeated incremental effort over time that yields long-term success. Many trends in modern society revolve around the concept of instant gratification: the lottery win, the talent competition, the lean start-up overnight success. What Olson suggests is that far more reliable success comes from incremental effort to improve, applied over time. It comes from sticking with something even if it doesn't yield immediate results.

I've been writing a blog now for 6 years, and hit my 100th post last year, and I'd like to think that the effort that I've put into blogging, attending meetups, making connections and generally working to improve myself through sharing ideas with others made that possible. If I hadn't invested that time over those years, then I wouldn't have interacted with so many different folks and would not have built the network of contacts and body of work that allowed other professionals to place their faith in me.

Writing a blog is not easy, particularly when you have a busy job and a young family. For example, during the first months of writing a software testing blog I had no idea whether anyone at all was reading it - it was 8 months and 10 posts before I got my first tweet share. In later years, particularly during the 'creativity vacuum' that comes when your small company is taken over by a large multinational, I have a couple of times considered stopping writing. I really started the blog from a desire to validate approaches rather than to network, but it was through redundancy that the realisation came of just how valuable the effort had been in building a great network of contacts to provide opportunities and guidance when needed. It is never nice losing a job involuntarily, yet ironically it was at exactly such a time that the extra effort of committing to something beyond the day job really showed its value.

I'm of the opinion that it is better to commit to one thing and stick with it than take on 5 things and give all of them up weeks later. I've given a specific example of writing here, however there are innumerable other areas where success comes not through moments of inspiration but through perseverance:

  • Keeping on top of emails
  • Keeping a backlog or to-do list up to date
  • Maintaining a sprint/status board
  • Writing unit tests
  • Testing with the right user roles not just admin
  • Adding testability/supportability properties to code
  • Having regular catch ups with team members
  • Diet
  • Exercise
  • Not having a cigarette (10 years so far for me)

The takeaway is that commitment matters here - you don't see the value, or get the power of incremental improvement, unless you stick at something. One of the great powers of the internet, blogs and conferences is that anyone with a great idea can contribute massively to their professional community. Looking at those people who enjoy universal respect amongst the groups that I communicate with, it is the ones that are persistently contributing, always putting in effort and maintaining their commitment that attract the greatest admiration.

image: https://www.flickr.com/photos/bejadin/17147146227

Thursday, 30 June 2016

Being Honest

It's safe to say that there has been a fair amount in the UK news recently about honesty, or the lack thereof, surrounding a certain political event that I'm not going to discuss here. I've always been an honest kind of person. Whether driven by a higher moral compass, fear of getting caught out, or simply a lack of imagination such that it doesn't really occur to me to fabricate, even to my own advantage, I'm rarely tempted to resort to fiction to resolve an awkward situation. I had a friend at school who was very different. He would tell me stories that, years later in adulthood, I have discovered were complete hogwash. Like the time that he told me that his cat had killed his neighbour's parrot while he was at home alone ...

... a story that I later discovered he'd invented to excuse missing a homework deadline, and had told all of his school friends so that they would reinforce his tale to the teacher. I'm still friends with him (I'm not entirely sure why) and even now every so often he reveals to me the truth behind some absolute porkie he told me years before.

It's easy for me to say that honesty is always the best policy, but it doesn't always feel like the best option. A situation that occurs frequently in the working environment, and that can tempt anyone to stretch the truth, is when we're worried about the status or progress of our work. When this happens it is easy to fall into the trap of thinking that giving the impression of being in a better position than we actually are might satisfy those that we report to for a while, and allow us to catch up or fix the issue.

But you rarely do catch up. The fact that you are in a position where you feel the need to fabricate your progress must have come about because your progress is not quite where you, or someone else, would have liked it to be. This will have happened for a very good reason, like the work being more challenging than you thought, which you're not going to change overnight.

Let's look at the potential impact of misrepresenting progress to some important people:-

To The Customer

The more distant the third party, the easier it is to be economical with the truth, so the customer is a prime candidate for inventive reporting. Of course you don't want the customer to know things are taking longer than expected, why would you? Perhaps you run the risk of exposing that your up-front estimation processes were, well, actually estimates, and the development is taking slightly longer. Or, even worse, it could involve admitting that you are actually doing some work for, gasp, another customer. If the customer is on the other end of an email or phone then the temptation to give a false impression of progress is ever present.

I'd advise against it. Consider some of the disadvantages that can come down the line:-

  • If you misrepresent progress on a development, and then hit a technical challenge that requires a rethink, or at least some major decision making on the part of the customer, how are you going to broach that conversation when the customer thought that you were already well past making such fundamental decisions?
  • Also if you misrepresent progress then it diminishes your ability to negotiate scope as the development progresses. Better to be open early and often, at least then you can negotiate openly around what you are delivering.
  • If you misrepresent the level of rigour in your testing, this can really backfire. I've worked with more than one supplier who insisted that they were thoroughly testing what was delivered. I therefore focussed my own testing on the integration points and relied heavily on their testing of their own features. Discovering a high number of issues in these features immediately called into question either their competence or their honesty. In subsequent developments I actually had to negotiate for the supplier to deliver the software more slowly, to give adequate time for them to test it.
  • If you misrepresent the severity of an issue that has been discovered, then it is hard to justify taking time in testing a fix and therefore you place pressure on yourself to push out untested bug fixes. When I was running both support and testing I had to negotiate a reasonable delay on delivering changes to the customer a number of times. This was done on the basis that we had just found an issue and the last thing we wanted to do was rush in a change and introduce more issues, so we needed to take the time to do some thorough testing around the fix. Under-playing the severity of an issue makes such negotiations more difficult.

In the drive for 'responsiveness' to please the customer it's easy to lose sight of the value of honesty, however responsiveness in communication can deliver a huge amount of value without necessarily the need for the same responsiveness in scheduling or delivery. In my experience customers appreciate knowing where they stand, even if it is not necessarily where they want to be. They also find it hard to argue if you are putting effort into ensuring improvements to the quality of the end product you're delivering to them. Even with the awkward subject of other customers, software customers want to be buying from a stable supplier, so your having other customers is something that they should view in a positive light (though not all will). Showing respect for those other customers and being seen to uphold your commitments to them will reinforce your integrity as a supplier. If on the other hand, you are seen to default on your commitments just to please the customer in front of you, they are going to be under no illusions that they'll receive the same treatment once they're not in front of you.

To your colleagues

Misrepresenting your progress to colleagues seems like a strange thing to do, yet I have encountered situations where this has happened when individuals are struggling or making mistakes.

  • On one specific occasion I can remember a tester working in an agile team who felt that he wasn't keeping up with the user stories he was supposed to be picking up. Instead of admitting this, he misrepresented the level of testing that had been done on the user stories that he was working on, to the extent that I thought they were completed. I then reported to other teams in the business that they could make plans on the basis of that work being completed. On examination I discovered the status of the tests had been misreported, and what had been done gave us little or no confidence in those features. The result was me having to drop my other work in order to devote time to testing those items so that we could fulfil my commitment to the other teams. Had the tester been honest with me about his status, I wouldn't have misreported our progress and the problem simply wouldn't have occurred.
  • Similarly, if an individual has made a mistake then the temptation can be to try to cover it up. One lesson that I personally have had to learn over and over again, is that it is rarely as bad as you think, and getting a mistake out in the open is empowering. Keeping mistakes concealed, conversely, accumulates stress on the individual, and anyone else taking the responsibility to resolve a difficult situation will not appreciate it if those who have made mistakes hide the truth in an attempt to conceal their errors.

Being honest with your colleagues goes beyond operational benefit. In the world of self-managing teams and a move away from strict hierarchical control and micro-management, creating a community of trust with your colleagues is essential. If your co-workers feel that they can't trust your status reports in terms of what you have developed or tested, then they will likely resort to committing effort to double-checking your work, at the expense of progressing their own, a situation which at best will result in ill-feeling and at worst could blow up into serious confrontation.

To management

Senior managers tend to be more removed from the immediate development and testing work and so often don't have the low-level view of the software products needed to make a good assessment of status. That's what they rely on you for, particularly the testers. It's therefore relatively easy to give a false impression of progress to senior management, but this can have dire consequences for them, your colleagues or you.

  • Something I've seen more than once is a senior developer or architect taking it upon themselves to develop a pet project, put together because they see value in it for the business and want to impress with their ingenuity. Much of this type of development goes on outside of the core development processes and has had little or no formal testing. The software is then presented to C-level executives as release-ready in all but requiring a "rubber stamp" sign-off from the testing team (I have actually had a CEO say to me "now all we have to do is test it - right?" in relation to just such a development). The result can be a lot of stress placed on colleagues who have to take this work and try to test it or support it to the level expected of them. The sad thing here is that there's no need to keep such projects away from others. The best chance of success for any new endeavour is to bring others on board and get them behind it, being honest about what you don't have the time or skills to do, such as the testing. If you involve others and give them credit for the contribution they make, they'll be more than happy to report your contribution to senior management and give you the recognition you are looking for.

  • Another route for misleading management comes from the manner in which we communicate with them. If we're communicating via metrics or aggregated reports then for some reason it becomes easier to massage the truth. Now, I've admitted that I strive for honesty, however I'm not a saint and admit that I have in the past been tempted to tweak metrics and measurements to make a situation look better. For some reason the level of guilt around gaming metrics to give a positive spin on things is far lower than it would be doing the equivalent in an open conversation. If ever there was a need for an open and communicative relationship with senior management, this is one. Inevitably the truth will out, and no metrics in the world can conceal the reality of the status of development. If you don't realistically apprise management of the status of development then someone else will, and if that person ends up being a frustrated customer who is suffering delays or poor quality, then the outcome will be far worse than the awkward conversation you could have had around more realistic metrics.

Honestly, Honesty - it's not that hard

Looking back at some of the most challenging times of my career so far, many of the biggest difficulties have been the direct result of someone not being open and honest as to the status of their work. The problem with taking a misleading line in relation to work is that, once done, it is all too easy to escalate and compound your problems by telling more lies. Taking ownership of difficult situations, getting the problem out into the open and driving through to a conclusion is an empowering activity that helps with personal development and builds experience and character. Whether in software development/testing, management or support - there isn't a successful technical leader out there who hasn't at some point had to tackle a late project, missed deadline or simple mistake head on and face the consequences.

Coming into a new organisation, as I have done this year, has been a difficult journey for me given the level of knowledge that I had developed around my team and processes in my former role. I've inevitably made some mistakes as I've worked to understand the culture and processes of a new organisation with a very different technology stack and customer base. Maintaining a level of openness around what I'm doing, and around any mistakes, has hopefully encouraged the same openness in communication towards me and prompted others to be honest with me about their work - because honesty is a two-way thing. By being honest with colleagues, management and customers I believe that we not only uphold our own principles but also encourage others to do the same.

image: https://www.flickr.com/photos/13923263@N07/1471150324