Wednesday, 9 April 2014

Knuckling Down

Having attended this year's TestBash conference in Brighton I came away from it with mixed feelings. It was fantastic to see so many testers who were clearly passionate about testing, and the atmosphere was vibrant. On the other hand I felt that there wasn't quite as much variety in the talks as the previous year, in terms of both the gender of the presenters and the subject matter, the latter particularly noticeable in the middle of the day. At least three speakers' opening gambit was to admit a history of ISTQB certification before discovering the more enlightened path of a context driven approach. (As anyone who has read this post will appreciate I am not a fan of the ISTQB, however this repetition, combined with the intensity of multiple half hour talks with no breaks and no natural light, evoked a mild feeling of being in a recruitment meeting for a religious cult.)

One of the messages that came up in more than one of the talks during the day, most strongly in Huib Schoots' talk on Context Driven in Agile, was the need to stick to the principle of refusing to do bad work. The consequential suggestion was that a tester should leave any position where they are asked to compromise this principle. As anyone who has read my previous post will be well aware, I am a strong believer in taking an approach based on integrity, and I have a lot of sympathy with what was being presented here. That said, I think that we need to be very careful as an industry about the message that we are conveying, as the one interpreted could be very different from the one intended.

Leaving so soon?

On the face of it, the idea of leaving a project on the grounds of principle rather than doing what you perceive to be poor testing is a sentiment that I agree with. If you are working on a project where limitations are placed on your ability to do good testing, with no avenue to circumvent them, then I can totally understand that moving on should be considered.

What was missing for me in the sentiments presented at TestBash was any suggestion that testers should attempt to tackle the challenges faced on a poor or misguided project before leaving. In the examples I noted from the day there was no suggestion of any effort to resolve the situation, or to alter the approach being taken. There was no implication of leaving only 'if all else fails'. As an employer of testers, though, I'd like to see an attitude of tackling a bad situation head on rather than treating moving on as the only option. Of course we should consider moving if a situation is untenable, but I'd like to think that this decision would be made only after knuckling down and putting in your best effort to make the best of a bad lot.

Inform Decisions

In his talk Huib highlighted the need to restrict testing to being a provider of information to business stakeholders to inform decisions, not a part of the decision making process. This is a common sentiment in testing circles and one that I generally agree with (although I do feel that in an empowered team, testers can take on some of the decision making responsibilities in collaboration with the other roles).

I felt that this presented an interesting juxtaposition for testers. On one hand we are saying that we should restrict our activities to providing information to inform decisions. At the same time we are refusing to perform bad work, when a major factor in the quality of the testing work that is possible will be the very decisions that are made. Whilst I can see that these principles are not contradictory, I think that there is room for confusion in their interpretation, which could lead testers to the conclusion that moving on is the only option when decisions have been made which present challenges for testing.

The root of that ambiguity lies in the phrase 'bad work' and the possible interpretations of what could constitute bad work for testers. Many testing projects have constraints or limitations in place that restrict testing activities in ways that some might see as 'bad work', but I feel that it is still possible to do good testing in these situations. In fact it is in the more challenging and time constrained projects that knuckling down to perform excellent testing and quickly expose information is most valuable.

  • Not having sufficient time to test?
  • One situation which causes concern for testers is when they feel that there is insufficient time for testing. This is typically the result of a business decision to impose time constraints, sometimes based on rigid deadlines, often on arbitrary targets, but outside the control or influence of the tester. As long as they clarify the risks imposed by the limits on the information that they will be able to uncover, a skilled tester can still add a great deal of value by uncovering as much information as possible in the time available. These presentation slides from STARWest 2011 by Lynn McKee and Nancy Kelln provide an excellent summary of test estimation issues and predefined project constraints, and also a superb set of references to further reading, including a lot of writing on the subject by Michael Bolton.

  • High risk approaches
  • Testers provide information to inform decisions. If those decisions involve unnecessary risks this can have dire consequences for the project, but it does not mean that the tester has done bad work. I've certainly been involved in projects where I was not in agreement with the risk decisions that were made regarding the product in question. As I've discussed in this previous post, the levels of risk adoption in a business are unlikely to change, however there are various approaches that I've adopted in situations where I felt that the decisions had a negative impact on the testing and the project as a whole. These primarily involved exposing and presenting risk information to the business that may not have been available previously. I'll expand on some examples of these below.

  • Inappropriate test approaches
  • Some projects are characterised by the business decision makers dictating the testing approach. This is a more challenging situation, and one that I've been fortunate to avoid for most of my career. I have been in the position of being told what to test, which involved focussing on the obvious user interface features at the expense of investigating more fundamental server stability issues. I've also been on projects where the development manager was advocating a lightweight manual testing approach when the use of appropriate automation tools would have dramatically improved the testing effort. In these cases I've had to take steps to address the situation, and this has not always been easy.

One of the most difficult skills I've found to learn as a tester is the ability to justify your approach and your reasons for taking it, and to argue your case to someone who has a misguided perspective on what testing does or should involve. Having these discussions, and changing people's minds, is a big part of what good testing is.

Avoiding Bad Work

I certainly don't believe in knowingly doing bad work, but I do believe in putting in every effort to improve a situation. Having worked for a long time as a permanent employee in small, reactive technology companies I've been involved in a variety of projects, some better conceived and delivered than others. In situations where the testing risked being compromised I have had to dig deep on more than one occasion to try to recover a bad situation and deliver a successful result. Here are some of the approaches that I've found useful when trying to change the direction of a project towards better testing.

  • Pointing out the Risks
  • I've found a risk map to be a useful tool in highlighting the problems faced on a testing project. For example, with the utility that I wrote about in this post on Testability, where the testability was limited, I explained the situation to the management with the use of a testability map that broke down the types of exploratory and automated testing that I felt were appropriate for that product, with icons highlighting areas which were inaccessible to testing in the time available. This formed the basis of a discussion around the approach to be taken and an agreement on the improvements needed to progress the testing work.

  • Stories on the backlog
  • In an agile approach we tackle work in the team through the scheduling and prioritisation of user stories. If we feel that the testing of a certain feature is inadequate then an effective approach is to add stories to the backlog targeting the customer value that comes from the extra confidence that the testing can provide. This places into the hands of the business the decision to prioritise that testing work based on the value that is derived from it.

  • Using groups
  • A group discussion or exploratory testing session can help build confidence and cohesion in the testing group before raising concerns with management. In situations where the testing team and I have had concerns over the suitability of a product for customers, I have arranged group exploratory testing sessions in the form of tours from the perspective of the relevant users. Discussing the value of the product openly and critically helped to build a shared understanding and gave us confidence in raising our concerns with the product manager at the time.

  • Just doing it
  • If the approach that is imposed on a testing effort is invalid for the context, then I'm a strong believer in taking the time, and using a dose of professional flexibility, to do what I feel is right anyway. Sometimes it is necessary to go the extra mile to demonstrate the value of a different approach. I was once pressured to take a very limited testing approach on a highly volatile API integration where I felt that a level of automation around the interface was appropriate. I freed up some time during the working day to create a simple harness and then spent my evenings learning Java to develop a more extensive test suite. This was invaluable in helping to highlight basic regressions in further deliveries of the integration and also as a tool in facilitating further exploratory testing.

A Ray Of Sunshine

This is the kind of thing that I wanted to see more of in the talks at TestBash - practical examples of how people have avoided bad testing by tackling difficult testing challenges. Thank goodness then for Chris George from RedGate in the penultimate talk of the day. Chris recounted a story of himself and a developer getting stuck into a test automation problem that had been estimated as needing 6 months of work and, through a can-do attitude and a healthy dose of intelligent hard work, achieving a successful result in a short time.

The testing community is going through a fantastic period at present, with more and more testers passionately promoting testing as a skilled profession. With the best will in the world, not every project that we work on or role that we step into is going to be initiated from an enlightened position of appreciating testing as a highly skilled role. Working on improving the testing effort on poorly conceived projects can be an opportunity to demonstrate the value of testing, and the information that we can provide, to a new audience. If the testers capable of changing people's perceptions about testing, and thereby improving the status of the testing profession, move on as soon as they are presented with a request that offends their testing sensibilities, then that is an opportunity lost.

Image: http://gimp-savvy.com/cgi-bin/img.cgi?noabSlKvExKIYUk1200

Tuesday, 18 March 2014

A Match Made in Heaven - RapidReporter and Baretail

Synergy is one of those words which is drastically overused in the workplace. No set of Buzzword Bingo cards would be complete without the word synergy in there somewhere. It describes a system involving multiple actors or components which achieve a greater, or fundamentally different, effect than if each were working in isolation. Whilst a staple word in management waffle, synergy is an appropriate word to use when you encounter a scenario where people, organisations or tools combine to create a greater outcome than if each were operating alone. I think that, for me at least, I have encountered an excellent example of synergy recently. See if you agree...

Synergy of Bees and Flowers

Rapid Reporter

I really like RapidReporter. Shmuel Gershon originally wrote it as a tool for taking notes in Session Based Exploratory Testing. As I mention in my entry on it on my tools page, I don't actually use Rapid Reporter for testing note taking, however I do find it useful for other note taking such as static code or document reviews, as well as taking notes in conference calls.

For me one of the greatest features of the tool, its unobtrusive operating size, is also the source of one of its limitations. You cannot easily refer back to the notes that you have taken previously in that meeting or session. Whilst it is possible to view previous notes via the context menu, this is not an easy format to review and does not differentiate between entry types. The only other option is to open the working folder and view the notes. As the instructions readme suggests, this is the least favourable option:

“Open working folder” from context menu and look at the notes directly from the *.CSV file. This is not recommended because some apps will lock the file access, which would cause an error message to appear.
When I'm taking notes during a phone conference, for example, I find it useful to be able to refer back to decisions and actions that have come up so that I can raise appropriate questions or summarise the meeting and actions for attendees at the end.

Baretail

I also like BareTail. It is a Windows tool that emulates the basic behaviour of tail with the -f follow flag in Linux, displaying the contents of a file as it is written to. It also has some other useful features, such as the ability to highlight lines in the file it is tracing based on their content. Whilst a useful tool for testing in many contexts, I have not historically used it that often, for the simple reason that I don't need to track files on Windows very often.

Greater than the sum

Whilst taking notes during a web conference meeting using RapidReporter last year, I found that I was repeatedly opening the notes file to refer back to previous entries. It occurred to me that what I was missing was exactly the functionality that BareTail could provide. I needed something running in the background that I could refer to quickly if I wanted to review a previous comment. BareTail could give me this, plus the added benefit of highlighting key entries like actions and decisions.

I set up some custom note types to use when calling RapidReporter:

RapidReporter.exe NOTE ACTION DECISION QUESTION

I then added the last three as custom highlights in BareTail.


BareTail Highlighting configuration
The final step was to knock up a simple script which would automatically track the csv file generated by RapidReporter once it was fired up. This is not as easy as it could be, as the output file names generated by RapidReporter are not controllable, however it is fairly simple to watch the output directory for csv files and then call BareTail once one has appeared. (I use Cygwin to provide bash scripting capabilities on my Windows laptop.)

Simple Cygwin script to combine RapidReporter with Baretail

(An interesting testing aside here - the RapidReporter output file is only generated after entering a session reporter and charter. This means that it is possible to exit the reporter without generating a file, hence the need for the tracking loop to check both that RapidReporter is running and that the csv output exists.)
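
If you would rather avoid Cygwin, the same idea can be sketched in Python. The version below is illustrative only - the folder paths, executable locations and polling interval are assumptions rather than the script shown in the image above - but it demonstrates the shape of the watching loop.

# Watch for a new RapidReporter session CSV and open it in BareTail.
# A minimal sketch: NOTES_DIR and BARETAIL are assumed locations.
import os
import subprocess
import time

NOTES_DIR = r"C:\Tools\RapidReporter"          # assumed RapidReporter working folder
BARETAIL = r"C:\Tools\BareTail\baretail.exe"   # assumed BareTail install location

def rapidreporter_running():
    # Use the standard Windows tasklist command to see if the process is alive.
    out = subprocess.run(["tasklist"], capture_output=True, text=True).stdout
    return "RapidReporter.exe" in out

def main():
    existing = {f for f in os.listdir(NOTES_DIR) if f.lower().endswith(".csv")}
    # The CSV only appears once a reporter and charter have been entered, so the
    # loop checks both that RapidReporter is still running and that a new CSV exists.
    while rapidreporter_running():
        current = {f for f in os.listdir(NOTES_DIR) if f.lower().endswith(".csv")}
        new_files = current - existing
        if new_files:
            session_csv = os.path.join(NOTES_DIR, sorted(new_files)[-1])
            subprocess.Popen([BARETAIL, session_csv])  # open the session notes in BareTail
            break
        time.sleep(2)

if __name__ == "__main__":
    main()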

Is it synergy?

By combining the use of these tools I have the advantages of RapidReporter in its unobtrusive ability to take notes, combined with a reference window in BareTail which I can minimize for most of the session but which actively tracks the report output without having to reload or refresh. It also highlights important decisions, actions and questions to allow review during the session. If I want to review the output again later for a follow up review, e.g. when writing up meeting notes, I open the CSV with BareTail in the Windows context menu to view the highlighted output.

In a similar way this could be used for test sessions with highlights for ISSUE, FOLLOW UP or BUG to allow simple session reviews, either individually or in peer review.

RapidReporter and BareTail combined

For me this constitutes a synergistic relationship. The use and value of the tools combined for me is different from, and more valuable than their separate use. If for you they are not, at least I've provided the opportunity to complete your Buzzword Bingo card for today.

Both of these tools, along with other tools that I use in my work, can be found on my Testers Toolkit page.

Image: http://www.flickr.com/photos/14646075@N03/3929252341/sizes/l/

Sunday, 9 March 2014

What Testers Can Learn from Marketing

This week I gave a talk at the Birmingham STC meetup entitled "What Testers can Learn from Marketing". The subject of testing and marketing was something that I had been intending to write or present on for a while, having gained an insight into marketing operations in my time testing a marketing analytics database. Committing myself to giving a talk on it was a good, albeit somewhat high pressure, way of providing an incentive to do this.

On the evening of the meetup, as I was preparing to give the talk, Vernon Richards (the @testerfromleic) came up to me and said "Your talk is either going to be really good or really short, as when I think of what we can learn from marketing I think - nothing at all!" (apologies to Vernon if I've misquoted him here). I was hoping to encounter a little of this type of opinion in the room - is there really stuff that testers can learn from the people who put junk mail on our doormats, spam our inboxes and subject us to Kerry Katona on our televisions (if you are not familiar with the name, please substitute any irritating minor celebrity from TV commercials in your own country)?

As I went on to explain - actually yes there is a great deal...

I want to learn

Marketers' work has a huge amount of crossover with Software Testing, and increasingly so as the models of testing change to meet the demands of emerging testing environments and contexts.

I typically feel the urge to run for the hills when someone pulls up a Wikipedia definition in a presentation, however I have to confess that on visiting Wikipedia towards the end of preparing the talk, I found the following two statements on the Wikipedia Marketing page compelling and worthy of inclusion:-

“Marketing is the link between a society’s material requirements and its economic patterns of response”

There is an interesting, albeit indirect, parallel here – Marketing is an interface between requirements and response to requirements. A critical element in the success of a Marketing operation is understanding what people want and how they respond as a result, just as understanding what customers want and their resulting expectations and behaviours is a fundamental part of being a software tester.

“Marketing can be looked at as an organizational function and a set of processes for creating, delivering and communicating value to customers”

So both marketing and testing roles involve the understanding of what people want, and what constitutes value to them. Both involve the understanding of responses to those requirements and the attempts to meet them, and most importantly both are communication roles in relaying information in response to those requirements to allow value decisions to be made.

Looking at some of the specific activities that marketing is involved in, I highlighted some characteristics of marketing work which should be more than familiar to testers:-

  • Market Research is an exploratory activity researching new markets and opportunities
  • Marketing is a communication role providing information to inform decisions
  • Marketing is a testing role – testing campaign effectiveness to establish differences and significance of response
  • Marketing is a checking role – ensuring campaigns are delivered correctly
  • Marketing is a learning role – taking information from previous tests and runs to improve future efforts

I think that all of these descriptions are as much at home in a discussion of a testing role as they are of marketing, with one major difference…

Marketing has a lot more money behind it.

Marketing is Exploring

Market Research is an exploratory activity. It is the process of investigating information gathered both from and on markets to identify opportunities and threats to which the company instigating the activity may want to position itself to respond. The market research process is commonly described as having six stages:-

  • Step 1 – Articulate the research problem and objectives
  • Step 2 – Develop the overall research plan
  • Step 3 – Collect the data or information
  • Step 4 – Analyze the data or information
  • Step 5 – Present or disseminate the findings
  • Step 6 – Use the findings to make the decision

There are a huge number of parallels here with the process of testing. Whilst a little more formal and staged than the approach that I prefer to take in a testing project, we can see that each step in the process has direct relevance to testing.

Articulate the problem - How often do we have a testing problem where we are expected to wade into the testing without articulating the scope of the problem or the objectives of our testing activity? When I asked if anyone present had been asked to "just test something", without clarifying the scope or goal of that testing, the show of hands in the room suggested that it is something that affects most testers at one time or another.

Ensuring that the objectives of testing are well defined up front helps us to maintain focus on what it is that we are trying to achieve and avoid distractions. One of the recommendations that I read for Market Research professionals was to re-iterate the original problem at the end of the research activity to ensure that you communicated the purpose of your work.

Step 2 - developing a plan and Step 3 - collecting information are pretty much fundamental to any research activity. Whilst important in themselves and worthy of discussion, I personally felt that steps 4 onward were more interesting from the perspective of comparison and learning.

Analyze the Data - Marketers have a wealth of techniques at their disposal to analyse data collected from market research. What they are particularly interested in is identifying and distinguishing causal relationships – does having a dog make you more likely to buy a certain brand of toilet cleaner? The use of statistical modelling tools such as multivariate regression modelling allows marketers to identify which variables have the most influence over the behaviours that they are interested in. Whilst many of the testing problems that we face are more obviously deterministic – "if I enter this text and click this button then the application crashes" – for certain behaviours, where the causes are less easily determined, these techniques could be something that we make use of. For example, if analysing failure rates in large installations involving high levels of concurrent activity, it may be that the factors contributing to failures are unclear. The extensive use by marketers of regression modelling is something that I think will be seen increasingly in the testing industry as our patterns of testing change in relation to the systems we work on.
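
To make that concrete, here is a toy sketch of the kind of multi-variable regression a marketer might run, applied to the failure-rate example. The factor names and the numbers are invented purely for illustration.

# Which factors most influence a failure rate? A toy multi-variable regression.
# All of the factor names and figures below are made up to show the technique.
import numpy as np

# One row per observed run: concurrent users, payload size (MB), node count.
factors = np.array([
    [10,   5, 2],
    [50,  20, 2],
    [80,   5, 4],
    [120, 40, 4],
    [200, 10, 8],
    [250, 60, 8],
], dtype=float)
failures = np.array([0, 2, 3, 6, 8, 12], dtype=float)  # failures seen in each run

# Add an intercept column and fit an ordinary least squares model.
X = np.column_stack([np.ones(len(factors)), factors])
coefficients, *_ = np.linalg.lstsq(X, failures, rcond=None)

for name, coef in zip(["intercept", "concurrent users", "payload MB", "nodes"], coefficients):
    print(f"{name:>16}: {coef:+.3f}")
# Larger (absolute) coefficients point to the factors worth investigating first;
# a fuller analysis would also standardise the inputs and check significance.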

As we move towards the later stages the process changes in focus such that reporting and communication become the key elements.

Present the findings - mymarketresearch.com has some interesting recommendations regarding presenting the findings of a research project:-

When it comes time to presenting your results, remember to present insights, answers and recommendations, not just charts and tables. If you put a chart in the report, ask yourself “what does this mean and what are the implications?” Adding this additional critical thinking to your final report will make your research more actionable and meaningful and will set you apart from other researchers.

This has huge relevance to testing. What they are saying here is that, to set yourself apart as a talented and valuable market researcher, you should go beyond the collection of data and the presenting of figures. Making use of critical thinking, and reporting insights alongside the data from a research project when presenting the findings, will increase the researcher's value to the business decision makers. I believe that this is a principle that applies equally to the reporting of the findings of software testing activities, particularly in the context of helping to deliver the next step.

Make Decisions - The final step in the process involves using the findings to make a decision. If we refer to the old favourite, “How to Measure Anything” by Douglas Hubbard, all information gathered in business should be gathered with the purpose of informing a decision. The structure of the six step process helps to maintain a focus on this goal as the final step in the process. Again this is something that testers would be well advised to bear in mind. The execution of tests and collection of data has little relevance if it does not help to inform a decision. More importantly, if the data that we are collecting is the wrong data to inform that decision, such as counting the percentage of automated test cases that have passed to decide whether to release rather than examining results and implications individually, then we could be doing more harm than good.

Whilst simplistic, I believe that one of the main strengths of this model is that it keeps a focus on the end goal of making decisions, maintaining it as the ultimate target of the preceding stages.

Accept decisions - mymarketresearch.com also has some relevant advice relating to the fact that you may not always get the decision that you expect.

Remember that market research is one input to a business decision (usually a strong input), but not the only factor.

Whilst you have collected and presented critical information to help to inform decisions, there will be other factors in play which will also influence those decisions and the logical decision based on the results of testing alone may not be the one that is made.

Marketers as Testers

Given the cost of planning and executing marketing campaigns, there is naturally a high level of interest from the campaign organisers in ensuring that the campaign ‘creatives’ (the media deliverables of a marketing campaign) are tested to establish their effectiveness. Campaign management therefore involves a huge amount of testing, and marketers have come up with some interesting solutions to what essentially breaks down into a series of testing problems.

When looking to make changes to a marketing message or deliverable, it can be a major risk to make the change wholesale across the entire campaign list. One example given on a website I was reading was that if you attempt to introduce personalisation and it goes wrong, it can have the opposite effect to that desired, making the recipient feel even less valued. Having been on the receiving end of a “Dear [CustomerName],” email I know how much negative feeling can be generated by mistakes such as this.

It is not just mistakes that marketers test for; even subtle changes in campaign deliveries can have a positive or negative impact on the effectiveness of that campaign, which can result in big differences to resulting sales. Instead of risking a negative impact hitting the entire campaign list, the marketer will typically perform what is known as A/B segmentation of the list to identify a reduced set of individuals to receive the new or changed campaign.

Segmentation actually covers two related but distinct activities in marketing. Market segmentation is the process of splitting a market, typically on socio-demographic factors, in order to identify those segments most likely to respond to specific marketing approaches. Campaign list A/B segmentation, which is what I am referring to here, is a testing activity where the list of people targeted for a campaign is split in order to measure different responses to different marketing activities.

The marketer will create a ‘test cell’ of individuals who receive the changed campaign, in contrast to the ‘control’ group who receive the original. These two groups can then be compared for significant differences in response to see whether the changed campaign was more effective than the control.
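
As a concrete illustration of that comparison, the sketch below runs a simple two-proportion z-test on invented response counts for a test cell and a control group; the same arithmetic applies just as well to comparing error or churn rates after a partial rollout of new code.

# Compare a 'test cell' against the control group for a significant
# difference in response rate using a two-proportion z-test.
# The response counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / std_err
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(resp_a=230, n_a=5000,   # test cell: 4.6% response
                        resp_b=180, n_b=5000)   # control:   3.6% response
print(f"z = {z:.2f}, p = {p:.4f}")
# A small p-value suggests the difference in response is unlikely to be chance.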

What the marketer is essentially doing when performing these tests against the ‘test cell’ is tackling the problem of 'Testing in Production'. The data in play is live data and therefore the risks involved with potentially costly mistakes are high. These testing processes allow different campaign formats, or correct delivery, to be tested whilst attempting to reduce the impact that mistakes could have on a live customer base, such as a list of email subscribers. These approaches are directly comparable with any application or software that releases new code to a subset of selected users in order to elicit feedback and ‘test’ the response before releasing to the full user base and risking an embarrassing rollback. Testers operating in these kinds of systems are very well placed to use marketing techniques to establish the significance of these changes, not only in relation to new bugs but potentially in terms of other negative relationship indicators such as account closure or reduction in use.

Marketers as Checkers

With the number of individuals typically targeted in a marketing campaign it is rarely practical to check every individual to ensure that they received the ‘creative’. Instead the marketer may opt to test a subset of the list, or they may decide to include a set of predefined records which are not real campaign targets but entries specifically included to test the receipt of the marketing creative, known as a set of seeds.

As well as being a response to the problem of using production data for testing, this approach of using seeds is also a response to the challenges of ‘Testing Big Data’ – something that I am very familiar with. Checking every single recipient for successful receipt of the creative would be prohibitively expensive, just as checking every data row in a data warehouse or big data storage system would not be practical. Instead the marketer creates a series of dummy respondents exhibiting the required characteristics, which can then be included with the live campaign list to test the correct execution of the workflow.

I had a conversation on big data testing with a number of testers last year, one of whom was working on a data warehouse ETL process, and this is the exact approach that he took. He would design test data records which would exercise the various decision rules of the ETL process, and include these in the live batch data payloads. It was then a much less costly job to check that these seed records had undergone the correct data transformations than to examine the huge volume of records in the live payload.
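
A minimal sketch of that seed-checking pattern might look something like the following. The records, the transformation rules and the in-memory stand-in for the warehouse lookup are all illustrative assumptions rather than his actual implementation.

# Check 'seed' records after an ETL run instead of checking every row.
# The seeds, rules and the fake warehouse below are illustrative only.

# Seed records mixed into the live payload, each designed to exercise one rule.
seeds_expected = {
    "SEED-001": {"country": "GB",      "spend": 1250.00},  # currency cleanup rule
    "SEED-002": {"country": "DE",      "spend": 0.00},     # country-code normalisation
    "SEED-003": {"country": "UNKNOWN", "spend": 99.99},    # missing-country default rule
}

def load_from_warehouse(record_id):
    # Stand-in for a SQL lookup against the target warehouse after the ETL run.
    fake_warehouse = {
        "SEED-001": {"country": "GB", "spend": 1250.00},
        "SEED-002": {"country": "DE", "spend": 0.00},
        "SEED-003": {"country": "",   "spend": 99.99},      # a rule that misbehaved
    }
    return fake_warehouse.get(record_id)

for record_id, expected in seeds_expected.items():
    actual = load_from_warehouse(record_id)
    if actual != expected:
        print(f"{record_id}: expected {expected}, got {actual}")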

Closing the Loop

Marketers use the phrase “closing the loop” to describe the process of analysing previous campaigns and activities to improve future efforts. According to Tom Breuer in his paper How to evaluate campaign response:

'Closing the loop' is key in state-of-the-art database marketing. It means testing, measuring–tweaking campaigns.

Marketers spend a huge amount of time examining the results of previous campaigns to establish the most effective targets and ‘creatives’ to yield the greatest response. Breuer identifies two costs associated with running campaigns. There is the “complexity cost” of creating the deliverable, and also the “opportunity cost” of testing against control groups which are not marketed to yet still have to be tested. He states that control groups inevitably yield lower responses than groups which have been targeted (although I don’t think that he had considered the possibility of Kerry Katona being involved). A key to making the most of the marketing investment is to learn as much as possible from previous campaigns to identify the specific customers who are most likely to respond, so marketers will again perform a wealth of data mining and analytics on previous campaign data to identify the targets that are going to give them the greatest benefit relative to the investment of marketing to them.

Whilst continuous improvement and continuous learning are core principles for the majority of testers that I relate to, this kind of algorithmic learning is something different that has very specific applicability in the testing arena. I’ve recently been investigating the use of stochastic test generation for SQL testing. This approach involves the random generation and execution of test queries, using other databases, or other versions of our own system, as test oracles to identify inconsistencies. This type of testing activity is very much a ‘closing the loop’ operation. A model is created for test generation, then the outputs from running that model are used to see how many issues the tests identified. Given that the major cost in this exercise is the time taken to execute the queries and examine the outputs to factor out false negatives, analysing the results to improve the chances of finding issues in the next run is intuitively an approach that makes sense. This allows the tester to tweak the model in order to focus the next iteration of tests on the model structures that expose the most problems.
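
As a rough illustration of the shape of such a loop, the sketch below generates random queries from a tiny grammar and compares the results from two SQLite databases standing in for the system under test and the oracle. It is a toy model of the approach rather than the actual tooling, and the grammar and data are invented.

# Stochastic SQL test generation, using a second database as an oracle.
# Two in-memory SQLite databases stand in for 'our system' and the oracle;
# in real use they would be different systems or versions, so here, where
# they are identical, no mismatches are expected.
import random
import sqlite3

def build_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c TEXT)")
    conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                     [(i, i % 7, "xyz"[i % 3]) for i in range(100)])
    return conn

def random_query(rng):
    # A deliberately tiny grammar: random aggregate, column, comparison and constant.
    agg = rng.choice(["COUNT(*)", "SUM(a)", "MIN(b)", "MAX(a)"])
    col = rng.choice(["a", "b"])
    op = rng.choice(["<", "<=", "=", ">", ">="])
    return f"SELECT {agg} FROM t WHERE {col} {op} {rng.randint(0, 10)}"

system_under_test, oracle = build_db(), build_db()
rng = random.Random(42)

mismatches = []
for _ in range(200):
    query = random_query(rng)
    if system_under_test.execute(query).fetchall() != oracle.execute(query).fetchall():
        mismatches.append(query)

print(f"{len(mismatches)} inconsistent queries out of 200")
# Closing the loop: record which grammar choices produced mismatches and weight
# the generator towards those structures on the next run.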

A natural progression once this cyclical process is established would be the introduction of algorithmic learning programs to identify the models that are most effective at exposing issues. The ability to mine historical data in order to guide future tests, particularly in an automated randomised testing activity, is a compelling capability.

I should make my position clear on this kind of approach - I think that it can be hugely powerful in the right context, such as applications with a well defined input model like an SQL parser, and only then as an augmentation to human exploratory testing. As these approaches increase in popularity and move into the testing mainstream, then the body of knowledge available in the Marketing community around iteratively testing, tuning, learning and measuring the significance of responses is going to prove invaluable.

Wrapping up

I received some excellent feedback on the talk and the subject matter, and I hope that the material in the form of this post proves as interesting. I acknowledge that with the talk and this post I have only scratched the very surface of the marketing profession and what it can teach us. In the subsequent discussions around learning from other professions, the meetup group discussed other fields that we could look to learn from, and also where we should be cautious. I was glad that many contributions in that discussion echoed my own feelings. For me the primary principles that I maintain when looking to other fields for ideas are these:

  • Testing is a research, exploration and communication role far more than a rubber stamp quality checking operation, and I believe that the professions that we look to learn from should reflect this. We shouldn't always look to manufacturing as the only industry from which we can learn. There are a wealth of other fields which pursue areas more relevant to the role of software engineering, and testing specifically. A little imagination and lateral thinking goes a long way in finding a rich field of research from which to learn to better one's craft.
  • More importantly, we should look to learn from other industries and professions and adapt, not attempt to mimic them. Software engineering and software testing stand alone in having very specific challenges and opportunities. Whilst other fields have a huge amount to teach us, we should approach this in the spirit of selecting the most useful and appropriate elements instead of resorting to mimicry.
Thanks to those who attended the talk, and to you for taking the time to read this. In the true spirit of marketing - if you like it, please share it. I'd appreciate any comments that you may have below.

Further Reading

A useful glossary of Marketing Terms http://www.campaigner.com/resource-center/discuss-and-share/glossary.aspx
Basic introduction to the 6 stage Market Research Process http://marketresearch.about.com/od/Market_Research_Basics
A good general website on Market Research http://www.mymarketresearch.com
Some useful further reading on approaches to forecasting from Judgemental Models to identification of Causal Relationships http://www.marketingprofs.com/Tutorials/Forecast/index.asp
Tom Breuer's Paper on How to evaluate campaign response http://www.palgrave-journals.com/jt/journal/v15/n2/full/5750036a.html
A useful introduction to marketing data analysis techniques using Excel https://www.iacquire.com/blog/quantitative-data-analysis-techniques-for-data-driven-marketing-2

Slides from the talk can be found on my talks page

Images: http://www.flickr.com/photos/x1brett/5054129673 (Modified); http://www.flickr.com/photos/maccast/306309853/sizes/o/
