A phrase that you hear a lot in software circles is 'in the real world'. I've attended many conference talks and presentations since I've worked in Software Testing. Typically there is opportunity for Q&A either during or after the talk - from my experience of speaking this is one of the most nerve-wracking stages of any talk, as you simply don't know what is going to come up. The phrase 'in the real world' is one that is often thrown up during this stage. The source is typically an audience member suggesting that the ideas or approach presented in the talk are not applicable in a real world situation. Sometimes the questioner restricts their 'real world' to their specific company or role, however I have seen those who go beyond this, claiming to represent all testers operating in real companies in their dismissal of an idea.
I've been thinking about this a lot recently, particularly relating to Agile. On one hand we have a manifesto and a set of principles that back this up. Out of this has grown a number of methodologies, such as Scrum, that dictate practices. I've seen how deviation from the core practices of Scrum can incite accusations of 'doing it wrong' and labels such as 'Scrumbut' and 'cargo cult'. At the same time, one of the key principles of Agile is to continuously review and improve as a team. Any improvements will inevitably be driven by context and our own 'real world' and will therefore involve tailoring our approach to our specific needs - so how do we tell the difference between valid, pragmatic, context-based augmentation of our process and a Scrumbut-esque gap in our Agile adoption?
Evolution is good
I think that any approach will require some modification in its application to a real world context. In my talk at EuroSTAR 2011, and a few others since, one of the key themes that I focussed on is how short time-boxed iterations and regular review allowed teams to evolve over time to yield massive improvements in their development activities. I strongly believe that this is the greatest single benefit of an iterative approach to software development. At the same time any deviations from a strict adherence to an approach, scrum in our case, can yield criticism of 'doing it wrong'.
With rather fortuitous timing - as I was noting down the initial ideas for this post - one of our team, John, raised a point in our last retrospective questioning how well we compare to the 12 Agile principles. He wanted to highlight the fact that years of retrospectives and general familiarity with our process could have resulted in our taking our eyes off the ball in terms of working towards the Agile principles. This was a great reminder for me of how quickly time passes. It is easy to forget the importance of reviewing not only with the aim of improving within your own context, via scrum retrospectives, but also taking the time to measure yourself against the principles of the approach that you follow.
As a result we decided to have a review of the Agile principles to discuss how we were doing and if there were areas we wanted to revisit. John arranged a session where we reviewed the principles and discussed each in terms of which we felt we were delivering well, which we could do better at, and whether there were any that we felt might need some valid amendment to apply to our situation.
The Agile Principles
For anyone not familiar with them, the 12 Agile principles are documented here. Let's examine each in turn.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
There is an interesting assumption embedded in this principle, namely that the customer will be satisfied with early and continuous delivery of software. For some contexts this assumption is perfectly valid, however for a product company delivering into blue chip organisations I seriously question its applicability. Installing a new version of our software into a production environment is a significant undertaking for many of our customers. What they really want is to install a robust and stable piece of software that they can build and run their business processes around without disruption. For my team a principle around providing valuable new functionality through frequent iterations, whilst ensuring the final release delivery satisfies the contracts established through previous releases, would be a more appropriate one.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
This is an interesting one when applied to a scrum approach. The principle states that we welcome changing requirements, however many scrum teams operate with a policy of not changing scope within a sprint, in our case 4 weeks. If the principle applied to scrum relates to having different requirements on a sprint by sprint basis then it is inherent in the process, as we aim for a level of completion and replanning at each iteration that allows for changes. Given that we're delivering working software with each iteration I would not describe such changes as being 'late in development'. For me a 'late in development' change in scrum would be within the sprint, thereby positioning this principle somewhat contrary to the scrum process. For our team we aim to minimise the changes within a sprint to reduce context switching, but we will be flexible to changes in scope if there is a clear business need. I think that this is a case of a principle that works well when contrasted with the more common 'waterfall' based approaches that it was clearly written in response to. For a team who have grown up around an iterative approach, the concept of 'late in development' may differ from the original intention of the principle, to the extent that the wording of the principle might need reviewing as our perspectives on what constitutes a development lifecycle change.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
This one is core for us, and I imagine for most agile teams. Maintaining a position of working software ensures that we never stray too far from being able to deliver the software, which reduces the likelihood of significant deadline increases due to discoveries that impact releasability.
Business people and developers must work together daily throughout the project.
Is this true? Gojko Adzic once wrote a great piece on the mythical product owner. Can we really expect to have business decision makers present in our scrum teams on a day to day basis? For a product team we can't realistically expect a representative of each of our customers to work in our teams, we therefore have to have a proxy role, and most teams aim to have a product owner here. My concern is that adding a proxy role in between the development team and the customers could actually allow the team to distance themselves from understanding the customer. I think that a better principle for many is that the members of the development team work frequently with business representatives on a conversational basis to ensure that they have an excellent understanding of the business needs and can act in a proxy capacity to assess the software. As I wrote about in this post, I think that this is a role that a tester can take on in an agile team.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
This was one that we marked as 'could do better'. I think in general we have a good level of motivation in the team, however one or two felt that there was room for improvement here and that was sufficient for it being marked as such. This is another of the principles where I felt that our assessment of our position was very much performed relative to our position as a successful agile team, where the original intention was perhaps relative to the pervasive development cultures at the time of writing. I believe that teams built around Agile approaches typically enjoy higher levels of motivation in their work than more traditional organisations, due to greater levels of autonomy and collaboration, yet this is now how we measure ourselves, so the bar has been raised.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
This was another one that we marked as 'could do better', yet again I believe that we do a great job here if we compare ourselves not to our own high standards but to the cultures that this principle was written in response to. Developers and testers don't communicate primarily through the bug tracker as they have in organisations I have worked in previously. Testing isn't a production line process which takes in requirements and software and produces test cases, bugs and metrics. Perhaps our over-reliance on emails causes concern for some, however I think we do a grand job here and easily meet the principle as it was originally intended - with high levels of interactivity and engagement between team members. In some ways the fact that some felt we could improve here was a real positive, in that again the cultural bar has been raised and we now need to measure our principles against the new expectation of highly collaborative teams and cultures.
Working software is the primary measure of progress.
Yes - 100% this is us. We measure our progress based on what is finished and works. I think that this is one of those principles that really was created in response to some flawed models of tracking project progress that allowed a project to appear to be 90% complete, only for the final 10% testing phase to take as long as the rest of the project combined due to the late exposure of issues. I've heard stories of Agile adoptions where management failed to relinquish the need to focus on quantifiable measurements of progress and so misguidedly rounded on story points/team velocity as some kind of comparable metric to measure progress and compare the efficiency of different teams. I'm glad to say that I'm not in one of those organisations.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
When I started in Agile 8 years ago we went at a relentless pace. I think we did 12 sprints without a decent break, and as we pushed towards releases we had the stereotypical weekend working to meet deadlines. One time the situation was so bad that I fell out with the then manager, as I had a friend visiting from another country and I was not prepared to miss their visit to go to work on a Sunday. I'm pleased to say those days are gone. We might not always achieve everything we plan in the time originally intended, but we respect the need for a sustainable pace. If we don't meet our targets within the pace with which we operate, that is accepted.
Continuous attention to technical excellence and good design enhances agility.
I think that Agile can be its own worst enemy in this regard, as short timeboxes can lead to myopia and 'shortcut coding' if we don't maintain a focus on this principle. I have seen the situation where, rather than risk breaking existing functionality by modifying an existing interface to a file store or database, a developer has simply written another interface. Whilst this reduces risk to the existing code in the short term, over time it leads to brittle code and a nightmare for testers and programmers alike, as the number of features that are independently affected by a change to the storage layer increases. Sustainable development does not only refer to the pace of the team but also to the quality of the code; if the quality deteriorates over time then the pace of development will slow. I completely agree here - and have seen the impact on agility that hasty design can have.
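To make the anti-pattern concrete, here is a minimal sketch of the kind of duplication I mean. All the names are hypothetical and invented for illustration, not taken from any real codebase:

```python
class LegacyOrderStore:
    """Original interface to the storage layer (in-memory stand-in here)."""
    def __init__(self):
        self._rows = {}

    def save_order(self, order_id, data):
        self._rows[order_id] = data


class ReportOrderStore:
    """The 'shortcut': a near-duplicate interface written for a new feature
    so that LegacyOrderStore (and everything depending on it) stays untouched.
    Any future change to how orders are stored must now be made in two places,
    and tested in two places."""
    def __init__(self):
        self._rows = {}

    def persist(self, order_id, data):
        # Same responsibility as save_order, different name and store.
        self._rows[order_id] = data


class OrderStore:
    """The 'technical excellence' alternative: one interface, extended in
    place, so every feature shares a single point of change."""
    def __init__(self):
        self._rows = {}

    def save(self, order_id, data):
        self._rows[order_id] = data

    def find(self, order_id):
        return self._rows.get(order_id)
```

The short-term risk of extending `OrderStore` is real, which is exactly why good regression tests and continuous attention to design matter: they make the safer-looking duplication the more expensive choice.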
Simplicity--the art of maximizing the amount of work not done--is essential.
As anyone who has read my post "Putting Your Testability Socks on" will know, "Simplicity" is also one of the testability factors. I question whether the definition of simplicity in this principle could actually contradict the testability one, on the basis that reducing work done does not necessarily lead to simplicity in the product design. In fact it can have the opposite effect, as in the example I gave above where taking a myopic approach of minimising the work can come at the expense of maintaining good code structure. This is clearly not what we are aiming for. For me, simplicity should not be about maximising the amount of work not done. It should be about minimising the amount of functionality in the product. It should be about minimising the number and complexity of interfaces. It should be about maximising the lines of code not present in the code base. Sometimes achieving these elements of simplicity actually requires more work. I know what the principle is driving at here, however the wording is flawed.
The best architectures, requirements, and designs emerge from self-organizing teams.
I like to think that this is the case, however I have no evidence to back this claim up. If I were to take a contradictory stance I'd argue that a more accurate, but possibly more trite, definition would be that "The best architectures, requirements and designs emerge from people who know what they are doing". It is not so much the self-organising team that I've seen yielding the best designs, but the teams who have grown to develop a high level of tacit knowledge around the workings of the product and the needs of the customer. In our review we struggled with this one, mainly due to the fact that it is such an absolute statement. We decided that we probably 'could do better' on the basis that some of our requirements and designs come from outside the self-organising teams, however whether they are necessarily worse for that is hard to say. A possibly less trite attempt might be "The best architectures, requirements, and designs emerge from collaboration between individuals with a range of relevant expertise and experience".
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.
Which brings me neatly back to my initial thoughts - "I strongly believe that this is the greatest single benefit of an iterative approach to software development". The frequency of review and improvement in an Agile adoption supports a level of tuning of procedures that allows a team to evolve itself into whatever context it operates in. This evolution may sometimes result in practices that run contrary to the idealised models of Agile development that one finds in the syllabuses of certification courses and the training material of Agile tool vendors, however this is no bad thing, as long as we maintain a clear picture of the principles that we are working to.
The Beauty of Principles
"In the real world" sometimes it pays to review what we do in light of a more idealised viewpoint. Whilst we might decide that all of the practices of our chosen development methodology are suitable, by reviewing ourselves in the context of the principles we can see if we are straying from the core intentions of that approach. Whether I agree with them all or not, I like the agile principles and the fact that they exist. I find that a set of principles, as with the "Set of Principles for Test Automation" that I created to guide our automation, provides clear guidance without imposing too rigid a structure or being too woolly and trite (company mission statement anyone?). What we do need to do is ensure that those principles are maintained to ensure they stay relevant. With the Agile principles I feel that, given the massive impact that agile has had on development teams these principles may now benefit from being revisited in light of the fact that agile has itself moved the goalposts in terms of how these principles are presented.