This post is accompanied by a visual representation of two testing cultures: Confirmation and Exploration.
One of the most common themes in discussions about testing is how testers can not only add value, but demonstrate how they add value to the teams and organisations they work with.
Sometimes the benefits of having skilled and knowledgeable testers within teams can be undermined by a Confirmation Culture: a culture where those teams focus heavily (sometimes exclusively) on confirming that pre-defined requirements have been fulfilled.
These practices can constrain the work of testers, limiting their opportunities to investigate, experiment and explore. If testers feel constrained, they may feel that they are not contributing as much as they could to the team and its outcomes (and they may be right). Naturally, the question of the value of testing and testers will arise in these circumstances. Testers may question their own value (and their morale is likely to drop), their teammates may begin to see testing as a low-value activity, and organisations focused on rapid and lean delivery may see testing as a bottleneck, or testers as unnecessary, costly and wasteful.
If this concerns you, then it is worth identifying where such a culture exists or is emerging, and what could be done to challenge it.
Confirmation can be important…
It is worth noting that confirmation is not always a bad thing or unnecessary. There are important verification activities which can form part of a tester’s role. A couple of examples:
- If an organisation sends communications via email or print to their customers, there can be business rules or regulation around what those communications must include, and there may be people who specialise in confirming that the correct information is included in the right places. Mistakes could lead to customer dissatisfaction and reputational damage.
- If a price for goods or services is displayed and that price is calculated according to variable factors, then there may be people with specialized knowledge of the complex calculations and factors which contribute to the price being accurate. Confirmation of these prices could be critical and mistakes could be very costly to the businesses involved.
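Price confirmation of this kind can often be automated as simple checks against pre-agreed values. A minimal sketch of the idea, in which the pricing rule and function name are invented for illustration and not taken from the post:

```python
# Hypothetical pricing function: base price times quantity, with a
# tiered discount. The rule itself is invented for illustration.
def quoted_price(base: float, quantity: int) -> float:
    if quantity >= 100:
        return round(base * quantity * 0.90, 2)  # 10% off large orders
    if quantity >= 10:
        return round(base * quantity * 0.95, 2)  # 5% off medium orders
    return round(base * quantity, 2)

# Confirmation: known inputs must produce the pre-agreed outputs.
assert quoted_price(2.50, 5) == 12.50
assert quoted_price(2.50, 10) == 23.75
assert quoted_price(2.50, 100) == 225.00
```

Checks like these are valuable precisely because mistakes are costly; the point is not that confirmation is worthless, but that it should not become the whole of testing.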
…but emphasis on confirmation can go too far
The danger is that confirmation becomes the totality of testing; that in aspects of a product, system or process where exploration and experimentation could provide valuable insights to stakeholders, testers are instead asked to simply confirm that a pre-determined (and possibly inaccurate) requirement has been fulfilled. Testing is reduced to a binary activity – yes or no, true or false, pass or fail.
Characteristics of Confirmation Culture
Teams and organisations with a Confirmation Culture are likely to display some or all of these attitudes and behaviours:
- ‘Signing off’ pre-determined, static requirements (or acceptance criteria), and trusting that those requirements or criteria accurately capture needs, desires and purpose – attempting to define and lock down what people may want from a product up front, with little attempt to adjust and refine that thinking, or even to throw ideas away.
- An emphasis on demonstrating that those requirements or criteria have been met, and a view of testing as a finite activity, measurable by the ‘completion’ of test cases (or the sign-off of acceptance criteria).
- A need for ‘traceability’ matrices to show where requirements map to test cases – if test cases cover the necessary requirements then testing is ‘done’ once ‘all requirements have been tested’.
- A reliance on a ‘regression suite’ which repeats the confirmation of requirements from earlier releases – sometimes coupled with an assumption that this suite will somehow eliminate all risk of regression. This can be a crude approach which uses a pre-defined set of checks to repeatedly hammer the same points of a system or application in an attempt to confirm that ‘nothing else has been affected by the change’.
- A tendency to cover eyes and ears if testers go beyond the confirmation activities they have been asked to carry out – in the worst cases, I have known testers to be told that a bug will not be investigated because it is not related to a test case, or asked ‘why were you even testing this?’
- Lazy behaviour from testers around the problems they find – raising defect records at the first sign of a deviation from an ‘expected result’, rather than talking through potential underlying environmental or data problems, or the desired system behaviour, with their colleagues.
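The ‘regression suite’ described above is, in essence, a fixed table of inputs and expected outputs replayed against every release. A minimal sketch of that pattern, with the discount-code function and values invented for illustration:

```python
# Hypothetical sketch of a confirmation-style regression suite: the same
# fixed expected values are asserted on every release, regardless of
# what actually changed. All names and values here are invented.
def apply_discount(total: float, code: str) -> float:
    """Toy system under test: a discount-code calculator."""
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# The 'suite': each check hammers the same point with the same inputs.
REGRESSION_CHECKS = [
    (100.00, "SAVE10", 90.00),
    (100.00, "SAVE20", 80.00),
    (100.00, "BOGUS", 100.00),
]

def run_suite() -> bool:
    return all(apply_discount(total, code) == expected
               for total, code, expected in REGRESSION_CHECKS)

assert run_suite()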
Challenging and changing a Confirmation Culture – what can we do?
Understand why the culture exists
Before we become too critical of such a culture – and especially before attempting to change it – it is important to take time to understand why the culture exists or is emerging. There is little to be gained in undermining or belittling ways of working; far better to engage our colleagues and probe their assumptions and beliefs. What do they feel they get from the methods and approach they use? Do they see problems with how things currently work? What would they change if they could? These discussions make it easier to address any concerns they might have with changes and may help them to understand our concerns too.
Take small steps together
Even if you have a clear vision of where you would like to get to and the kind of culture you would like to see within your team, you are more likely to succeed if you bring your team and leaders with you on the journey. This might mean making small incremental improvements to the way you work, rather than taking a ‘big bang’ approach and seeking to obliterate existing patterns and behaviours at a stroke. Changes can be made within releases or sprints, or can be trialled alongside existing practices:
- Testers who are unfamiliar with session-based exploratory testing could trial this approach for some of the features in scope
- Visual process or system maps could be introduced to show testing effort and coverage for elements of a release from a business or technical perspective, rather than seeking to demonstrate coverage through a requirement matrix
- Metrics associated with a pass/fail approach to testing could be positioned alongside more subjective assessments of coverage and quality and potentially, over time, phased out
- Quality Risk workshops or discussions could be introduced as part of existing project inception or sprint planning routines, focusing on where changes could go wrong, or could fail to add value. This analysis might help to broaden the conversation from definition of acceptance criteria to a greater shared understanding of risk
Small, practical adjustments which demonstrate value rather than dictate change, and which offer opportunities for continuous improvement, can help in collectively shaping a new culture.
Look for role-specific opportunities
Each of us can influence others, whether acting as a tester, a coach, a leader, or in some other capacity. Those spheres of influence may present specific opportunities to make changes to a culture. Change can come from any direction, and what may seem like small adjustments can snowball into significant and long-lasting improvements.
As testers, for example, we can demonstrate the value of exploratory testing to our colleagues through the information we reveal and the problems we identify. As coaches we can provide exercises and practical demonstrations which help people understand how to apply new ways of working in their own environments. As leaders we can support new initiatives, help explain the benefits to senior people, and give time and space to try, to fail and to learn.
If we want to add value and be seen to be adding value through our testing, then we must take responsibility for the testing culture that exists in our teams and organisations. If that culture prevents us from fulfilling what we believe our roles to be, then we should lead the way in changing it. Challenging a Confirmation Culture might not be easy. There may be difficult conversations to be had with people who have only ever viewed testing one way, people who believe that testing is a factory which takes requirements or user stories in one end and pumps out test cases and defect reports at the other end. Sometimes those conversations require great bravery in challenging assumptions and perceived wisdom, and sometimes a culture may be so embedded that it seems impossible to change.
However, if we take the time to listen, to understand the needs and objectives of our colleagues and the organisations we work for, we will be better placed to demonstrate that changing the culture might be in everybody’s best interests.
- Maria Kedemo recently wrote about ‘Testing beyond Requirements’ and explained some of the important questions that a confirmatory approach could miss
- I talked about the subject of ‘Seeing beyond requirements’ in one of my earliest blogs
- In the February 2014 edition of Testing Trapeze, Adam Howard discussed an exercise which introduced confirmatory and exploratory approaches to people who were new to testing. In the same edition, Aaron Hodder and James Bach explained why ‘Test Cases are not Testing’.
- This paper from Michael Bolton in 2009 includes some comparisons between confirmatory and exploratory approaches
There is much more information on this subject out there, and a great deal of supporting material for anyone who needs to discuss Confirmation Culture, and alternative ideas that might provide greater value, in their own organisations.
Thanks to Paul Maxwell-Walters for his assistance with this post.
Comments on “Confirmation Culture (and how to challenge and change)”
Thanks for this and the accompanying post / infographic.
I’ve been thinking about a related topic lately, that this reminded me of. I was thinking about a client I’ve been working with, and how a lot of my exploratory testing time is “wasted” highlighting implementations that don’t meet the requirements. I don’t just mean this in the sense that the requirements themselves need to be challenged (although this is also the case), but really basic stuff like, “update action x to use z icon instead of y icon,” and the wrong icon has been used.
I was thinking about how much more useful exploration I could be doing – real exploration – if I could trust that these basic requirements had already been met. Then I was thinking about the whole testing vs. checking thing, and also about whole team quality. Perhaps the reason why there’s so much confirmation culture amongst testers in some companies is because that’s the first time anyone’s confirming anything – the developers aren’t checking their own work, so testers have to check instead of test.
Anyway, I don’t want to ramble on, but basically I’ve been wondering if more focus on whole team quality would help to improve the kind of confirmation culture you wrote about.
Thanks again for sharing your thoughts,
Hi Cassandra, and thanks for commenting… I think you are right, and this is perhaps another contributing factor when testing is viewed as a low-value activity. If a discrete testing function exists and people are visibly fulfilling the role of tester, then their findings and observations will contribute significantly to the perception of the value they add. If the time available to them is spent identifying basic errors, raising tickets in JIRA, then retesting fixes, that doesn’t necessarily demonstrate value.
I think the trust factor you mention is critical. A culture with a heavy focus on confirmation activity is unlikely to change if testers don’t trust their colleagues and the processes and checks that exist around them. Equally, organisations need to trust their testers, their knowledge and their instincts for risk if a more exploratory culture is to emerge. ‘Trust the tester’ has a much nicer ring to it than ‘blame the tester’, which is a more common trait within a Confirmation Culture, I think.
Rightly stated: confirmation and exploration cultures certainly make testing more effective.
Just to add that context-based exploratory testing is a key aspect of testing strategy in today’s software products and projects, accepting the truth that the entire SDLC will have less documentation, requirement creep and evolutionary designs.