Following my blog on reporting last week, I have been reminded of my experiences in product launches or shipping (call it what you will – the point where the code is ‘live’ and the people who need to use it actually get to use it).
It has also prompted me to think about a more general question:
How much involvement should testers have in making the decision about whether a product is ready to ship?
For some the answer is simple. Testers are not involved in making this decision. Their role is to provide information to others who will make the decision. I’m generally in agreement with this although I believe there are circumstances where a tester can feel comfortable offering an opinion.
Can we ship? Part 1
“When I want your opinion, I’ll give it to you” – Samuel Goldwyn
Whilst I was working on a large and complex project, a manager asked me whether I felt we were ready to ship. We were just a couple of weeks out from the planned go-live date but I replied that I felt more testing was required. This wasn’t the response that the manager hoped for so I was pushed for my reasoning. I explained that some parts of the product hadn’t been tested to the depth that we would like and that some of the bugs we had found were completely preventing testing in some areas.
I was given some sharp advice about my message. I was told that further testing would put the launch date at risk. The implications were clear – there was a delivery date to be met and I should toe the party line. I wasn’t comfortable with this so was told I had to attend a meeting with the CEO to explain my views. I did so, and the CEO (who I had spoken to about our testing a few times during the project) took this into consideration.
I was careful not to overstep the boundaries of my role or to directly talk about the wider decision for the company. I simply talked about my view on whether we had done enough testing. The launch was delayed, but I had no knowledge of what other factors went into this decision.
Can we ship? Part 2
A second large release of the same product happened around a year later. I had continued to involve senior colleagues in discussions about testing throughout the project. We had a large whiteboard positioned near my desk and would use it to show how things were going. At times we used some horrible metrics (tests executed, number of tests passed and failed, that sort of thing – it happens; consider my wrists slapped) but we also used the whiteboard as a trigger to prompt conversations about our testing and what we had observed.
If the CEO or the Financial Officer wanted to understand how things were looking generally, or in a particular part of the system, they would come and talk to me and the other testers about it. They would wander over to the whiteboard, point at something that caught the eye and ask us about it. We could explain details on specific bugs and they would help us understand some of the implications of those bugs.
To my surprise, for this second release I was asked along to discussions about the go-live decision. The response to involving decision makers more was to involve me more. I attended meetings which covered operations, training, marketing and promotions along with frank conversations about the pressures the decision makers were under and the business risks of shipping later than planned. In one meeting the CEO described the implications of blockers – those bugs which prevented us testing some parts of the system. This was a memorable moment. I knew we had made some real progress in explaining the value of testing.
At the last of these meetings the CEO went round the table to ask us whether we should go live with the release. Because I now had a broader understanding of the factors affecting this decision I was able to offer my opinion. I could explain my point of view based on what had happened whilst testing and also offer a view taking the other factors into account.
I do not perceive the role of a tester to be what some describe as the ‘gatekeeper’. I understand this to mean that the tester or test team is responsible for making a decision on whether the product is ready to ship. I have yet to encounter a project or organisation where a tester wields this much power, but I understand that others have. This seems dangerous to me because the tester does not typically understand all of the following:
- the imperative for shipping
- the criteria which determine whether the product is ready
- the criteria which determine whether the organisation is ready operationally
Some examples of shipping or launching imperatives:
- the product may be required by a certain date because a commitment has been made to shareholders
- there may be a regulatory change which requires a product to be delivered or altered by a certain date
- a business may have an urgent need for a product in order to be first to market or to keep pace with competitors who have something similar available
These factors may mean that decision makers are prepared to take a degree of risk with the quality of a product. They may decide to ship something which a tester believes is not ready. The tester is aware of bugs which have not been fixed or parts of the product which have not yet been tested. The decision makers may consider these factors to be less risky than failing to deliver to a date. They may also have a plan for working around bugs and fixing them later, or for making part of the product unavailable until it is fully tested.
Conversely a product may appear to be ready to a tester, yet there may be some other activity required which the tester is not aware of:
- the marketing material may be behind schedule
- the operations team may not have been trained in how to use the product
- new information may have come to light which substantially changes what the product should be able to do
So what should a tester do when asked for their opinion on whether a product should be shipped or launched? Here are my three tips:
- If you do not feel qualified to provide an opinion then in my view this is a perfectly valid and honest response. As long as you can clearly explain the outcomes of the testing which you are responsible for then you are carrying out an important part of your role. Be clear about why you don’t wish to go beyond this and be honest about what you don’t know.
- If you are asked to give an opinion, be sure that you have the freedom to express it openly and honestly. If you are asked to simply reinforce an opinion which someone else holds, reject this. Fulfilling someone else’s agenda will not establish credibility when you provide information (or opinions) in the future.
- Build relationships early. If you can include stakeholders in testing from the earliest stages then you will develop a good understanding of how decisions get made. You will be able to provide useful information to them on an ongoing basis and any opinion you offer will be built on a trusting relationship.
What are your experiences? Do you feel that testers should simply inform or should we try to understand more about the decision to ship?
If you enjoyed the blog, please share and comment.
5 thoughts on “That sinking feeling: are we ready to ship?”
Good post and advice. Having a CEO who takes an interest in testing before shipping is a sign of a healthy development process – count yourself lucky.
Your experience mirrors mine when I was a tester. I also sometimes had to stop people from shipping just because I didn’t feel comfortable. Being able to express that worry is something every good tester should be able to do, so that the powers that be can then take a decision.
This responsibility shifted again in the test manager role, where I was often asked “Thomas, can we ship or not?”. This put me squarely in the gatekeeper position, whether I wanted it or not.
The assumption was that I could express my concerns (though no one was interested in the details), weigh them with the knowledge and experience I had, and then give a yay or nay answer. In some places my answer was final, as I was expected to weigh the business-related points as well, i.e. the imperative for shipping, whether the company is ready operationally, etc. The last word may sit with someone else, but having de facto power to decide when to ship or not is scary as hell.
Answering your last question, “only” informing others about open risks is what I’d suggest for beginner to intermediate testers. Understanding the full decision to ship takes more experience and asks for responsibility that is often above your pay grade.
Nice post, thanks for making me think.
Thanks for taking the time to read the blog and to comment. I really appreciate it.
It was certainly great to have senior people take a real interest in testing but in my experience it isn’t that unusual. It does require some work from testers because there are some organisations where the culture demands that testers are kept in a dark corner and only wheeled out in front of senior people towards the end of a project. We can change that by involving stakeholders early and asking them about their concerns. We can then follow that up with some conversations about how we plan to test for those concerns and keep them informed on progress.
I find that testers can often communicate pretty well with business people. Lots of good testers have the ability to understand the experience for the customer and can express bugs this way. This really helps!
I feel for any tester that is asked to be the gatekeeper. It seems like a way of finding a scapegoat – “Blame the testers! They touched it last!”
Thanks for sharing your experience and opinion.
In some organisations testers may have the visibility and opportunity to understand the other aspects that influence the decision on whether to go live. I think in that scenario trying to understand the decision to ship would be useful and add value. This is usually possible in organisations that are more open and value individuals’ opinions.
But there are other organisations/projects where testers are not involved enough to be able to form an informed opinion. I think it’s then more appropriate for testers to simply inform senior executives.
In my opinion, adapting the working/involvement style based on the culture/project/organisation could be useful for testers.
I agree – the response should be tailored according to the culture of the organisation and the people we work with.
I think the most important thing is to be aware that there are potentially many different criteria and motives which affect the decision and, if necessary, to use this as a rationale for keeping opinions confined to a subset of those criteria.