Following my blog on reporting last week, I have been reminded of my experiences in product launches or shipping (call it what you will – the point where the code is ‘live’ and the people who need to use it actually get to use it).
It has also prompted me to think about a more general question:
How much involvement should testers have in making the decision about whether a product is ready to ship?
For some, the answer is simple. Testers are not involved in making this decision. Their role is to provide information to others who will make the decision. I’m generally in agreement with this, although I believe there are circumstances where a tester can feel comfortable offering an opinion.
Can we ship? Part 1
“When I want your opinion, I’ll give it to you” – Samuel Goldwyn
Whilst I was working on a large and complex project, a manager asked me whether I felt we were ready to ship. We were just a couple of weeks out from the planned go-live date but I replied that I felt more testing was required. This wasn’t the response that the manager hoped for so I was pushed for my reasoning. I explained that some parts of the product hadn’t been tested to the depth that we would like and that some of the bugs we had found were completely preventing testing in some areas.
I was given some sharp advice about my message. I was told that further testing would put the launch date at risk. The implications were clear – there was a delivery date to be met and I should toe the party line. I wasn’t comfortable with this, so I was told to attend a meeting with the CEO to explain my views. I did so, and the CEO (who I had spoken to about our testing a few times during the project) took my views into consideration.
I was careful not to overstep the boundaries of my role or to directly talk about the wider decision for the company. I simply talked about my view on whether we had done enough testing. The launch was delayed, but I had no knowledge of what other factors went into this decision.
Can we ship? Part 2
A second large release of the same product happened around a year later. I had continued to involve senior colleagues in discussions about testing throughout the project. We had a large whiteboard positioned near my desk and would use it to show how things were going. At times we used some horrible metrics (tests executed, number of tests passed and failed, that sort of thing – it happens; consider my wrists slapped) but we also used the whiteboard as a trigger to prompt conversations about our testing and what we had observed.
If the CEO or the Financial Officer wanted to understand how things were looking generally, or in a particular part of the system, they would come and talk to me and the other testers about it. They would wander over to the whiteboard, point at something that caught their eye and ask us about it. We could explain the details of specific bugs, and they would help us understand some of the implications of those bugs.
To my surprise, for this second release I was asked along to discussions about the go-live decision. The response to involving decision makers more was to involve me more. I attended meetings which covered operations, training, marketing and promotions along with frank conversations about the pressures the decision makers were under and the business risks of shipping later than planned. In one meeting the CEO described the implications of blockers – those bugs which prevented us testing some parts of the system. This was a memorable moment. I knew we had made some real progress in explaining the value of testing.
At the last of these meetings the CEO went round the table to ask us whether we should go live with the release. Because I now had a broader understanding of the factors affecting this decision I was able to offer my opinion. I could explain my point of view based on what had happened whilst testing and also offer a view taking the other factors into account.
I do not perceive the role of a tester to be what some describe as the ‘gatekeeper’. I understand this to mean that the tester or test team is responsible for making a decision on whether the product is ready to ship. I have yet to encounter a project or organisation where a tester wields this much power, but I understand that others have. This seems dangerous to me because the tester does not typically understand all of the following:
- the imperative for shipping
- the criteria which determine whether the product is ready
- the criteria which determine whether the organisation is ready operationally
Some examples of shipping or launching imperatives:
- the product may be required by a certain date because a commitment has been made to shareholders
- there may be a regulatory change which requires a product to be delivered or altered by a certain date
- a business may have an urgent need for a product in order to be first to market or to keep pace with competitors who have something similar available
These factors may mean that decision makers are prepared to take a degree of risk with the quality of a product. They may decide to ship something which a tester believes is not ready. The tester is aware of bugs which have not been fixed or parts of the product which have not yet been tested. The decision makers may consider these factors to be less risky than failing to deliver to a date. They may also have a plan for working around bugs and fixing them later, or for making part of the product unavailable until it is fully tested.
Conversely, a product may appear ready to a tester, yet there may be some other activity required which the tester is not aware of:
- the marketing material may be behind schedule
- the operations team may not have been trained in how to use the product
- new information may have come to light which substantially changes what the product should be able to do
So what should a tester do when asked for their opinion on whether a product should be shipped or launched? Here are my three tips:
- If you do not feel qualified to provide an opinion then, in my view, saying so is a perfectly valid and honest response. As long as you can clearly explain the outcomes of the testing which you are responsible for, you are carrying out an important part of your role. Be clear about why you don’t wish to go beyond this and be honest about what you don’t know.
- If you are asked to give an opinion, be sure that you have the freedom to express it openly and honestly. If you are asked simply to reinforce an opinion which someone else already holds, reject this. Fulfilling someone else’s agenda will not establish credibility when you provide information (or opinions) in the future.
- Build relationships early. If you can include stakeholders in testing from the earliest stages then you will develop a good understanding of how decisions get made. You will be able to provide useful information to them on an ongoing basis and any opinion you offer will be built on a trusting relationship.
What are your experiences? Do you feel that testers should simply inform or should we try to understand more about the decision to ship?
If you enjoyed the blog, please share and comment.