Last week I blogged about design and how I think it is something that testers could focus on a little more. I’ve been considering this further and thinking about an experience I had as a customer…
Someone I know once made an error whilst completing an online grocery order. I won’t say who made the error because that wouldn’t be fair and besides, my wife would probably be displeased.
In truth the error was fairly easy to make. The objective was to order some bananas, enough for the week for two of us. The field for placing the order looked like this:
So far so good. However, whilst entering the order the wrong radio button was selected. I think it may have been a case of initially intending to purchase by weight, realising it was easier to specify a number of items but forgetting to change the radio button selection back. So this is what was added to the basket:
So far not so good. We didn’t want eight kilos of bananas. The error wasn’t spotted amongst the long shopping list and the order was placed.
Sure enough, when the delivery arrived a day or two later we had everything we had ordered including a couple of large plastic bags filled with bananas. From memory I think there were around fifty bananas in total. We were offloading them to friends and colleagues for the next few days. We ate a lot of banana bread.
Now, as any good IT professional will tell you, this is an open-and-shut case of ‘user error’. It is pretty difficult to argue with that. Except that by ‘user’ we actually mean customer, and most customers don’t think about things in the same way that IT professionals do.
As a customer the experience was a bit different. When the delivery arrived, our first response was:
‘Why have they delivered all of these bananas?’
Wasn’t it obvious that we wouldn’t want fifty of them? This may have been an unreasonable response given that we had received exactly what we requested. But it is how we felt and the customer is, of course, always right.
The most obvious way to avoid the confusion would have been for the person or persons fulfilling the order to query it, and for someone from the store to call us and confirm. Neither the company’s processes nor the individuals involved allowed for this. A problem for sure, but could this error have been catered for in the design, development and testing of the website?
Testing against requirements – a potential banana skin
It certainly appears that some other grocery websites have taken a different design approach which would have eliminated the possibility of making the error. Here’s an example:
But wait: what if I want to order 8 kg of bananas? Do I now need to calculate how many bananas are in a kilogram? Maybe this would be easier if I just went to the store myself.
Perhaps some simple code could have brought up a confirmation message, given that an order of 8 kilos was a slightly unusual request. To implement such code would require answering some questions first, not least of which is ‘What is an unusual request?’ With the right threshold, and possibly some algorithm that looks at past orders and the number of items ordered elsewhere, this might be possible.
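To make that idea concrete, here is a minimal sketch of what such a check might look like. Everything here is hypothetical: the product names, the thresholds and the `needs_confirmation` function are illustrations, not how any real grocery site works. A real store would presumably derive its thresholds from historical order data rather than hard-coding them.

```python
# Hypothetical per-product thresholds for an 'unusual quantity' check.
# In practice these would come from analysis of past orders.
TYPICAL_MAX = {
    ("bananas", "items"): 20,   # few shoppers buy more than 20 loose bananas
    ("bananas", "kg"): 3.0,     # or more than 3 kg by weight
}

def needs_confirmation(product, unit, quantity):
    """Return True if the order is unusual enough to prompt the customer."""
    threshold = TYPICAL_MAX.get((product, unit))
    if threshold is None:
        return False  # no data for this product/unit: let the order through
    return quantity > threshold

# The 8 kg banana order would have triggered a confirmation prompt,
# while 8 individual bananas would not:
print(needs_confirmation("bananas", "kg", 8))     # True
print(needs_confirmation("bananas", "items", 8))  # False
```

Even a crude check like this would have given the customer one more chance to spot the slip before fifty bananas arrived on the doorstep.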
And how about our testers? Could they have looked at this part of the website from the standpoint of a customer and anticipated this problem? If so, would a bug have been accepted by the project as valid or would it have been dismissed?
It isn’t always easy to think about technology in this way. A common approach in developing and testing software is to focus on requirements. Yet we know that requirements are often flawed: incomplete, ambiguous, contradictory; take your pick. This isn’t an attack on the people whose job it is to capture and communicate requirements. They have a difficult task, not least because those pesky customers keep changing their minds (for example, one minute they want to be able to order bananas by the kilogram, the next minute they don’t).
This is one of the reasons why we testers must try to think about the customer experience beyond the narrow confines of requirements. Perhaps somewhere in a dusty folder there is a document detailing the requirements for an online grocery website that contains some text along the following lines:
Requirement ID: 3.1.12
The website shall provide the option for the user to order items by weight or by number of items.
Requirement ID: 3.1.13
Permissible values when ordering items are in a range from 0 to 9999.
Maybe a tester was asked to design some tests and map these tests to requirements. This would have given everybody a warm feeling that requirement coverage had been achieved.
The same tester may have tested the website and followed the steps in the test. They may have tested a range of values in the quantity field to prove that it was possible to enter 0, 9999 and some other values in between. They may also have tried to enter a negative value and some values greater than 9999. Just for fun they may have tried to enter some other characters in the field too.
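Those steps could be sketched as a handful of checks against the field’s validation logic. This is an illustration only: `validate_quantity` is a hypothetical stand-in for whatever the website actually does, written to satisfy the 0–9999 range in requirement 3.1.13.

```python
def validate_quantity(raw):
    """Hypothetical field validation: accept whole numbers from 0 to 9999."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False  # non-numeric input is rejected
    return 0 <= value <= 9999

# The requirements-based checks described above:
assert validate_quantity("0")          # lower boundary
assert validate_quantity("9999")       # upper boundary
assert validate_quantity("42")         # a value in between
assert not validate_quantity("-1")     # below the range
assert not validate_quantity("10000")  # above the range
assert not validate_quantity("eight")  # other characters, 'just for fun'
```

Notice that every one of these checks passes, and yet none of them would have caught the eight-kilo banana order; the field behaved exactly as specified.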
They may have found bugs; these may have been fixed and retested. But ultimately everybody involved was happy that the website requirements had been met. No further testing required!
I’m speculating of course, but I have seen this approach enough times to know it happens.
If the tester were able to consider how people might actually use the website, they would be more likely to identify possible outcomes like the one above. In an enlightened and collaborative project environment, the tester’s voice would be heard and the website design and code could have been adjusted accordingly.
And, most importantly of all, the customer could have avoided this slip-up.