Welcome to another Q&A! Each month we catch up with someone (from the testing community or beyond) with an interest in quality in software development and technology.
Some guests you may know well, and others might be less familiar. I hope you will learn something new about each of them, and something new from each of them. Each will bring their own perspectives and insights on quality and testing.
The format will be the same each time:
- a little information about this month’s guest and what they are currently up to
- some questions for them to answer
- some answers for them to question (the ‘Jeopardy’ section where I will provide the answer and ask the interviewee to give me the question)
- finally, the ‘Pass it on’ section with a question from last month’s participant and the opportunity to pose a question for next month’s guest
This time our guest is Anne-Marie Charrett, who will need little introduction to many people in the testing community.
Anne-Marie is a software tester, trainer and coach with a reputation for excellence and a passion for Quality and the craft of software testing. An electronic engineer by trade, software testing chose her when she started testing protocols against European standards.
Anne-Marie advocates a whole team approach to Quality. She sees Software Testing as a skilled activity that many might perform. She trains & coaches companies to embrace this approach to quality and to develop software testing skills best suited to the context they work in.
In the past, Anne-Marie developed software testing courses, lectured at the University of Technology, Sydney, and ran the Sydney Testers Meetup. She blogs at Maverick Tester and tweets at @charrett.
Anne-Marie is available for work through her company Testing Times where she offers consulting, training and coaching services.
First of all Anne-Marie, would you like to tell us about anything interesting you’ve been involved in recently, any exciting upcoming ventures, or just what you are working on at the moment?
The context in which we test now is very different from the one in which I first learned to test. The way we develop and deliver software, the complexity of systems, flatter hierarchies and smaller cross-functional teams all require us to rethink what skills testers need in contemporary workplaces. For example, good communication skills have always been important for testers; today that skill is vital.
Software Testing is now more of an activity that many perform. In recognition of that, my training classes are now more focused on teaching software testing as opposed to teaching software testers. How to better design tests and think critically about the testing being performed is becoming an important skillset for all team members.
This has naturally led to a greater interest in understanding Quality within an agile context. How can we, as a collective team, deliver quality products? It’s a fascinating exploration, one that I’m not sure I have a full answer to, but I think I’ve come up with some ideas and approaches that can be adopted. I wrote about one of them in a blog post called Quality Workshop.
If you encountered a version of yourself from earlier in your career, and talked about how you approach your work, what would you disagree on?
Not many people know this, but I was initially helping out on the ISO standard that cannot be named. Ok, you can – it was ISO 29119. I would have asked my younger self to think more critically about the value of such a task, and about software testing itself.
When you look back at your career so far, what do you consider to be the highlight(s)?
SpeakEasy is a big highlight for me. Being able to help some testers on their speaking journey is incredibly rewarding.
My time working as a lecturer at UTS, teaching software testing as a course, was a great experience, and I’m very grateful to Tom McBride, who gave me that opportunity.
I think my work with James Bach needs a mention. His work on software testing, and his commitment to better understanding it, challenges many of us to keep doing better.
My time at Tyro allowed me to work with some inspirational testers and people. I’m proud of the work I achieved there.
When you think back to these highlights, what were the most important lessons you learned?
I worked with James Bach on a model for coaching software testers. From this experience, I learned how to model my own thoughts and ideas about a topic. I realised that with a little hard work, research and application, pretty much anyone can do the same. That’s a pretty liberating thought.
I think SpeakEasy is a prime example of the type of work I love to do. With SpeakEasy I began to understand how passionate I am about providing space for testers to breathe so they can discover their untapped potential. Sometimes all it needs is some space for thinking or re-assessing, sometimes it’s someone to say ‘you can do it’, sometimes it’s just being a ‘permission-giver’.
What do you think is the most common misconception about testing?
Many believe that software testing is easy and that anyone can test. It’s true that for some, maybe even many, purposes, software testing can be easy to perform and requires minimal skill. But in some contexts far greater skill is required.
Using art as an analogy, everyone can draw, but few would ever compare their artwork to a grand master’s. Creating a masterpiece takes skill and dedication. Not everyone needs or wants a masterpiece; many are happy with imitations. But where you need a masterpiece, you need skilled artists.
There’s more to artwork than mere brushstrokes. There’s more to software testing than pass/fail.
And now, the Jeopardy section. I’ll provide you with some answers and ask you to give me the questions…
What is something that everyone believes they understand but no-one can quite explain?
What is the biggest challenge within delivery that often gets ignored or left until the last minute to solve?
Lastly, the ‘Pass it on’ section. The question posed last time out, by Richard Bradshaw, was:
“I’m particularly fascinated with the potential of AI and smart algorithms. What potential usages do you see for AI in the context of testing?”
I saw a wonderful talk by Stephanie Wilson from Xero on how they’re using AI to train their test data. I think ideas like this have great potential for how we deal with some complex testing problems.
Recently I spoke to David Howden from Sajari, and he described Go-Fuzz, which refines the randomness of the test data it injects through an algorithmic self-learning process. These engineering solutions to testability really interest me. They will never be silver bullets for software testing, but they are great assets to draw on when required, and in combination with other approaches they may be a powerful way of tackling some of the software testing problems out there.
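For readers unfamiliar with the technique, the feedback loop behind coverage-guided fuzzers like Go-Fuzz can be sketched in a few lines. This is a toy illustration only, not Go-Fuzz’s actual algorithm: the `coverage_of` function stands in for a hypothetical instrumented system under test, and its “branches” are invented for the example.

```python
import random

def coverage_of(inp):
    # Hypothetical instrumented system under test: returns the set of
    # branches a given byte-string input exercises. (Invented for this sketch.)
    branches = set()
    if inp[:1] == b"G":
        branches.add("g")
        if inp[:2] == b"GO":
            branches.add("go")
            if len(inp) > 2:
                branches.add("payload")
    return branches

def mutate(inp):
    # Randomly overwrite, insert, or delete a single byte.
    data = bytearray(inp)
    op = random.random()
    if op < 0.5 and data:
        data[random.randrange(len(data))] = random.randrange(256)
    elif op < 0.8:
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif data:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(seed, rounds=50_000):
    # Keep only mutants that reach previously unseen branches; mutating
    # those "interesting" inputs gradually pushes deeper into the program.
    corpus = [seed]
    seen = coverage_of(seed)
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        cov = coverage_of(candidate)
        if cov - seen:  # candidate reached something new: keep it
            seen |= cov
            corpus.append(candidate)
    return seen, corpus

random.seed(0)
covered, corpus = fuzz(b"A")
print(sorted(covered))
```

Real tools such as Go-Fuzz and AFL use the same loop, but with genuine edge-coverage instrumentation, corpus minimisation and much smarter mutation strategies than this sketch.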
And finally, what question would you like to pose for next month’s participant?
How important is humour in the work we do and is this an underestimated element of our work?
I’d like to thank Anne-Marie for taking part in Q&A and sharing her knowledge and insights with us… see you next time!