We have become familiar with the terms functional and non-functional in relation to testing. There has been a fair amount of debate about these terms, dating back several years. I believe they present us with some problems, and I suggest that they are terms which we might be better off without.
What’s the problem?
There are three problems which I believe are worth considering:
Problem one – the use of ill-defined and inappropriate terms
People involved in software development can experience difficulties resulting from miscommunication and misunderstanding of the language we use. Whilst some confusion is understandable when new ideas and technology come into play, we should try to avoid incongruous terms within our own control. I believe the terms ‘non-functional testing’ and ‘functional testing’ fall into this category, and will explain why a little further on.
Problem two – division of testing activities into two categories based on those terms
It is not uncommon for projects to identify separate ‘functional requirements’ and ‘non-functional requirements’. The terms also have an influence on testing tasks and activities which take place. In my experience, some testing types which are considered as non-functional testing (e.g. Performance and Security) are given greater attention than others (e.g. Reliability, Maintainability and Portability).
This greater focus may be entirely appropriate in some circumstances, but by grouping all of these types together in a container marked non-functional, I believe that the latter group are sometimes forgotten. To make matters worse, decisions about important aspects of the testing approach – environments and data, tools, budgets, allocated time etc. – are sometimes taken as though these types of testing were not separate at all. A non-functional testing approach might conveniently tick a box on a project plan, but it doesn’t encourage nuanced thinking about what might be appropriate.
What is more, some of the classifications of testing types as non-functional make little sense; I cannot be alone in wondering how Usability testing came to be classified as non-functional.
Problem three – typecasting of people based on these two categories
I have become increasingly frustrated by the use of the terms functional tester and non-functional tester (does anyone really want to be described as non-functional?).
These terms are sometimes assigned to people who may have a broad range of skills but who have mostly been involved in one or other type of testing. They may lead to a tester working on similar tasks in the future with little consideration for that person’s other abilities or interests. For example, just because someone has extensive experience in Security Testing, it does not mean they know nothing about how to test a product’s features.
The terms could also set a boundary which may restrict what sort of problems testers look for. For example, defining someone as a functional tester might be telling them (either explicitly or otherwise) that it is not their job to look for problems with a system’s performance. This kind of segregation is restrictive to testers, unhelpful to our clients and customers, and can be damaging to our profession.
‘Non-functional testing’ does not work
Let us look in more detail at the term non-functional testing. Its use has been challenged, and sometimes ridiculed. It doesn’t take a great deal of effort to construct a sentence which demonstrates what an absurd term it is. Here are some examples from an Agile Alliance paper:
- “The non-functional test is functioning.”
- “The last non-functional test passed.”
- “I am not sure the non-functional test will work.”
I’d like to add this example:
- “The operational acceptance tests are non-functional.”
I have heard people involved in testing deride the term non-functional, yet continue to use it. Familiarity is not necessarily a good reason for continuity. We are not obliged to preserve a silly term simply because it is understood within our industry. Yet it is still being thrust upon unsuspecting testers via the ISTQB glossary and training material.
An alternative view: Parafunctional Testing
Some years ago, Cem Kaner proposed an alternative to non-functional testing: the term ‘parafunctional testing’.
This term won’t be new to many readers. The earliest reference I could find to parafunctional testing was in 2004. It would be easy to regard the phrase as simply a sensible replacement for non-functional testing since both terms are used to indicate any testing which is not defined as functional testing.
However, within the definitions of functional testing, I believe there is an important distinction which differentiates the intent of parafunctional testing from that of non-functional testing.
The description of functional testing within the ISTQB syllabus takes a Crosbyesque view of that testing – adherence to a specification:
“Functional Testing: Testing based on an analysis of the specification of the functionality of a component or system.”
The descriptions of functional testing and parafunctional testing which Cem Kaner uses in his 2004 paper, ‘The Ongoing Revolution’, provide a different definition; one which relates more directly to the person who will use the product, and distinguishes between those aspects of a product which are understood by customers and those which aren’t:
“Functional testing focuses on capability of the product and benefits to the end user. The relationships that are especially valuable to foster are between testers and customers (or customer advocates), so that testers can develop deeper understanding of the ways to prove that this product will not satisfy the customer’s needs.”
“Parafunctional testing focuses on those aspects of the product that aren’t tied to specific functions. These include security, usability, accessibility, supportability, localizability, interoperability, installability, performance, scalability, and so on. Unlike functional aspects of the product, which users and customers understand and relate to pretty well, users and customers are not experts in the parafunctional aspects of the product. Effective testing of these attributes will often require the testers to collaborate with other technical staff (such as programmers).”
This distinction emphasises consideration of the customer, rather than blind devotion to specifications or requirements, which can be ineffective at capturing customer desires and expectations. So, whilst parafunctional testing may refer to many of the same types of testing as non-functional testing, the outlook and motivation of the tester could be quite different.
Can we take this even further?
If we are to focus on the way that users and customers perceive a product, I would argue that many aspects of the product which are considered non-functional or parafunctional come into play. The customer’s experience is profoundly affected by some of these factors.
Usability is perhaps the most obvious example. Whilst a customer may not explicitly think of, or refer to a product’s usability, they will know when they don’t like using it, or in some cases can’t use it at all. The same could be said of Performance; most people who use websites can tell you when they think a page is taking too long to load or the navigation between fields seems slower than they might expect. Ask a person with impaired vision whether compliance with Accessibility guidelines affects the capability of the product for them. Of course it does.
The customer does not care whether something is functional or non-functional. They just care about their experience with the product and whether they can do the thing they set out to do.
This does not mean that we should ignore those aspects of testing which might be hidden from the customer’s view, but at the very least it should prompt us to reconsider some of the definitions we use.
But what about our functional tests?
Merely moving some testing types from one bucket into the functional bucket is not enough. Perhaps we can do without these terms entirely. Using the term functional testing seems to me to invite division of testing activities along these murky lines.
What would be lost by referring to what we are doing simply as testing? Where necessary, we could continue to use more precise terms, for example Performance Testing or Accessibility Testing, to clarify the specific objectives of that testing.
Of course, it might be that the objective of testing is to focus on the things which a product is supposed to do: the reason(s) we needed the product in the first place. Perhaps feature testing is a term which might be useful in these circumstances.
A feature could be any element of a product or system which is intended to meet the desires, needs or expectations of a customer. Sometimes we may place particular emphasis on testing of a feature or a group of features. At other times we may simply be testing the product whilst considering the many ways in which problems might occur (including Performance, Accessibility, Security, Usability and others).
The terms functional testing and non-functional testing and the definitions associated with them are more confusing than helpful. In my view, the division of testing activities into these two categories has little merit; there are better ways for us to think about testing, to prepare for it and to communicate what we are doing to our stakeholders and customers.
The phrases are not well understood among testers, and are inconsistently applied. The misunderstanding is exacerbated by the use of these terms in the wider software development community; project plans, schedules and contracts may mention them without a clear definition of what they mean. This can lead to important aspects of testing being undervalued or even overlooked.
We can address this by reducing their use: in reports, in the documents we prepare, and in the conversations we have with colleagues. We can be more precise when necessary, referring to specific types of testing or objectives of testing. At other times we can talk more broadly about testing as an activity; we can encourage others to see testers as people with an understanding of an array of problems which might occur, and the techniques to locate those problems. Testers can broaden their knowledge, skills and understanding without the constraints of an arbitrary dividing line, and without being typecast.
What do you think? Do these terms help you or confuse you? Are they so embedded in our consciousness that we have to accept them, however imperfect they are? Or can we drop them and move on? Perhaps you think it just doesn’t matter…
I would be very interested to read your comments.