I came across this excellent article on cognitive biases in software engineering, which you should read. Go on, I’ll wait.
I think it’s worth considering how the cognitive biases listed there affect the relationship between developers and test engineers on software projects. Now you really should read the linked article, if you haven’t already.
Fundamental Attribution Error
In social psychology, the fundamental attribution error (also known as correspondence bias or attribution effect) describes the tendency to overestimate the effect of disposition or personality and underestimate the effect of the situation in explaining social behavior.
As a developer, I’ve fallen victim to this error many times, and probably will again. I know I produce bugs, even though I try hard not to, but it’s always the damn test engineers who pester me with them.
Conversely, can’t developers run some simple tests before throwing their code over the wall to QA? It’s as if they just checked their code compiles, but never ran it, before committing it to the version control system.
I’ve seen this kind of conflict arise in many projects, and it’s particularly hard to combat. After all, the foundations for blaming the other side’s personality are laid in the company structure: QA personnel are typically less programming-savvy than developers, yet their job is to critique the developers’ work.
At the end of the day, everyone involved has to realize that this is a co-operative exercise, and not a power struggle. Buy each other a beer, and gripe about other things, and you’ll find you have more in common than you’re arguing about.
Confirmation Bias
Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that confirms their beliefs or hypotheses.
Whilst very common, confirmation bias is easy to combat in software testing. All it takes is a reminder that a piece of functionality needs only one test case with valid, simple data, and that any further such test cases are a waste of time. Instead, spend the time on edge cases, invalid data, and complex data (Unicode strings, fractional numbers, SQL injection attempts, and so on).
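To make that concrete, here’s a minimal sketch of the mindset. The function and its rules are entirely hypothetical, invented for illustration: one happy-path check, then all further effort spent on the awkward inputs.

```python
import unicodedata

# Hypothetical function under test: normalize a user-supplied name.
def normalize_name(name: str) -> str:
    if not isinstance(name, str):
        raise TypeError("name must be a string")
    cleaned = " ".join(name.split())  # collapse runs of whitespace
    if not cleaned:
        raise ValueError("name must not be empty")
    return unicodedata.normalize("NFC", cleaned)

# One test with valid, simple data is enough...
assert normalize_name("Ada Lovelace") == "Ada Lovelace"

# ...the remaining effort goes into edge cases and hostile data.
assert normalize_name("  Ada   Lovelace  ") == "Ada Lovelace"  # messy whitespace
assert normalize_name("Zoe\u0308") == "Zoë"                    # combining Unicode characters
assert "DROP TABLE" in normalize_name("Robert'); DROP TABLE users;--")  # hostile input passes through unmangled
for bad in ["", "   "]:  # degenerate inputs must be rejected, not silently accepted
    try:
        normalize_name(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

The ratio is the point: one confirming case, many disconfirming ones.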
Here’s where a team can also combat the fundamental attribution error above. Knowing what unlikely inputs might be tried against code is a good test engineer’s expertise, and developers quite often lack that perspective. Put the team together, and let them make this expertise part of the specification of a piece of functionality, and it will be much easier to get right from the start.
The Bandwagon Effect
The bandwagon effect asserts that conduct or beliefs spread among people, as fads and trends clearly do, with the probability of any individual adopting it increasing with the proportion who have already done so.
The bandwagon effect is a little harder to pin down in this context, but it’s probably most obvious in the difference between the common QA perspective and the common dev perspective on the same thing. That is to say, strong personalities in QA tend to differ from strong personalities in dev, which helps the two halves of the team drift into opposite camps.
One good example came up in a discussion on LinkedIn, where the question was posted whether two issues with the same root cause in code could be closed as duplicates of each other.
I think that depends on your perspective: if you’re a test engineer, you tend to err more on the side of the customer, who – presuming no technical knowledge – will treat the two issues as entirely separate. If you’re a developer, whose perspective is closer to the code, you’re likely to treat them as one, because there is one way to fix both.
The point of this is that either perspective is valid, and adopting just one or just the other – because some strong personality prefers it – is probably not the best thing for your team.
Hyperbolic Discounting
Given two similar rewards, humans show a preference for the one that arrives sooner rather than later. Humans are said to discount the value of the later reward by a factor that increases with the length of the delay.
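The discounting described here is commonly modeled hyperbolically. As a sketch (the formula and the rate k are the standard textbook model, not something from the article):

```python
# Hyperbolic discounting: the present value V of a reward A
# received after a delay D is
#     V = A / (1 + k * D)
# where k is an individual's discount rate (assumed here for illustration).
def discounted_value(amount: float, delay: float, k: float = 0.1) -> float:
    return amount / (1 + k * delay)

# A larger reward later can feel worth far less than a smaller one now:
now = discounted_value(100, delay=0)     # -> 100.0
later = discounted_value(120, delay=30)  # -> 30.0
```

For a developer, "ship the quick hack now" is the immediate reward; "avoid the technical debt" is the larger, heavily discounted one.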
This particular bias sits almost entirely on the development side of a software project, and unfortunately it’s pretty difficult for a development team to balance out. The more experience you gain as a developer, the more aware you become of the technical debt you incur when you take shortcuts, and the more you tend to fight shortcuts (or at least those that lead to high technical debt).
This stance tends to conflict with other needs of the company, such as getting a product out of the door, or satisfying a particular customer. In that kind of conflict, QA again tends to side with the customer, because those business needs are immediately understandable, while technical debt is hard to see unless you’re immersed in the nuts and bolts of the code.
This is really an issue of management, rather than either development or QA. It’s up to management to balance one set of needs against the other. But if everyone in the team is aware of this conflict, then perhaps it’s easier to find the right balance together.
Negativity Bias
Negativity bias is the psychological phenomenon by which humans pay more attention to and give more weight to negative rather than positive experiences or other kinds of information.
Negativity bias is a particularly easy error for QA to make. As test engineers are primarily occupied with discovering negatives (bugs), it is very easy for them to assume a piece of software is unshippably broken because of all the stuff they’ve discovered.
That’s why it’s the product manager’s responsibility to define acceptance criteria together with the test lead. In all likelihood, the test lead will tend towards a zero-defects policy, whilst the product manager on their own would disregard any acceptance criteria that require too deep a technical understanding.
Conclusion
I’ll be lazy, and quote straight from the linked article:
For all of these biases, the most important step is to recognize that they exist, and try to identify them when they appear in your day to day life. Once you do that, just take a step back and think about the situation. It can even help to state the bias out loud – especially if other people are involved. By bringing up the idea that you might be falling into a common cognitive trap, you can refocus the conversation and clear your mind.
That’s exactly it.