Welcome to the Real World, Microsoft

Alright, that’s a provocative title right there, I admit it. I couldn’t resist.

A short while ago, I was perusing the July 2011 issue of the Communications of the ACM, when I came across an interview article with some of the people behind a massive retro-documentation project at Microsoft. Microsoft had to document much of the client-server communications of their existing software to allow third parties to implement interoperable software. The article is succinctly called “Microsoft’s Protocol Documentation Program: Interoperability Testing at Scale”. Definitely worth a read.

The sentence that struck me, and that is repeated in sentiment in the remainder of the article is this: “First and foremost, a team would be required to test documentation, not software, which is the inversion of the normal QA process; (…)”.

I’m not sure I agree with that statement. And to be fair, that statement isn’t the point of the CACM article, either. But it presents a good launch pad for something that concerns me a little.

What the Microsoft guy is effectively saying is that the normal QA process is to take some documentation and some software, test the software, and if it doesn’t match up with the documentation, reject it. While that sounds like a good description of the QA process as it exists in many people’s minds, the implication must be that the software is an implementation of some strictly documented specification, and that in turn strikes me as unrealistic.

My experience is primarily as a software engineer, and while I have occasionally had to implement some RFC or similar specification, in the vast majority of cases, that’s not happening. In the rare cases that software is implemented from a specification, it almost invariably deviates from the specification, too — and often intentionally so. Think of how browsers deviate in how they render documents as an example; some of these deviations are due to the way specs are interpreted by different parties, but some are deliberate.

What happens instead is that software is vaguely specced out, written, unit tested (if you’re lucky), and if developers have a good enough incentive, documentation of what the software de facto does is done last. Whether or not that’s a good thing is up for debate, but in this post I don’t want to pass judgement on existing software engineering practices.

Based on such experience, I think it’s a slightly dangerous misunderstanding of the QA process to assume that it treats documentation as the standard against which to test software. A more realistic and valuable understanding would be that the QA process attempts to detect where documentation and software behaviour are in conflict.

The rationale centres on what in the scientific method is called confirmation bias: the tendency of people to seek out evidence that supports their hypothesis, rather than evidence that could refute it.

In terms of testing, confirmation bias is most easily evident when tests only cover known good parameters instead of also covering known bad combinations. Take the example of testing a login form that requires you to enter an email address and a password: the truly interesting tests are those that run without the email, or without the password, or with a string in the email field that doesn’t conform to email address syntax. It’s far more tempting to confirm the hypothesis that the login form works than to try to disprove it.
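To make that concrete, here is a minimal sketch in Python. The `validate_login` function and its email check are purely illustrative (a real form would also verify credentials against a user store); the point is the split between the one happy-path assertion and the several attempts to disprove it.

```python
import re

# Hypothetical validator, for illustration only. A real login form would
# also check the credentials against a user store.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_login(email, password):
    """Return True only if both fields are present and the email is well-formed."""
    if not email or not password:
        return False
    return bool(EMAIL_RE.match(email))

# The "obvious" test confirms the hypothesis that the form works...
assert validate_login("user@example.com", "s3cret") is True

# ...but the interesting tests are the ones that try to disprove it.
assert validate_login("", "s3cret") is False              # missing email
assert validate_login("user@example.com", "") is False    # missing password
assert validate_login("not-an-email", "s3cret") is False  # malformed email
```

Notice that three of the four assertions test rejection, not acceptance; that ratio is roughly what fighting confirmation bias looks like in practice.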

Testing a login form is a relatively simple example, though. Confirmation bias in testing can occur in more subtle forms. If you’re testing a network protocol, for example, much of the traffic on the wire will be defined as requests and responses of some sort. It is usually implied — and most likely documented — that each request to a machine should result in a response being sent back from that machine. What if the machine sends multiple responses? What if it sends a response packet without a request being made? What if a different machine responds than the one the request was sent to?

In many protocols, such behaviour is so outlandish that it’s not even documented that, for example, such extra responses should be discarded (as one may somewhat reasonably assume). Yet it’s the existence of such responses that can throw off server software quite badly. The problem here isn’t that documentation describes behaviour that deviates from what the software does, it’s that the documentation omits behaviour entirely. It’s entirely reasonable to assume that even seasoned QA engineers will therefore also omit testing this crucial aspect of the software and/or documentation.
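A defensive client side for such a protocol might look something like the sketch below. Everything here is illustrative — the class, the use of a request ID for matching, the callback style — but it shows the behaviour the documentation typically leaves unstated: a response is honoured at most once, and only for a request that was actually made.

```python
# Sketch of a client-side response dispatcher that tolerates the
# undocumented cases above. All names are illustrative; real protocols
# typically match responses to requests via a transaction/request ID.

class ResponseDispatcher:
    def __init__(self):
        self.pending = {}  # request_id -> callback awaiting a response

    def send_request(self, request_id, callback):
        """Record an outstanding request (sending itself is omitted here)."""
        self.pending[request_id] = callback

    def on_response(self, request_id, payload):
        """Deliver a response at most once, and only for a known request."""
        callback = self.pending.pop(request_id, None)
        if callback is None:
            # Duplicate or unsolicited response: discard it rather than
            # letting it confuse (or crash) the rest of the stack.
            return False
        callback(payload)
        return True

results = []
d = ResponseDispatcher()
d.send_request(1, results.append)
d.on_response(1, "ok")        # normal request/response pair: delivered
d.on_response(1, "ok again")  # duplicate response: discarded
d.on_response(99, "???")      # response without a request: discarded
# results is now ["ok"]
```

A QA suite that only ever sends one well-formed response per request would never exercise the two `discard` branches — which is exactly the omission the paragraph above describes.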

Given all the above, I do agree strongly with the Microsoft team that a change of perspective on QA may be required and useful. Where they speak about a specific goal, however, I think the change in perspective should be more general. In order to avoid all of the consequences of confirmation bias, QA should not be viewed as testing whether software conforms to specification. Instead, that should be viewed as a rare case that applies only in the special circumstances when software is also implemented from such specification.

In the real world (to get back to the title), QA tests documentation, software, and the ability of engineers to fill the gaps left between the two alike.

Written by: Mark Barzilay

Graduated with honors from TU Delft in 2007 studying Electrical Engineering and Media & Knowledge Engineering. Founded spriteCloud in 2009 and worked on test automation ever since, helping out small and large companies with their test automation strategy and infrastructure. Mark is also leading the development on Calliope.pro, an online platform for all your automated test results.
