
Let's talk about testing

Software Lifecycle Consultancy

It feels like ages since I last blogged about the place of QA in software development. Granted, in internet years it has been a while. But the topic never seems to quite go away.

Since I wrote that post, a lot has happened here at spriteCloud. We’ve been involved in the testing process of quite a few more customers (yay!). One thing that emerged over time is that many companies don’t come to us for testing alone, but also for an answer to the question of how to fit testing into their development and release process.

When you try to answer that question, you quite often discover that the reason these companies didn’t know how to fit testing into their process is that there is not much of a process in place to fit testing into. To us QA-minded people that may come as a surprise, but it really shouldn’t.

As one young developer at one of our clients put it to me (slightly paraphrased): “They should really teach release management in university. Programming is easy enough to pick up, but this stuff is hard to figure out on your own.” I sympathize with that, as it mirrors my own experience from some ten years ago, when I started out on my development career.

Before you can teach anything about development processes, though, you first need to understand the software lifecycle.

Let’s start with the basics: it used to be taught (usually as a bad example) that the software lifecycle works a bit like a waterfall: software is completely specified before it is completely implemented, before it is completely tested, before it is released and finally maintained.

Waterfall Model of Software Development

The model deliberately simplifies things to the point of being fantastically incorrect in describing the real world. It does, however, describe accurately that some tasks within the development process by necessity depend upon the completion of other tasks. Where it fails is that it assumes these tasks can be grouped into five completely separate stages. In this model, testing falls solely into the fourth stage, that of verification.

When agile development methodologies sprang up, they usually started by pointing out the flawed assumptions behind the waterfall model. The most glaring flaw to many is that software rarely goes into maintenance mode: usually it is developed further, and newer versions simply supersede older ones. That last stage of the waterfall model, then, should really be done away with.

What happens instead is that a whole new design-build-release cycle is started, one that works out bugs in the previous release and adds new features, all depending of course on the overall product plan. The best model for describing things, then, should circle back into itself, as any good lifecycle would.

One thing that’s slightly alarming to quality-minded people is that a “verification” stage is usually dropped from this model. Depending on which flavour of agile proponent you ask, they’ll state either that testing should drive development, or that testing is done in production by end-users. The latter group usually legitimize their stance by also demanding very short iterations of this circular model, such that end-users would not be inconvenienced by spurious bugs for long anyway.

There is an argument to be made for both, of course, but both miss critical points about testing:

  1. Test-driven development misses the fact that testing isn’t just about testing code. Behaviour-driven development is a much saner approach, as it focuses more on the end-user experience than anything else. As such, it implies that whoever thinks up what a piece of software should do should also be involved in defining what behaviour needs to be tested for, which in larger teams means there is now a dependency between several people’s deliverables (see the sketch after this list).
  
  2. On the other hand, production testing misses the fact that end-users aren’t necessarily product designers. Of course it’s important to take their feedback into account, but different users have different acceptance criteria. It requires careful design to figure out how to best serve the majority of users, and it’s that design that needs to be fulfilled, not the user’s expectations.
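
To make point 1 a little more concrete, here is a minimal sketch of what a behaviour-level acceptance test might look like, written as plain Python with pytest rather than a dedicated BDD framework. Everything in it (the `ShoppingCart` class, the discount rule, the numbers) is invented for illustration; the point is simply that the tests read as statements about desired end-user behaviour, written down by whoever designed that behaviour, rather than as probes of any one function’s internals.

```python
import pytest


class ShoppingCart:
    """Toy implementation, only here so the example runs end to end."""

    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        subtotal = sum(price * quantity for _, price, quantity in self._items)
        # Invented business rule: orders of 100.00 or more get 10% off.
        return subtotal * 0.9 if subtotal >= 100 else subtotal


# The tests below describe behaviour a product designer could have written
# down ("bulk orders get a discount"), not the internals of total().

def test_bulk_orders_receive_a_discount():
    # Given a customer with more than 100.00 worth of goods in their cart...
    cart = ShoppingCart()
    cart.add("keyboard", price=60.00)
    cart.add("mouse", price=50.00)

    # ...when they check out, then they pay the discounted price.
    assert cart.total() == pytest.approx(99.00)


def test_small_orders_pay_full_price():
    # Given a customer below the discount threshold, no discount applies.
    cart = ShoppingCart()
    cart.add("mouse", price=50.00)
    assert cart.total() == pytest.approx(50.00)
```

The same intent could just as well be captured in a Gherkin feature file and wired up with a tool such as Cucumber; the format matters less than the fact that the expected behaviour was written down by the people who designed it, before or alongside the implementation.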

Put differently, you once again need a separate stage in the process at which acceptance test criteria are verified.

If you’ve followed my line of argument closely, you may have noticed that I’m edging towards a statement about testing that is often slightly misunderstood: of course testing serves to eliminate bugs, but when bugs come in the form of undesirable software behaviour rather than crashes or spurious errors, someone really needs to sit down and describe what the desired software behaviour should be. In other words, there is no testing without a plan. And plans, at least traditionally, happen close to the start of any project (or iteration).

Once you’ve internalized that, it becomes quite clear that putting a neat “verification” box into either the waterfall or agile model cannot really reflect the reality of the software lifecycle. So let’s draw a picture that’s closer to reality.

Software Lifecycle

That looks a bit more daunting, doesn’t it? It should, because it gives up on the whole “stages” thing and breaks down the lifecycle into roles and the things these roles produce (lined arrows). Dotted arrows describe which artefacts serve as input into which other artefacts.

Once we started describing these roles and their interactions within the lifecycle to customers, things tended to become a lot clearer for them — they started seeing how some form of test planning needs to happen even before development starts properly.

But surprisingly, things became clearer to us, too. You see, the green blobs in the image cover spriteCloud’s core competencies. We’re a QA company, sure… but by being able to explain and conduct processes and best practices, we cover the release manager and (to an extent) lead developer roles as well, whether we plan to do so or not.

Put into numbers, we provide expertise for four out of eight distinct roles (50%) and produce six out of ten artefacts (60%). That’s over half of the stuff that happens in software development!

If we tried to push it a little, we could go even further: we all have, to a lesser or greater degree, software development experience. Writing test code is not at all the same as writing functional code, but it’s more experience than many newbie developers sport. We all have, to a lesser or greater degree, operations experience. Setting up test environments does that to you.

By contrast, many young companies out there have great product managers, UX designers and developers, but not much of the green glue that holds them together. No wonder we often start talking about things other than testing!

There’s only one sensible conclusion from all this: we’ve decided that we’re “officially” shifting our company focus away from pure testing services, and now offer consultancy along the whole software lifecycle.


