Evaluating the success of test automation; can it really be done with metrics?
Evaluating the success of test automation is a lot like pagan mystic voodoo: there is a lot of activity involved, much chanting and ranting, and a fair amount of ritual, with the end result being either completely valid or highly dubious.
All of us here presumably believe there is value in automation, otherwise we wouldn’t be part of this group (let alone reading these very wordy forum posts of mine). But how do we know what that value is? Can it be stated in some kind of metric that makes sense to the world at large, or is it still mainly anecdotal?
I have often faced this scenario of explaining why, as a test manager, I’ve put resources from my group into test automation. There are the obvious answers, such as “it saves us time in regression testing basic scenarios that are manually intensive”, login and registration functionality being a fine example. But can this be equated in some way with time invested up front versus time saved per test cycle in which the automation is used?
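The up-front-cost-versus-per-cycle-saving question can at least be sketched as a break-even calculation. All the figures below are hypothetical placeholders, not measured data, and the function name is my own invention:

```python
# Break-even sketch: after how many regression cycles does automation
# pay off? All figures are hypothetical, not measured data.

def break_even_cycles(upfront_hours: float,
                      manual_hours_per_cycle: float,
                      automated_hours_per_cycle: float) -> float:
    """Cycles needed before cumulative savings cover the upfront cost."""
    saving_per_cycle = manual_hours_per_cycle - automated_hours_per_cycle
    if saving_per_cycle <= 0:
        raise ValueError("automation must save time per cycle to pay off")
    return upfront_hours / saving_per_cycle

# e.g. 40 hours to script login/registration checks that take 8 hours
# manually but 1 hour automated per regression cycle:
print(break_even_cycles(40, 8, 1))  # ~5.7 cycles to break even
```

Even this toy model makes the discussion concrete: if the suite won’t survive roughly six cycles unchanged, the investment never pays back.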
Then of course there is the indirect value to consider. Test automation doesn’t find defects in the first instance; that is up to the test techniques applied manually by a tester. So automation isn’t something that contributes immediate value, but value that accrues after a certain period of time on a software project. Test automation scripts also degrade over time, hence they require maintenance. If the maintenance phase is shortened or skipped entirely because of project delivery deadlines, the automation scripts eventually become useless, and ever more time is required to keep them valid with each release.
So it seems to me that some form of metric analysis could be performed, but would it really be useful? Is the value of test automation really something that can be quantified, or is it destined to be like Zen: you either get it or you don’t?