
The arrogance of regression testing

Let’s assume we know that our software is not perfect. How can it be? It’s complex, mortals created it, and we don’t have enough time to test every execution path and environment – so we could never be sure anyway. This is OK, this is normal; testers deal with this situation every day.

This tends to be a typical scenario... Our team has been working on some new features. They’re looking good, initial teething issues have been fixed and the new features are considered worthwhile enough and bug-sparse enough to be released into the wild.

This is where things can get a little awkward. The team members’ opinions are often split across a wide spectrum. The relatively minor perceived impact of the work leads some to conclude that the work is ready for release as is.

Other team members, who are perhaps twice shy from problems induced by previous ‘minor changes’, argue for a comprehensive ‘regression test’ of the software. There is usually a range of views in between, suggesting, for example, ‘regression testing’ only the directly affected systems or those associated with the changes.

The oft-stated concern is: we may have broken something. Our new code might be good, the existing code might be even better, but what about emergent issues caused by the new ‘system’ we’ve created?

A common compromise proposed by teams and customers is generally translated as ‘test that it [the core features etc.] still works’. This may sound reasonable, intelligent and practical, but it just doesn’t sit well with me. My unease stems from more than the assumption that testers can prove it works…again. The concept of regression testing seems arrogant and, even worse, wasteful.

Arrogant? Regression testing assumes that we tested the application so well before that we found all the important areas of ambiguity and bugs. It’s like saying that when Microsoft released a patch to Vista, they only needed to ensure it was up to the high standard of the original Vista release.

The root of the problem here is a bias. We start out with the perception ‘our system is great’, and then we check that it still is. The congruence bias is a powerful motivator not to upset the perceived status quo. We stop looking for problems that we don’t think are caused by the new changes. Whereas we might investigate these ‘issues’ in other circumstances, here we don’t even think to look deeper unless they are practically labeled “BROKEN BY THE NEW CODE”. How many issues are overlooked?

Wasteful? By misdirecting ourselves from the system as a whole, we are missing an opportunity to test again – a second chance to find those bugs we didn’t find last time. We could be capitalizing on another chance to learn new and old areas of the application.

When I’ve been in this situation I’ve also fallen foul of the biases at play; I’m human. But as a tester, I’ve had to come up with a few strategies to help remedy the problem:
  • Firstly, just be aware of the bias - don't let it lead you blindly.
  • If you’re lucky enough to have another tester in your team, split the problem with them. One of you can test the system as a whole, the other the ‘affected areas’.
  • Deliberate opposition. A technique I use when I’ve got a definite checklist of the affected areas: deliberately pick things not on the list, or that are the direct opposite of what is on the list. For example, if the change affects logging in as user X, what does logging out as user Y do? Or can you avoid being logged in altogether? (A short sketch of this idea follows the list.)
  • Randomness. Choose a path or some data at random. For good randomness, www.random.org is a useful source.
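
To make the last two strategies concrete, here is a minimal sketch in Python. The area names and the ‘affected areas’ checklist are entirely hypothetical; the point is only to show how you might mechanically pick test areas that the change list would otherwise steer you away from.

```python
import random

# Hypothetical test areas and checklist - illustration only; a real
# system map and impact analysis would supply these.
ALL_AREAS = {
    "login", "logout", "anonymous browsing", "search",
    "reporting", "billing", "user profile", "notifications",
}
AFFECTED_AREAS = {"login"}  # the 'directly affected' checklist

# Deliberate opposition: everything NOT on the checklist is a candidate.
candidates = sorted(ALL_AREAS - AFFECTED_AREAS)

# Randomness: sample a few candidates for this session. Python's PRNG
# is enough for a sketch; numbers from random.org could seed it instead.
session_picks = random.sample(candidates, k=3)

print("Checklist says test:", sorted(AFFECTED_AREAS))
print("Opposition/random picks:", session_picks)
```

Even something this crude helps, because the selection is no longer anchored to the change list – it forces a look at areas the congruence bias would have you skip.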

I find the above a useful means of breaking out of the tunnel vision of regression testing. These strategies give you new paths to follow that, if explored, can often yield new and old bugs.

Comments

  1. Good thoughts!

    I was recently asked how I could "guarantee" that the system still works after a software change (by use of an automated regression suite). I replied that I couldn't guarantee anything, unless the guarantee stated that the narrow "paths" traversed by any automated suite would be exactly the same as those every possible customer would use – the same starting states, the same timing between actions, the same data transmitted, the same user-provisioned data in the databases, and so on.

    A very confused and disappointed face greeted me.

    The upshot was that I asked the person to be careful with language – to try to be precise, even – and definitely not to promise anything whose implications they don't understand... I could tell the person lots about the software and the testing of it, but I'd probably never use the word "guarantee".

    Talking about testing, what it means and its implications, is a big effort! And it's something that testers need to devote just as much time, effort and understanding to as any other part of their repertoire.

  2. Thanks Simon. Yes, I've been in your situation too; it can be a challenge to manage expectations. Using the 'right' language can help a lot. I also find keeping a mental list of 'similar issues' can be useful. Real failures often speak louder than words.

    For example: "You're right, this looks like a simple change. But remember when we released 'The XYZ'? That was a change to the same sub-system, and it turned out to affect 'The MNO' - badly."


