
The arrogance of regression testing

Let's assume we know that our software is not perfect. How could it be? It's complex, mortals created it, and we don't have enough time to test every execution path and environment – so we could never be sure anyway. This is OK – this is normal; testers deal with this situation every day.

This tends to be a typical scenario: our team has been working on some new features. They're looking good, initial teething issues have been fixed, and the new features are considered worthwhile enough, and bug-sparse enough, to be released into the wild.

This is where things can get a little awkward. The team members' opinions are often split across a wide spectrum. The relatively minor perceived impact of the work leads some to conclude that it is ready for release as is.

Other team members, possibly twice shy from previous 'minor change' induced problems, argue for a comprehensive 'regression test' of the software. There is usually a range of views in between, suggesting, for example, 'regression testing' only the directly affected systems or those associated with the changes.

The oft-stated concern is: we may have broken something. Our new code might be good, the existing code might be even better, but what about emergent issues caused by the new ‘system’ we’ve created?

A common compromise proposed by teams and customers generally translates as 'test that it [the core features etc.] still works'. This may sound reasonable, intelligent and practical, but it just doesn't sit well with me. My unease stems from more than the assumption that testers can prove it works… again. The concept of regression testing seems arrogant and, even worse, it seems wasteful.

Arrogant? Regression testing assumes that we tested the application so well before that we found all the important areas of ambiguity and bugs. It's like saying that when Microsoft released a patch to Vista, they only needed to ensure it was up to the high standard of the original Vista release.

The root of the problem here is a bias. We start out with the perception 'our system is great, let's check it still is'. Congruence bias is a powerful motivator not to upset the perceived status quo. We stop looking for problems that we don't think are caused by the new changes. Whereas we might investigate these 'issues' in other circumstances, we don't even think to look deeper unless they are practically labeled "BROKEN BY THE NEW CODE". How many issues are overlooked?

Wasteful? By directing our attention away from the system as a whole, we are missing an opportunity to test it again: a second chance to find those bugs we didn't find last time. We could be capitalizing on another chance to learn new and old areas of the application.

When I've been in this situation I've also fallen foul of the biases at play; I'm human. But as a tester, I've had to come up with a few strategies to help remedy the problem:
  • Firstly, just be aware of the bias - don't let it lead you blindly.
  • If you’re lucky enough to have another tester in your team, split the problem with them. One of you can test the system as a whole, the other the ‘affected areas’.
  • Deliberate opposition. A technique I use when I've got a definite checklist of the affected areas: deliberately pick things not on the list, or that are the direct opposite of what is on the list. For example, if the change affects logging in as user X, what does logging out as user Y do? Or can you avoid being logged in altogether?
  • Randomness. Choose a path or some data at random. For good randomness www.random.org is a useful source. (A rough sketch combining this with deliberate opposition follows this list.)
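
As an illustration of the last two ideas, here is a minimal Python sketch. The feature names and the sample size are made up for the example; the point is simply to subtract the 'affected areas' checklist from a wider catalogue of features and let randomness pick what to explore next (a number fetched from www.random.org could be used as the seed if you want the selection to be independently random).

    import random

    # Hypothetical checklist of 'affected areas' supplied by the team (illustrative names).
    affected_areas = {"login_as_user_x", "password_reset", "session_timeout"}

    # A wider catalogue of things the system can do (again, illustrative names only).
    all_features = {
        "login_as_user_x", "logout_as_user_y", "anonymous_browsing",
        "password_reset", "session_timeout", "report_export",
        "bulk_import", "locale_switching", "audit_log_view",
    }

    # Deliberate opposition: candidates are everything *not* on the checklist.
    opposition_candidates = sorted(all_features - affected_areas)

    # Randomness: pick a few of those candidates to explore in this session.
    # random.seed() uses the system entropy source; a number from
    # www.random.org could be passed in instead if preferred.
    random.seed()
    session_targets = random.sample(opposition_candidates, k=3)

    print("On the 'affected' checklist:", sorted(affected_areas))
    print("Randomly chosen opposition targets:", session_targets)

None of this replaces judgement; it just gives you a starting point that the congruence bias would not have chosen for you.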

I find the above useful means of breaking out of the tunnel vision of regression testing. They give you new paths to follow that, if explored, can often yield new and old bugs.

Comments

  1. Good thoughts!

    I was recently asked how I could "guarantee" that the system still works after a software change (by use of an automated regression suite). I replied that I couldn't guarantee anything unless the guarantee stated that the narrow "paths" traversed by the automated suite would be exactly the same as those every possible customer would use (same starting states, the same timing between actions, the exact same data transmitted, the same user-provisioned data in the databases, etc.).

    A very confused and disappointed face greeted me.

    The upshot was that I asked the person to be careful with language - try and be precise even - and definitely don't promise anything that you don't understand the implications of... I could tell the person lots about the software and the testing of it, but I'd probably never use the word "guarantee".

    Talking about testing, what it means and its implications is a big effort! And it's something that testers need to devote just as much time, effort and understanding to as any other part of their repertoire.

  2. Thanks Simon. Yes, I've been in your situation also; it can be a challenge to manage expectations. Using the 'right' language can help a lot. I also find keeping a mental list of 'similar issues' can be useful. Real failures often speak louder than words.

    For example: "You're right this looks like a simple change, But remember when we released 'The XYZ', and that was a change to the same sub-system. It turned out to affect 'The MNO' - badly"
