
Convexity in Predictive Value & Why Your Tests Are Flaky.



A long time ago, in a country far away, a cunning politician suggested a way to reduce crime. He claimed that a simple test could be used to catch all the criminals. When tested, all the criminals would fail the test and be locked up. There’d be no need for expensive courts, crooked lawyers or long drawn-out trials.

The politician failed to give details of the test when pressed by journalists, stating that the test was very sensitive and they wouldn’t understand it. His supporters soon had their way and the politician was elected to office.

On his first day in office, he deployed his national program of criminality-testing. Inevitably the details of the test leaked out. The test was simple and was indeed capable of ensuring 100% of criminals were detected.

The test was: If the person is alive, find them guilty and lock them up.

The test had a sensitivity of 100%: every single actual... real... bona fide criminal would fail the test and find themselves in prison.

Unfortunately, the test was not specific. Its specificity, found after an extensive and thorough review, was 0%. All the people who were definitely not criminals also found themselves ‘guilty’ and were sent to prison.


In medicine, “sensitivity” and “specificity” are used to describe the accuracy of medical tests. Combined with the disease prevalence (the proportion of people who actually have the disease, or in our case the percentage of criminals in the population), clinicians can calculate the Predictive Values of a test.

The fabled 'Boy who cried wolf': a case of an alarm that was ignored due to too many false alarms.

Positive & Negative Predictive Value are ways to summarise the usefulness of a test. A high Positive Predictive Value would mean that the majority of people who tested positive for a disease actually had the disease and weren't victims of a false alarm from a dodgy diagnostic test.

In software development, a flaky test, that is one with a low Positive Predictive Value (PPV), can be a useful entry point into how the app or the tests are functioning. It’s the sort of messy real-world situation that can illuminate the emergent behaviour of a complicated system.


But to create reliable tests, suitable for a continuous integration system, we need tests with a high Positive Predictive Value. As our electorate (above) found, failing to check the specificity of the test can come back to haunt us: without a high specificity, we can be landed with a lot of false alarms.

For a fairly rare bug, one that might cause a failure only 5% of the time, you need to be careful not to lower the test's specificity. The reason for this is a convex relationship between specificity and Positive Predictive Value when we maintain a high sensitivity.

Figure 1: The convex relationship between Specificity and Positive Predictive Value is important when choosing where to focus your team's time. Not ensuring your tests are highly specific will tend to cause a disproportionate number of unhelpful failures.
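To make Figure 1 concrete, here is a rough sketch of the kind of sweep behind it (my own illustration, not the GitHub code mentioned below), assuming the 5% failure prevalence above and sensitivity held at 100%:

```python
# Sweep specificity with sensitivity fixed at 100% and prevalence at 5%,
# mirroring the shape of the curve in Figure 1.
prevalence, sensitivity = 0.05, 1.0

for specificity in (0.80, 0.90, 0.95, 0.99, 0.999, 1.0):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    ppv = true_positives / (true_positives + false_positives)
    print(f"specificity={specificity:.3f}  PPV={ppv:.2f}")

# PPV creeps up slowly (~0.21, 0.35, 0.51, 0.84, 0.98) and only reaches 1.0
# at perfect specificity -- the convex shape of Figure 1.
```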


Conversely, the relationship between sensitivity and Positive Predictive Value is concave, given a high specificity.

Figure 2: The concave relationship between Sensitivity and PPV can give test developers a false sense of security regarding their tests' usefulness.
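For comparison, the corresponding sweep over sensitivity (again my own sketch, with specificity held at an assumed 99% and prevalence at 5%) flattens out quickly:

```python
# Sweep sensitivity with specificity fixed at 99% and prevalence at 5%,
# mirroring the shape of the curve in Figure 2.
prevalence, specificity = 0.05, 0.99

for sensitivity in (0.50, 0.70, 0.90, 0.99, 1.0):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    ppv = true_positives / (true_positives + false_positives)
    print(f"sensitivity={sensitivity:.2f}  PPV={ppv:.2f}")

# PPV starts around 0.72 and flattens near 0.84 -- the concave shape of
# Figure 2, which is why a "pretty good" sensitivity can feel deceptively safe.
```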

The consequence is that a slight drop in the specificity of your test can have catastrophic effects on your test’s usefulness. Even a minor degradation in specificity can mean that many of the test failures are false alarms.

The naive practitioner often attempts to increase a team's test-automation levels by encouraging scenario testing, focusing on checking if a feature is present and ‘working’. This soon results in a preponderance of tests that fail intermittently. You now have an app that may be flaky, a bunch of tests that definitely are flaky and no easy route to refactor your way to safety.

In case you're wondering about the effects of Specificity and Sensitivity on Negative Predictive Value, that is the usefulness of the test in showing that you are all-clear when you actually are all-clear: you can see in figures 3 & 4 that it remains at relatively high levels in both scenarios.

Bonus: The code for these graphs can be found on GitHub.

Figure 3: Because the majority of test runs are on a working system, varying the Sensitivity has little impact on the Negative Predictive Value of the test.

Figure 4: Because the majority of test runs are on a working system, varying the Specificity also has little impact on the Negative Predictive Value.
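A quick check with the same formulas (my own sketch, again at a 5% failure prevalence) shows why the NPV curves in Figures 3 and 4 stay so flat:

```python
# NPV at 5% prevalence: it stays high across wide ranges of both parameters,
# because most test runs are against a working system.
prevalence = 0.05

def npv(sensitivity, specificity):
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

print(npv(1.0, 0.80))   # 1.00 -- low specificity barely dents the NPV
print(npv(0.80, 0.99))  # ~0.99 -- nor does a modest sensitivity
```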
