
Serendipity

Recently, I was testing a new feature for a client. It had a known bug, which I'd found in prior testing and for which we'd figured out a work-around. I was now performing further testing of the feature, hoping to discover more issues and get a better feel for how it behaves.

[Image caption: "Does not apply to testers."]
By this time, the work-around had become the norm - the expected mode of operation for the feature. Essentially, this 'bug' had been found and was now 'fixed'. Time had moved on. So what did I do? I ignored the work-around, applied my 'test load' to the system and activated the new feature. The failure was somewhat spectacular. A short while later the entire system was inoperable and a restart of several servers was required.

This was interesting. If I'd had expectations of what would happen, they would have been for something simpler, less severe and closer to what had happened when I first found the bug that required the work-around.

After some investigation, I found the likely cause. The system had a mis-configuration causing load to be incorrectly balanced (everything was being sent to [only] one of two servers). When I spoke to the programmers, it became clear that the issue was caused by the code used in the work-around. Not only that, the work-around helped to mask the issue - because it protected the servers from the load!
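
To make that kind of mis-configuration concrete, here is a minimal, hypothetical sketch (the server names, weights and routing logic are mine, not the client's): with a weighted distribution in which one weight has been left at zero, every request lands on a single server, and that server alone absorbs whatever load you apply.

    # Hypothetical sketch of a load-balancing mis-configuration: a weighted
    # round-robin where one server's weight is zero, so it never gets traffic.
    import itertools

    servers = {"app-server-1": 1, "app-server-2": 0}  # mis-configured weights

    def request_targets(weights):
        """Yield target servers in proportion to their weights."""
        pool = [name for name, weight in weights.items() for _ in range(weight)]
        return itertools.cycle(pool)

    targets = request_targets(servers)
    print([next(targets) for _ in range(4)])
    # ['app-server-1', 'app-server-1', 'app-server-1', 'app-server-1']
    # -> one server takes all the load; the other sits idle.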

In short, if I had used the work-around (that is: expected usage) I might never have seen the failure. I've seen this effect before, and I suspect many testers have. By moving outside the 'safe operating parameters' you discover 'stuff'. You discover not only what can go wrong in the extremes, but what might happen in normal, run-of-the-mill usage. I'm deliberately saying 'stuff' here because we don't know what it's going to be. I am building new knowledge through experience.

Before I went on my 'voyage of discovery', I had certain preconceptions. I also had, it turns out, great gaps in my knowledge. By breaking the rules - or realising they don't apply and only hinder investigation - I put myself in a better position to learn more.

On a related note, some mainstream companies (OK... Apple) might benefit from some similar testing - outside the specifications. It might help them avoid negative publicity.

