
Serendipity

Recently, I was testing a new feature for a client. It had a known bug, one I'd found in prior testing, for which we'd figured out a work-around. I was now performing further testing of the feature, hoping to discover more issues and get a better feel for how it behaves.

[Image caption: Does not apply to testers.]
By this time, the work-around had become the norm - the expected mode of operation for the feature. Essentially, this 'bug' had been found and was now fixed. Time had moved on. So what did I do? I ignored the work-around, applied my 'test load' to the system and activated the new feature. The failure was somewhat spectacular: a short while later the entire system was inoperable, and several servers had to be restarted.

This was interesting. If I'd had any expectations of what would happen, they would have been for something simpler, less severe and closer to what I'd seen when I first found the bug that required the work-around.

After some investigation, I found the likely cause. The system had a misconfiguration causing load to be incorrectly balanced (everything was being sent to [only] one of two servers). Speaking to the programmers, I learned that the issue was caused by the code used in the work-around. Not only that, the work-around had helped to mask the issue - because it protected the servers from the load!
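
The post doesn't show the client's actual code, but the failure mode is easy to sketch. Here is a minimal, entirely hypothetical Python illustration of 'balancing' that sends every request to one of two servers - the server names and the bad modulus are my invention, not the real system:

import collections

# Hypothetical sketch of the misconfiguration described above: a
# "balancer" that is meant to round-robin across two servers but,
# due to a bad modulus, always picks the first one.
SERVERS = ["server-a", "server-b"]

def pick_server(request_id: int) -> str:
    # Intended: request_id % len(SERVERS), alternating between servers.
    # Actual (the bug): % 1 always yields 0, so server-a gets everything.
    return SERVERS[request_id % 1]

def apply_load(n_requests: int) -> collections.Counter:
    """Simulate a test load and count where the requests land."""
    return collections.Counter(pick_server(i) for i in range(n_requests))

print(apply_load(1000))  # Counter({'server-a': 1000}) - nothing reaches server-b

A work-around that throttles or diverts traffic before it reaches the balancer would hide exactly this kind of skew, which is why removing it exposed the problem.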

In short, if I had used the work-around (that is: expected usage), I might never have seen the failure. I've seen this effect before, and I suspect many testers have. By moving outside the 'safe operating parameters' you discover 'stuff': not only what can go wrong at the extremes, but what might happen in normal, run-of-the-mill usage. I'm deliberately saying 'stuff' here because we don't know in advance what it's going to be. I am building new knowledge through experience.

Before I went on my 'voyage of discovery', I had certain preconceptions. I also had, it turns out, great gaps in my knowledge. By breaking the rules - or realising they don't apply and only hinder investigation - I put myself in a better position to learn more.

On a related note, some mainstream companies (OK... Apple) might benefit from some similar testing - outside the specifications. It might help them avoid negative publicity.
