Recently, I was testing a new feature for a client. It had a known bug, which I'd found in prior testing and for which we'd figured out a workaround. I was now performing further testing of the feature, hoping to discover more issues and understand how it behaves a little better.
By this time, the workaround had become the norm - the expected mode of operation for the feature. Essentially, this 'bug' had been found and was now fixed. Time had moved on. So what did I do? I ignored the workaround, applied my 'test load' to the system and activated the new feature. The failure was somewhat spectacular: a short while later the entire system was inoperable and several servers needed restarting.
This was interesting. If I'd had expectations of what would happen, they would have been for something simpler, less severe and closer to what had happened when I first found the bug that required the workaround.
After some investigation, I found the likely cause. The system had a misconfiguration causing load to be incorrectly balanced: everything was being sent to only one of two servers. Speaking to the programmers, it became clear that the issue was caused by the code used in the workaround. Not only that, the workaround had helped to mask the issue - because it protected the servers from the load!
In short, if I had used the workaround (that is, expected usage) I might never have seen the failure. I've seen this effect before, and I suspect many testers have. By moving outside the 'safe operating parameters' you discover 'stuff': not only what can go wrong at the extremes, but what might happen in normal, run-of-the-mill usage. I'm deliberately saying 'stuff' here because we don't know what it's going to be. I was building new knowledge through experience.
Before I went on my 'voyage of discovery', I had certain preconceptions. I also had, it turns out, great gaps in my knowledge. By breaking the rules - or realising they don't apply and only hinder investigation - I put myself in a better position to learn more.
On a related note, some mainstream companies (OK... Apple) might benefit from similar testing, outside the specifications. It might help them avoid negative publicity.