Conspicuous in their absence

If you're a tester then you'll no doubt have heard phrases to the effect of "That's pretty unlikely", "Our users don't do that" or "That's a fairly minor browser". It's been blogged about before, and elsewhere. The argument is that many users are niche, novice, confused, or from different backgrounds / viewpoints / languages. These are realistic and probably correct hypotheses, for many situations.

From my experience, that's often where the discussion ends: someone makes a judgement call, and the issue is fixed, mitigated or ignored. More often than not, it's ignored. That decision should probably be a business decision; it's their money. But can they make such a decision safely? We are asking for their consent to 'operate' or 'not operate' on their software. To come to the right decision, they need to be fully informed. That is: are we sure the issue is indeed rare? Are they making a properly informed decision?

For example, suppose the issue is that a website has several serious problems when viewed in a particular web browser, but not in a more 'mainstream' browser. When this issue is presented to the decision maker, how could it be phrased?

A) Users of Browser XYZ ... can't play/view the video
B) A browser used by < 1% of our users ... can't play/view the video

Option (B) appears to give more information. But we are also introducing a reporting bias. Those users may only make up 1% of our users because the video doesn't work. They tried to use the site but gave up in frustration, or found a competitor's site had fully working video, and so took their custom there.
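To make that reporting bias concrete, here is a minimal simulation sketch in Python (with entirely made-up numbers: the 10% real-world share and the 92% bounce rate are assumptions for illustration, not measurements). It shows how a broken feature can make a browser's measured share look like "< 1%" even when its true share is far higher.

```python
import random

random.seed(42)

TRUE_XYZ_SHARE = 0.10    # hypothetical real-world share of Browser XYZ
BOUNCE_IF_BROKEN = 0.92  # hypothetical fraction of XYZ users who give up

visitors = 100_000
recorded = {"xyz": 0, "other": 0}

for _ in range(visitors):
    if random.random() < TRUE_XYZ_SHARE:
        # The video is broken for these users; most leave before our
        # analytics ever records them as 'real' users.
        if random.random() >= BOUNCE_IF_BROKEN:
            recorded["xyz"] += 1
    else:
        recorded["other"] += 1

total = recorded["xyz"] + recorded["other"]
print(f"Measured XYZ share: {recorded['xyz'] / total:.1%}")  # ~0.9%, not 10%
```

The stats honestly report what they saw; the bias comes from who never made it into the stats at all.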

Whenever we try to quantify users' behaviour as it appears to us, we need to remember that we are not seeing the full picture; rather, we are glimpsing just the tip of the iceberg. Users probably haven't complained about how the system crashes when they use that feature, because they've learned not to use that button "as it's flakey". They'd love to use that button, if only it worked.

This survivorship bias is endemic in the world around us, not just in software development. How many times have you seen adverts that read something like "90% of our customers would recommend us to a friend!"? The adverts fail to mention that most of the customers ran screaming to a competitor, or failed even to get through a tortuous ordering process, leaving behind those who love the one working feature. Now that those 'disgruntled users' are out of the picture, the few remaining customers may well be happy.

Many companies even make it harder still to get the feedback they need. Rather than a Help page or Help button providing an easy-to-find web form for submitting problems or questions, they hide or remove this functionality altogether. That's free testing, by real users, providing details of actual real-world bugs and requirements, being ignored in the belief that this saves the company money.

From a testing standpoint, we provide information, and it's important to provide not only the facts, but also some context and explanation as to how the issue reports might relate to the real world. For the example above, there is an option (C): iPhone users won't be able to view the video. Or: these users make up 1% of our users, but Google, Microsoft, etc. report them at 10% of theirs. Why don't we see all of those users?
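As a rough way to frame that last question, here is a back-of-envelope sketch (the function name is mine, and it assumes the external benchmark reflects our true potential audience):

```python
def estimated_missing_fraction(observed_share: float, benchmark_share: float) -> float:
    """Rough estimate of the fraction of a browser's potential users we never see."""
    # Compare odds rather than raw shares: the denominator of our observed
    # share already excludes the users who were driven away, which inflates
    # every other browser's apparent share.
    observed_odds = observed_share / (1 - observed_share)
    benchmark_odds = benchmark_share / (1 - benchmark_share)
    return 1 - observed_odds / benchmark_odds

# With the hypothetical figures from the text: 1% observed vs 10% benchmark.
print(f"{estimated_missing_fraction(0.01, 0.10):.0%} of those users may be missing")
# -> roughly 91%
```

A gap that large is not proof of a bug, but it is exactly the kind of context a decision maker needs before dismissing an issue as 'rare'.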

Comments

  1. Splendid piece, as usual.

A related bias is in thinking about the symptom as being the problem, when the problem is something poorly understood and potentially far bigger. (I wrote about that here.)

Mark Federman wrote a wonderful piece related to your notes on survivorship bias. You can find that here.

    ---Michael B.

  2. It's hard for some stakeholders to listen to testers when profits are louder than our concerns.

The second link in the first comment gives a 404 error because it has a quote symbol at the end. With the quote removed, this seems to be the correct link:
    http://individual.utoronto.ca/markfederman/VoiceoftheCustomer.pdf

