
Believing you don't know

People want to believe. If you are a tester, you've probably seen this in your work. A new build of the software has been delivered. "Just check it works," you're told. It's 'a given' for most people that the software will work. This isn't unusual; it's normal human behaviour. Research suggests that it's in our nature to believe first, and only afterwards, given the opportunity, to have doubt cast upon those beliefs.

We see this problem everywhere in our daily lives. Quack remedies thrive on our need to believe there is a simple [sugar] pill, or even an mp3 file, that will solve our medical problems. Software, like medicine, is meant to solve problems, to make our lives or businesses better and healthier. Software releases are often presented to us as a one-build solution to a missing feature or a nasty bug in the code.

As teams, we often underestimate the 'unknown' areas of our work. We frequently underestimate the time taken to test and fix the features we create. I suspect the more 'unknowns' we can think of, the less we actually prepare for them. We fail to see the link between the idea of an 'unknown' and its inherent ambiguity (it's unknown for a reason: probably the new function is not easy to 'know' or understand). As such, many development projects deliberately avoid planning for the fixing of bugs, refusing to accept that bugs can be accounted for until they 'exist', and not realising that ambiguity, issues and bugs almost always do get uncovered, and therefore already 'exist' before a single line of code is written.

Even disciplined teams will often slip into their 'belief system' when the names are changed. For example, a new feature may be tested thoroughly, but a series of major 'bug-fixes' to the same system skips through testing with barely a glance. For all we know, the programmer checked in the 'old code' and the feature is now completely gone! Even if you have an acceptance test in place, every execution path not covered by that acceptance test may be broken!
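To make that concrete, here is a toy sketch (a hypothetical example, invented for illustration) of how a 'bug-fix' can pass the one acceptance check in place while silently breaking an uncovered path:

```python
def shipping_cost(weight_kg, express=False):
    """Hypothetical pricing function; a recent 'bug-fix' touched both branches."""
    if express:
        return 10 + 2 * weight_kg   # the path our acceptance check exercises
    return 5 * weight_kg            # regressed path: the rate used to be 2

# The only acceptance check covers the express path, so belief is confirmed...
assert shipping_cost(3, express=True) == 16

# ...while the standard path has quietly changed behaviour.
print(shipping_cost(3))  # 15, where the old code returned 6
```

The check passes, the build is declared 'working', and the regression ships: exactly the believe-first behaviour described above.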

In the software business we are starting to be seen as quacks. We often churn out half-baked remedies that don't meet the customer's needs. The customer can't even look at the label and see the possible side effects, because the testing has been so cursory and confirmatory that we didn't find any issues (there it is again: lots of unknowns = no problems). If a doctor gave you a potent medicine, and the label didn't mention a single possible side effect, would you really believe it was 100% safe?

The same applies to software. Until we question the hypothesis that it's all going to be OK, until we put our proposed solutions through a 'trial' with testing, we will remain charlatans. This questioning process can't be a series of predefined checks, much as a medical trial should not merely check that a drug like Thalidomide is -only- a potent antiemetic, or that an antibiotic like Penicillin was -only- 'good'. We'd hope the trial would check the drug for other potential problems, look deeper and learn about its good and bad attributes, building and testing hypotheses along the way.
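The gap between a predefined check and a questioning 'trial' can be sketched in a few lines of Python (the dosing function and its flaw are invented for illustration):

```python
def dose_for(weight_kg):
    """Hypothetical dose calculation with an unexamined edge: non-positive weights."""
    return round(weight_kg * 1.5)

# Confirmatory check: one expected input, one expected answer - the drug is 'good'.
assert dose_for(80) == 120

# A questioning trial probes beyond the expected case, hunting for side effects
# and building hypotheses about where the function might misbehave.
suspicious = [w for w in (0, -5, 200, 10000) if dose_for(w) <= 0]
print(suspicious)  # [0, -5]: zero and negative weights produce nonsense doses
```

The confirmatory check only ever asserts the belief; the probing loop sets out to cast doubt on it.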

Comments

  1. Pete, you are spot on.
    Making the comparison between sw development and quacks is going to put some people off. Is it that bad? Yes, I'm afraid it is!
    Software engineering is in a crisis - we're taught to think everything is possible. Even testing to ensure bug-free software.

  2. Good point Anders. Yes, there's a failure to know our own limits and our own flaws. I studied computer science, but looking back now I can see there was very little 'science' involved in the course.

    One of the most useful minor courses I took was in cognitive psychology; I think that has helped me at least as much as the technically focused CompSci courses.

    The introduction to how visual illusions and our assumptions unconsciously affect our perception is very relevant to software testing. Knowing what humans are capable of, and how we are 'incapable', is essential.

  3. Could we perhaps label this as 'belief bias' to complement the plethora of other biases out there?

