
Believing you don't know

People want to believe. If you are a tester, you've probably seen this in your work. A new build of the software has been delivered. "Just check it works," you're told. For most people it's 'a given' that the software will work. This isn't unusual; it's normal human behaviour. Research suggests it's in our nature to believe first, and only then, given the opportunity, to have doubt cast upon those beliefs.

We see this problem everywhere in our daily lives. Quack remedies thrive on our need to believe there is a simple [sugar] pill, or even an mp3 file, that will solve our medical problems. Software, like medicine, is meant to solve problems and make our lives or businesses better, healthier. Software releases are often presented to us as a one-build solution to a missing feature or a nasty bug in the code.

As teams, we often underestimate the 'unknown' areas of our work. We frequently underestimate the time it takes to test and fix the features we create. I suspect that the more 'unknowns' we can think of, the less we actually prepare for them. We fail to see the link between the idea of an 'unknown' and its inherent ambiguity (it's unknown for a reason: the new function is probably not easy to 'know' or understand). As such, many development projects deliberately avoid planning for the fixing of bugs, refusing to accept that bugs can be accounted for until they 'exist', and not realising that ambiguity, issues and bugs almost always do get uncovered, and therefore already 'exist' before a single line of code is written.

Even disciplined teams will often slip into their 'belief system' when the names are changed. For example, a new feature may be tested thoroughly, while a series of major 'bug-fixes' to the same system skips through testing with barely a glance. For all we know, the programmer checked in the 'old code' and the feature is now completely gone! Even if you have an acceptance test in place, every execution path that isn't acceptance-tested may be broken!
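As a small sketch of that last point (the function and its discount codes are invented for illustration, not from any real system), here is a 'bug-fix' where the one acceptance-tested path still passes while another, untested path is now broken:

```python
def apply_discount(price, code):
    """'Fixed' discount calculation, checked in after a bug report."""
    if code == "WELCOME10":
        # The acceptance-tested path: a 10% discount, still correct.
        return round(price * 0.9, 2)
    # Regression: the old code returned the unchanged price when no
    # (or an unknown) code was given; the 'fix' now raises instead.
    raise ValueError("unknown discount code")

def acceptance_check():
    # The single check the team runs on the new build: it passes.
    assert apply_discount(100.0, "WELCOME10") == 90.0
    return True
```

`acceptance_check()` happily returns `True`, yet any customer without a discount code now hits an error, because that execution path was never part of the check.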

In the software business we are starting to be seen as quacks. We often churn out half-baked remedies that don't meet the customer's needs. The customer can't even look at the label and see the possible side effects. The testing has been so cursory and confirmatory that we didn't find any issues (there it is again: lots of unknowns = no problems). If a doctor gave you a potent medicine, and the label didn't mention a single possible side effect, would you really believe it was 100% safe?

The same applies to software... Until we question the hypothesis that it's all going to be OK, until we put our proposed solutions through a 'trial' with testing, we will remain charlatans. This questioning process can't be a series of predefined checks, just as a medical trial should not merely check that a drug like Thalidomide is -only- a potent antiemetic, or that an antibiotic like Penicillin was -only- 'good'. We'd hope the trial would check the drug for other potential problems, look deeper, and learn about its good and bad attributes by building and testing hypotheses along the way.

Comments

  1. Pete, you are spot on.
    Making the comparison between sw development and quacks is going to put some people off. Is it that bad? Yes, I'm afraid it is!
    Software engineering is in a crisis - we're taught to think everything is possible. Even testing to ensure bug-free software.

  2. Good point Anders. Yes, there's a failure to know our own limits and our own flaws. I studied computer science, but looking back now I can see there was very little 'science' involved in the course.

    One of the most useful minor courses I took was in cognitive psychology; I think that has helped me at least as much as the technically focused CompSci courses.

    The introduction to how visual illusions and our assumptions unconsciously affect our perception is very relevant to software testing. Knowing what humans are capable of, and how we are 'incapable', is essential.

  3. Could we perhaps label this as 'belief bias' to complement the plethora of other biases out there?


