
Believing you don't know

People want to believe. If you are a tester, you've probably seen this in your work. A new build of the software has been delivered. "Just check it works", you're told. It's 'a given' for most people that the software will work. This isn't unusual; it's normal human behaviour. Research suggests that it's in our nature to believe first. Only later, given the opportunity, might we have doubt cast upon those beliefs.

We see this problem everywhere in our daily lives. Quack remedies thrive on our need to believe there is a simple [sugar] pill or even an mp3 file that will solve our medical problems. Software, like medicine, is meant to solve problems, to make our lives or businesses better, healthier. Software releases are often presented to us as a one-build solution to a missing feature or a nasty bug in the code.

As teams, we often underestimate the 'unknown' areas of our work. We frequently underestimate the time taken to test and fix the features we create. I suspect that the more 'unknowns' we can think of, the less we actually prepare for them. We fail to see the link between the idea of an 'unknown' and its inherent ambiguity (it's unknown for a reason: the new function is probably not easy to 'know' or understand). As such, many development projects deliberately avoid planning for the fixing of bugs, refusing to accept that bugs can be accounted for until they 'exist', and not realising that ambiguity, issues and bugs almost always do get uncovered, and therefore already 'exist' before a single line of code is written.

Even disciplined teams will often slip into their 'belief system' when the names are changed. For example, a new feature may be tested thoroughly, but a series of major 'bug-fixes' to the same system skips through testing with barely a glance. For all we know, the programmer checked in the 'old code' and the feature is now completely gone! Even if you have an acceptance test in place, every execution path that isn't covered by an acceptance test may be broken!
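To make that last point concrete, here is a minimal sketch (all names and numbers are invented for illustration): a 'bug-fixed' build where the single acceptance test still passes, so we believe the build works, while an unchecked execution path is silently broken.

```python
# Hypothetical example: one acceptance test, two execution paths.

def apply_discount(price, customer_type):
    """Return the price after any applicable discount."""
    if customer_type == "member":
        # The path our acceptance test covers: members get 10% off.
        return round(price * 0.9, 2)
    # Regression from the 'bug-fix': guests used to pay the full price,
    # but the hurried change now falls through and returns nothing.
    return None

def acceptance_test():
    # The only automated check we run on each new build.
    assert apply_discount(100.0, "member") == 90.0
    return "PASS"

print(acceptance_test())                # the test passes...
print(apply_discount(100.0, "guest"))   # ...yet this path is broken
```

The acceptance test gives us a comforting green light, but it only exercises one path; the belief that "the build works" rests on everything it never looked at.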

In the software business we are starting to be seen as quacks. We often churn out half-baked remedies that don't meet the customer's needs. The customer can't even look at the label and see the possible side effects. The testing has been so cursory and confirmatory that we didn't find any issues (there it is again: lots of unknowns = no problems). If a doctor gave you a potent medicine, and the label didn't mention a single possible side effect, would you really believe it was 100% safe?

The same applies to software... Until we question the hypothesis that it's all going to be OK, and put our proposed solutions through a 'trial' with testing, we will remain charlatans. This questioning process can't be a series of predefined checks, just as a medical trial should not merely check that a drug like Thalidomide is -only- a potent antiemetic, or that an antibiotic like Penicillin was -only- 'good'. We'd hope the trial would check the drug for other potential problems, look deeper, and learn about its good and bad attributes, building and testing hypotheses along the way.

Comments

  1. Pete, you are spot on.
    Making the comparison between sw development and quacks is going to put some people off. Is it that bad? Yes, I'm afraid it is!
    Software engineering is in a crisis - we're taught to think everything is possible. Even testing to ensure bug-free software.

  2. Good point, Anders. Yes, there's a failure to know our own limits and our own flaws. I studied computer science, but looking back now I can see there was very little 'science' involved in the course.

    One of the most useful minor courses I took was in cognitive psychology, I think that has helped me at least as much as the technically focused CompSci courses.

    The introduction to how visual illusions and our assumptions unconsciously affect our perception is very relevant to software testing. Knowing what humans are capable of, and how we are 'incapable', is essential.

  3. Could we perhaps label this as 'belief bias' to complement the plethora of other biases out there?

