
Why so negative?

Have you ever had trouble explaining to a non-tester why you appear to be intent on breaking their software? It can be difficult to convey why that work is important. So I thought a video might help...



If you want to read more about the scientific method, check out the hunky-dory hypothesis.

Comments

  1. It's an excellent point and a wonderful way of showing it, Pete. A few refinements:

    1) We don't break software; the software was broken when we got it.

    2) We don't create tests designed to cause failure; we create tests designed to expose the failures that are lurking.

    3) The illusion that the software wasn't broken and the illusion that we're creating failure are among the most important illusions we testers need to dispel.

    I'm delighted at the steady stream of excellent posts, and especially chuffed that it started to flow just after the Rapid Software Testing course in London. That was a rare group!

    ---Michael B.

  2. Thanks Michael, and thank you for your support - yes, the RST course definitely helped motivate me! I recommend the course to testers, programmers and project managers!

    I agree with your main point - that the software is essentially broken before it reaches the tester. The tester finds out that these problems are present in the system, and reports them.

    1) In the blurb, when I refer to breaking the software, I'm describing how the process appears to others, i.e. "to a non-tester why you appear to be intent on breaking their...".
    I tend not to use the phrase myself, except lightheartedly.

    2 & 3) I'm not so sure about these... For example, a judgement, coding or configuration mistake was made before the system is examined by the tester, but the system may not 'fail' until we perform certain actions. By 'fail' I mean: it displeases or confuses the user, performs slowly, crashes, loses data, etc.

    The Silver Bridge incident springs to mind (http://en.wikipedia.org/wiki/Silver_Bridge#Wreckage_analysis). A contributing factor in the bridge's 'failure' was a problem in the manufacture of a constituent part. Although this problem was in the system for many years, along with others such as a lack of redundancy, the bridge did not 'fail' until December 15, 1967.

    If we were testing such a system, might we not add higher-than-expected load in an attempt to 'cause a failure'?

    Though I can see that this engineering-style language is far from a perfect fit in a software setting. Issues such as corrosion and decay don't apply, though unplanned-for user load and changes in usage do. I'm going to think about this...


