
Why so negative?

Have you ever had trouble explaining to a non-tester why you appear to be intent on breaking their software? It can be hard to convey why that work is important, so I thought a video might help...

[Embedded video]

If you want to read more about the scientific method, check out the hunky-dory hypothesis.

Comments

  1. It's an excellent point and a wonderful way of showing it, Pete. A few refinements:

    1) We don't break software; the software was broken when we got it.

    2) We don't create tests designed to cause failure; we create tests designed to expose the failures that are lurking.

    3) The illusion that the software wasn't broken and the illusion that we're creating failure are among the most important illusions we testers need to dispel.

    I'm delighted at the steady stream of excellent posts, and especially chuffed that it started to flow just after the Rapid Software Testing course in London. That was a rare group!

    ---Michael B.

  2. Thanks for your support, Michael - and yes, the RST course definitely helped motivate me! I recommend the course to testers, programmers and project managers!

    I agree with your main point - that the software is essentially broken before it reaches the tester. The tester discovers that these problems are present in the system, and reports them.

    1) In the blurb, when I refer to breaking the software, I'm describing how the process appears to others, i.e.: "to a non-tester why you appear to be intent on breaking their..."
    I tend not to use the phrase myself, except lightheartedly.

    2&3) I'm not so sure about these... For example: a judgement, coding or configuration mistake is made before the system is examined by the tester, but the system may not 'fail' until we perform certain actions. By 'fail' I'm thinking: displeases or confuses the user, performs slowly, crashes or loses data, etc.
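
    To put the same idea in code - a minimal, hypothetical Python sketch of my own - the mistake exists from the moment the code is written, but nothing 'fails' until a particular action exercises it:

    ```python
    def average(values):
        # Mistake made at coding time: no guard for an empty list.
        # The defect is present from day one, yet nothing has 'failed'.
        return sum(values) / len(values)

    # Routine usage never exercises the mistake, so no failure is observed.
    assert average([3, 4, 5]) == 4.0

    # A test that deliberately probes the boundary exposes the failure
    # that was lurking all along.
    try:
        average([])
    except ZeroDivisionError:
        print("Latent defect exposed: average() fails on an empty list")
    ```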

    The incident on the Silver Bridge springs to mind (http://en.wikipedia.org/wiki/Silver_Bridge#Wreckage_analysis). A contributing factor in the bridge's 'failure' was a problem in the manufacture of a constituent part. Although this problem was in the system for many years, along with others such as a lack of redundancy, the bridge did not 'fail' until December 15, 1967.

    If we were testing such a system, might we not add higher-than-expected load in an attempt to 'cause a failure'?

    That said, I can see that this engineering-style language is far from a perfect fit in a software setting. Issues such as corrosion and decay don't apply, though unplanned-for user load and changes in usage do. I'm going to think about this...
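
    On the load question above, here's a minimal, hypothetical Python sketch of a defect that sits quietly in the code until higher-than-expected concurrency makes it fail - an unlocked read-modify-write race:

    ```python
    import threading

    class Counter:
        # Mistake made at coding time: a read-modify-write with no lock.
        # Under light, single-threaded use the race never bites.
        def __init__(self):
            self.value = 0

        def increment(self):
            current = self.value       # read...
            self.value = current + 1   # ...then write, unguarded

    def worker(counter, iterations):
        for _ in range(iterations):
            counter.increment()

    counter = Counter()
    threads = [threading.Thread(target=worker, args=(counter, 100_000))
               for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Under higher-than-expected concurrent load, interleaved reads and
    # writes typically lose updates - the latent defect finally 'fails'.
    print(f"Expected 800000, got {counter.value}")
    ```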

