
Posts

If it's not good testing, it's not good regression testing either.

Pick a coin from your pocket and hold it at arm's length. Take a good look. Now take another one of the same denomination and hold it out at arm's length as before. Based on your observations alone, can you say they are identical? Let's go a step further. If someone had given you one coin to look at, then exchanged it for another, could you have determined whether they are the same or different coins? Maybe, yes? If the differences had been large enough, e.g. one coin was heavily tarnished or scratched, then the different coins would be identifiable. Or if you'd been given the opportunity to examine the coin using magnifying equipment, you probably could have found differences. But let's assume our only test was a standard set of checks, i.e. viewing at arm's length and comparing what we see with our notes/records. It's better than nothing; I would see some differences, and some might be important ones. For example, if my next coin was blank, I might have suspected an issue with…

The Mythical Standard Build

Do you hear phrases like "All our users use [insert some technology]" spoken in your office? Or possibly "We have a 'corporate standard' desktop"? I have, a lot. I have since my first job, back in the '90s. It's a commonly held belief in most of the client companies I've worked with. Programmers, testers, project managers and product owners frequently hold faith in the standard build. It is a matter of faith, often based on little more than wishful thinking or, at best, very loose 'standards'. The problem isn't purely one of client machines or end-users. I've often seen servers defined as 'clones' that in fact have quite different properties, e.g. different versions of Java or application servers, or even different system times. Blind faith in these standard systems has caught me and my colleagues out so many times that I now find myself instantly questioning the assumption, and encouraging others to do the same. Even in this…

Controlling software development

Do you ever feel like we do all this work and maybe we needn't have bothered? Things might have worked out without our intervention. Or that we are actually worse off, now, after the work? You're not alone. This is a common problem in any role where you need to investigate the effects of changes. What you're feeling is the lack of a control. A control is a view of the world without your work: an alternate view of the world where everything is the same except for your fix/hack/intervention. Controls behave like 3D TV; they let your mind's eye 'see' the effects by making them stand out from the background. They are commonly used in scientific and especially pharmaceutical research studies. They let the researchers know how effective a treatment was, compared with similar patients who received placebo pills (or an older, established medicine) rather than the new treatment. The researchers can tell whether, for example, a new flu remedy actually helped the patients…

Google testing blog comment...

I recently read a post on the Google Testing blog titled: How Google Tests Software - Part Three. I added a comment to the post, but that comment has yet to appear. I thought I'd post my comment here in the meantime. (I've added some links here, for the curious.) “I agree that 'quality' can not be 'tested in'. But the approach you describe appears to go ahead and attempt to do something just as, if not more, difficult. You suggest that a programmer will produce quality work by just coding 'better'. While a skilled and experienced programmer is capable of producing high quality software, who will tell them when they don't or can't? We are all potentially victims of the Dunning–Kruger effect, and as such we need co-workers to help. There are a host of biases that stop a programmer, product owner or project manager from questioning their work, the confirmation and congruence biases to name just two. These are magnified by group-think…

2.2250738585072012e-308

Meet my new friend, 2.2250738585072012e-308. We've been hanging out recently. If you've not heard of him, he's about ten years old, but that's pretty old in [dog and in] software years. He's getting pretty famous in his old age, but he had humble beginnings as a lowly bug report on a Sun Microsystems website. It's rumoured he was first discovered back in 2001, but his big break didn't come until recently, when it was realised that he has the potential to be a key component of a Denial of Service attack that could bring down many Java-based systems [that accept floating point numbers as input]. This includes commonplace application servers like Tomcat, which accept floating point numbers as part of the HTTP protocol. 2.2250738585072012e-308 has now been placed firmly in my mental bag of tricks, along with divide by zero, 2^32, null, imaginary numbers, localised floats and all the others that routinely get brought out to help me test and investigate software…
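For the curious, here is a minimal sketch (my own, not from the original post) of how the bug shows itself. On an unpatched JVM, simply parsing that string hangs the calling thread, which is why any server that converted untrusted input to a double was exposed. The class name and the two-second timeout are arbitrary choices of mine; the timeout is only there so the demo itself doesn't lock up.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DoubleOfDeath {
    // The literal string is what matters: the bug is in parsing it, not in the value itself.
    static final String POISON = "2.2250738585072012e-308";

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Double> parse = pool.submit(new Callable<Double>() {
            public Double call() {
                return Double.parseDouble(POISON);
            }
        });
        try {
            // On a patched JVM this returns almost instantly.
            System.out.println("Parsed OK: " + parse.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException hung) {
            // On an unpatched JVM the parsing thread spins forever. An attacker
            // only needs the server to call Double.parseDouble on their input,
            // e.g. a quality value in an HTTP Accept header.
            System.out.println("Parse did not return - this JVM looks vulnerable.");
            parse.cancel(true);
        } finally {
            pool.shutdownNow();
        }
    }
}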

Why so negative?

Have you ever had trouble explaining to a non-tester why you appear to be intent on breaking their software? It can be difficult to explain why it's important. So I thought a video might help... If you want to read more about the scientific method, check out the hunky-dory hypothesis.

Believing you don't know

People want to believe. If you are a tester then you've probably seen this in your work. A new build of the software has been delivered. "Just check it works", you're told. It's 'a given' for most people that the software will work. This isn't unusual; it's normal human behaviour. Research has suggested that it's in our nature to believe first. Then, given the opportunity, we might have doubt cast upon those beliefs. We see this problem everywhere in our daily lives. Quack remedies thrive on our need to believe there is a simple [sugar] pill or even an mp3 file that will solve our medical problems. Software, like medicine, is meant to solve problems, make our lives or businesses better, healthier. Software releases are often presented to us as a one-build solution to a missing feature or a nasty bug in the code. As teams, we often under-estimate the 'unknown' areas of our work. We frequently under-estimate the time taken to…