

Showing posts with the label testing

The Like-Live Paradox

I was recently struck by a glaring difference between how I and a programmer prepared for testing. Unlike the majority of the testing I am involved in, this particular testing 'phase' had to be scheduled in advance and we couldn't "just do it". This also meant we had more time to prepare and plan than we typically do. This 'waiting period' had its uses. We had time to create tools that might be useful and to check the configuration of the systems we would be testing. The team, familiar with the concepts of exploratory testing, were comfortable with an approach that meant we did not spend the majority of the time pre-scripting tests [be they coded or in a spreadsheet etc.]. We did, however, build a high-level checklist of areas to test and used this to drive our program of tool and configuration checking/fixing/building. The key difference I observed was the absolute nature of the programmer's comparisons between our test systems and our live production systems…

Wrong in front of you.

In 2008 I attended GTAC in Seattle, a conference devoted to the use of automation in software testing. Since their first in London in 2006, Google have been running roughly one a year, in various locations around the world. This post isn't really about the conference, though; it's about a realisation I had the day after it. After the conference I went sight-seeing in Seattle. I rode the short Simpsons-like Monorail and took a lift up the Jetsons-like Space Needle. I enjoyed my time there, and found the people very friendly. The conference had been very technology-focused; many (but granted, not all) speakers concentrated on tools and how to use them. While useful, tools are only part of testing - and even then they typically just support testing rather than "do" testing. While I was at the top of the Space Needle, I took out my phone and like a good tourist started taking pictures. I'd typically take a couple of pictures then…

Random text tool

I recently blogged about some of the tools I use, and how some are so useful I keep using them. As I mentioned, randomness is pretty useful, and I have tools to help me generate random text. A few of my readers requested a copy of my simple random text generating script, so I've decided to open it up for everyone to use and test. It will have bugs, like all software; please send details and I'll try to fix them. If you are interested in what UTF-8 is and what all that Unicode stuff is about, there is a great article by Joel Spolsky that explains all, and the Wikipedia page is OK. To use it... First, download the script; it's on GitHub. The script is fairly short and is all in one file. You don't have to 'install' it; it's not a gem. Second, make sure you have Ruby version 1.9 or greater. You need version 1.9, because Ruby didn't handle UTF-8 well in older versions. Third, run the script like this: ruby fuzzutf8.rb That will give you some usage…
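For the curious, the core idea is small enough to sketch here. This isn't the real fuzzutf8.rb, just a minimal illustration of the approach it's built on: pick random Unicode codepoints (avoiding the surrogate range, which isn't valid in UTF-8) and build a string from them.

    # Minimal sketch of a random UTF-8 generator - not the real fuzzutf8.rb.
    # Needs Ruby 1.9+, when Ruby's UTF-8 handling became usable.
    def random_utf8(length)
      (1..length).map do
        cp = rand(0x10FFFF + 1)                          # any Unicode codepoint
        cp = rand(0xD800) if (0xD800..0xDFFF).cover?(cp) # surrogates aren't valid UTF-8
        cp.chr(Encoding::UTF_8)
      end.join
    end

    puts random_utf8(20)

Feed the output into anything that claims to accept 'text', and see what happens.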

2 minutes on Bing Maps

Consistency is one thing I test for in software. For example, if software refers to something by a particular name, then [usually] it should always refer to it by that name. Furthermore, when it uses that name, e.g. 'London Tube Map', I would expect to see such a map when I click to view it, and not another kind of map, e.g. a street map. Conventions. These are also an important part of software. People will [usually] expect your software to use conventions that are appropriate for the field. For example, the traditional London Tube map is a schematic diagram, designed to show the relative positions of the stations rather than their geographic locations. Though sometimes it's actually useful to have geographic information, e.g. is Queensway (Central line) station very close to Bayswater (Circle line)? So if a map isn't using the schematic form, then the geographic form also has its uses. I would be surprised if I received a London Tube map that was neither schematic…

Tools

Do you ever examine what you carry around with you every day, and wonder if you actually use it? For example, in my pockets I've got a 'smart' phone, wallet (credit & debit cards, cash and ID), keys, Travelcard (Oyster) and some coins. Every now and then something gets added; if it's unlucky, it stays. Over the years I've noticed that the criterion for being kept is usually convenience or enablement. That is, the items that don't get chucked or deposited somewhere about my home are usually 'tools' that make other 'things' easier, like a smart-phone - I can just look up something or text someone at any time. I could just wait until I got back to my office, or see the person later, but it can be easier to act in the moment, and do it there and then. Enablement items are things that mean I -can- do things; without them, I'm stuck. For example: door keys. The smart-phone fits into this category also, if I want to meet up with someone at short notice…

A Fair Witness

About 10 years ago, I was working with a client who were in the process of developing a new ecommerce website. The new website and servers were designed to replace an entire existing suite of systems, and provide a platform for the company's future expansion. As you might imagine, the project was sprawling: new front-end servers, new middleware and a host of back-end business-to-business systems. The project had expanded during its course, as new areas of old functionality were identified and the business requested new systems to support new ventures. This isn't an unusual scenario for software development projects; it is for exactly this type of situation that many companies now look to agile methodologies for help. But don't worry, this isn't a rant about the benefits of one methodology over another. What interested me was how the project team members performed and viewed their testing efforts. Each build of the code would include new functionality, [hopefully]…

Fishing for bugs.

You probably don't know, but I'm keen on fishing (Honest! OK, maybe not, but bear with me...) I spend my free time by the river bank or out on the sea searching for 'the big one'. The big catch that'll stand as tall as me, and feed my family for a week. My dream is to be the guy standing next to his prize fish in that black and white picture behind the bar. Over the years, I've become reasonably skilled. I usually find a fish or two whenever I'm out on the imaginary water. I've learned where they live, where they spawn, and of course where's best to catch them. For example, there's a little bend in the river upstream from my home that has some great fishing spots. The overhanging rocks protect small fish from the predatory eyes of birds, and people. Of course, where there's small fish, there's usually the odd big fish or two. OK, let's imagine that fishing had a profitable side too, and wasn't just a [fictitious] hobby. For example…

If it's not good testing, it's not good regression testing either.

Pick a coin from your pocket, and hold it at arm's length. Take a good look. Now take another one of the same denomination and hold it out at arm's length as before. Based on your observations alone, can you say they are identical? Let's go a step further. If someone had given you one coin to look at, then exchanged it for another, could you have determined whether they are the same or different coins? Maybe, yes? If the differences had been large enough, e.g. one coin was heavily tarnished or scratched, then the different coins would be identifiable. Or if you'd been given the opportunity to examine the coin using magnifying equipment, you probably could have found differences. But let's assume our only test was a standard set of checks, i.e. viewing at arm's length and comparing what we see with our notes/records. It's better than nothing; I would see some differences, and some might be important ones. For example, if my next coin was blank I might have suspected an issue with…

The Mythical Standard Build

Do you hear phrases like "All our users use [insert some technology]" spoken in your office? Or possibly "We have a 'corporate standard' desktop"? I have, a lot. I have since my first job, back in the '90s. It's a commonly held belief in most of the client companies I've worked with. Programmers, testers, project managers and product owners frequently hold faith in the standard build. It -is- a matter of faith, often based on little more than wishful thinking or at best very loose 'standards'. The problem isn't purely one of client machines or end-users. I've often seen servers defined as 'clones' that in fact have quite different properties, e.g. different versions of Java or application servers, or even different system time. The blind faith in these standard systems has caught me and my colleagues out so many times that I now find myself instantly questioning the assumption, and encouraging others to do the same. Even in this…
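One cheap way to test the assumption, rather than take it on faith, is to fingerprint each 'identical' machine and diff the results. A hedged sketch - the checks shown are illustrative, not a complete list:

    # Run this on each 'clone' and compare the output. Differences in
    # versions or clock are exactly what the 'standard build' faith hides.
    java = begin
      `java -version 2>&1`.lines.first.to_s.strip
    rescue Errno::ENOENT
      'not installed'
    end

    puts({ host: `hostname`.strip,
           time: Time.now.utc, # 'even different time'
           ruby: RUBY_VERSION,
           java: java }.inspect)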

Controlling software development

Do you ever feel like we do all this work and maybe we needn't have bothered? Things might have worked out without our intervention. Or we are actually worse off, now, after the work? You're not alone. This is a common problem in any role where you need to investigate the effects of changes. What you're feeling is the lack of a control. A control is a view of the world without your work. It's an alternate view of the world where everything is the same except for your fix/hack/intervention. Controls behave like 3D TV: they let your mind's eye 'see' the effects, by making them stand out from the background. They are commonly used in scientific and especially pharmaceutical research studies. They let the researchers know how effective a treatment was, compared with similar patients who received placebo (or older, established medicine) pills rather than the new treatment. The researchers can tell whether, for example, a new flu remedy actually helped the patients…
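In software terms, a control can be as simple as running the same inputs through the system with and without your change and diffing the outputs. A minimal sketch, with toy stand-ins for the real before/after code:

    # Toy stand-ins for the system before and after our 'fix'.
    def without_fix(x); x * 2; end # the control: the world minus our work
    def with_fix(x);    x * 2; end # the treatment: identical here, on purpose

    inputs  = (1..100).to_a
    changed = inputs.reject { |x| without_fix(x) == with_fix(x) }

    # Without the control we might credit our work for anything we observe;
    # the diff makes the actual effect (here, none) stand out.
    puts changed.empty? ? 'no observable effect' : "#{changed.size} results differ"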

Google testing blog comment...

I recently read a post on the Google Testing blog titled: How Google Tests Software - Part Three. I added a comment to the post, but that comment has yet to appear. I thought I'd post my comment here in the meantime. (I've added some links here, for the curious.) “I agree that 'quality' can not be 'tested in'. But the approach you describe appears to go ahead and attempt to do something just as, if not more, difficult. You suggest that a programmer will produce quality work by just coding 'better'. While a skilled and experienced programmer is capable of producing high quality software, who will tell them when they don't or can't? We are all potentially victims of the Dunning–Kruger effect, and as such we need co-workers to help. There are a host of biases that stop a programmer, product owner or project manager from questioning their work - the confirmation and congruence biases, to name just two. These are magnified by group-think…

2.2250738585072012e-308

Meet my new friend 2.2250738585072012e-308. We've been hanging out recently. If you've not heard of him, he's about ten years old, but that's pretty old in [dog and in] software years. He's getting pretty famous in his old age, but he had humble beginnings as a lowly bug report on a Sun Microsystems website. It's rumoured he was first discovered back in 2001, but his big break didn't come until recently, when it was realised that he has the potential to be a key component of a Denial of Service attack that could bring down many Java-based systems [that accept floating point numbers as input]. This includes commonplace application servers like Tomcat, which accept floating point numbers as part of the HTTP protocol. 2.2250738585072012e-308 has now been placed firmly in my mental bag of tricks, along with divide-by-zero, 2^32, null, imaginary numbers, localised floats and all the others that routinely get brought out to help me test and investigate software. But…
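That bag of tricks is easy to keep in an executable form. A hedged sketch of such a list - the constant name and the exact choices are mine, not from any standard:

    # Numeric inputs that routinely earn their keep when testing parsers.
    TRICKY_NUMBERS = [
      '2.2250738585072012e-308', # hung Java's double parsing (CVE-2010-4476)
      '0', '-0', 'NaN', 'Infinity',
      (2**31).to_s, (2**32).to_s, # 32-bit signed/unsigned boundaries
      '1e309',                    # overflows an IEEE double to Infinity
      '0,5'                       # a localised float: comma as decimal separator
    ]

    TRICKY_NUMBERS.each { |n| puts n } # pipe these into whatever parses numbers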

Why so negative?

Have you ever had trouble explaining to a non-tester why you appear to be intent on breaking their software? It can be difficult to explain why it's important. So I thought a video might help... If you want to read more about the scientific method, check out the hunky-dory hypothesis.

Into the testing hinterland.

Why do we refer to our ancestors as cavemen? The evidence, of course! The cave paintings, the rubbish piles found in caves all round the world. It's simple: cavemen lived in caves, they painted on the walls and threw rubbish into the corner of the cave. Thousands of years later we find the evidence, demonstrating they lived in caves. Hence the moniker 'caveman'. How many caves have you seen? Seriously, how many have you seen or even heard of? Now I'm lucky: as a former resident of Nottingham [in the UK], I've at least heard of a few. But if you think about it, you probably haven't seen that many. Even assuming you've seen a fair few, how many were dry, spacious and safe enough for human habitation? As you can guess, my point is: there probably isn't a great selection of prime cave real estate available. It doesn't add up: the whole of mankind descended from cave-[dwelling] men? Before you roll your eyes, and think I'm some sort of Creationist…

ID Skeptic

At a client site, a few years ago, I had an interesting discussion with a 'senior programmer'. Our discussion centred on a configurable home-page. A user could decide what news or other information they wanted to display on their home page. They'd start off by being given a generic page, and the user could add or remove certain types of content to customise it. Once they saved the 'new look' site, their choices would be saved on the web-server. The company didn't want to force people to log in, or even make them sign up for an account. The goal was to keep it simple for the user. But they needed a way to uniquely identify the users, so they used an existing feature of the website. The first time a user came to the site, they were cookied with a 'unique' id 'code'. We could then use this identifier as a key in our database to store the details of how the users had configured their homepage to look. The testers reading this…
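The mechanism described is simple enough to sketch. This is my own minimal illustration; the names, the storage and the id scheme are assumptions, not details from the client's system:

    require 'securerandom'

    HOMEPAGES = {} # stands in for the server-side database

    # First visit: mint an id and 'cookie' the user with it.
    def id_for(cookies)
      cookies['uid'] ||= SecureRandom.hex(16) # 128 random bits
    end

    def save_layout(cookies, layout)
      HOMEPAGES[id_for(cookies)] = layout # the id is the database key
    end

    cookies = {} # stands in for the browser's cookie jar
    save_layout(cookies, %w[news weather tube-status])
    p HOMEPAGES

Whether the real ids were as 'unique' as hoped is, presumably, where the skepticism in the title comes in.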

The Hunky-dory Hypothesis

"If we just run it with this data, and it looks Ok, we'll know it works" the architect says expectantly, "Right?" "You're right, We might see 'it work'", "How would that help?" I answer. "Well errh, it works, so we can put it live tomorrow." We've seen this situation before; it can arrive in conversation like the above, or from a review of Acceptance Test results or in a host of other forms. The premise is: everything is fine, we've done the work - we have evidence everything is done and working. But, how can demonstrating that the application can 'work' help? How will seeing the acceptance test results as 'green' help? That might sound nonsensical, but seriously: how does it help our customer make the decision to ship [or not]? Or help them distribute people and resources better? It may seem that telling them 'its all hunky-dory' is news and good news at that, but it isn't. It

The arrogance of regression testing

Let's assume we know that our software is not perfect. How can it be? It's complex, mortals created it, and we don't have enough time to test every execution path and environment, so we could never be sure anyway. This is OK, this is normal; testers deal with this situation every day. This tends to be a typical scenario... Our team has been working on some new features. They're looking good, initial teething issues have been fixed, and the new features are considered worthwhile enough and bug-sparse enough to be released into the wild. This is where things can get a little awkward. The team members' opinions are often split across a wide spectrum. The relatively minor perceived impact of the work leads some to conclude that the work is ready for release as-is. Other team members, possibly twice-shy from previous 'minor change'-induced problems, argue for a comprehensive 'regression test' of the software. There is usually a range of views in between, suggesting, for example, only '…