
Posts

Showing posts from 2011

Wrong end of the stick

There's a story about air-force scientists during World War 2 that reflects an interesting concept about the things we see and how they can alter our assumptions. The story goes that the Allied bombers were suffering great losses during their air-raids over continental Europe. The Allied scientists got together and analysed the damage reports from the engineers tasked with fixing the planes after each raid. (One of the scientists working on these problems was Abraham Wald.) Here is an example of the sort of summary engineering report they might have been faced with. The report details the parts of the plane and what proportion of aeroplanes had been damaged in that area (this data is completely made up by me): 15% had damage to one or more engines, 25% had tail damage, 25% had damage to the nose and cockpit area, and 35% had damage to the fuselage. The aircraft engineers could only add extra armour to one part of the plane; any more armour would limit the aeroplane in other ways e.g

Testing, Testing, 1, 2, 3.

When I have a spare moment, I usually try and think about how to test something. In fact that's not true, what I actually do is test something. It might be an app on my phone, an online tool, a parking-ticket machine or a search engine. Usually it is whatever is to hand at the time. This is a good way to practice my skills, and can take as long as I have free. In fact having only moments is beneficial: you soon get better at finding more issues, more quickly. For example, a few moments ago I thought I'd test Google's currency converter. If you haven't seen it, it looks like this: You enter a value and two currencies in the format shown, and Google will give you an answer with great precision. (I haven't examined the accuracy.) Starting from this I varied the text slightly, using "euro" instead of "EUR", and also swapping "gbp" and "euro" to see how precedence affected the results. This seemed to behave as expected, but it did
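To give a flavour of that kind of input variation, here is a minimal Ruby sketch (Ruby being what I use for my other tools) that enumerates query variations to try by hand. The "amount from in to" query format and the currency spellings are my assumptions for illustration, not a statement of what Google actually accepts:

# A minimal sketch of the kind of input variation described above.
# The "<amount> <from> in <to>" query format and the currency names
# are assumptions for illustration only.
amounts    = ["100", "0", "-5", "0.001", "1e9"]
currencies = ["GBP", "gbp", "EUR", "euro", "Euros"]

queries = amounts.product(currencies, currencies).map do |amount, from, to|
  "#{amount} #{from} in #{to}"
end

# Print the variations so they can be tried by hand in the search box.
puts queries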

The Like-Live Paradox

I was recently struck by a glaring difference between how a programmer and I prepared for testing. Unlike the majority of the testing I am involved in, this particular testing 'phase' had to be scheduled in advance and we couldn't "just do it". This also meant we had more time to prepare and plan than we typically do. This 'waiting period' had its uses. We had time to create tools that might be useful and check the configuration of the systems we would be testing. The team, familiar with the concepts of exploratory testing, were comfortable with an approach that meant we did not spend the majority of the time pre-scripting tests [be they coded or in a spreadsheet etc]. We did however build a high-level checklist of areas to test and used this to drive our program of tool and configuration checking/fixing/building. The key difference I observed was the absolute nature of the programmer's comparisons between our test systems and our live-production syste

Wrong in front of you.

In 2008 I attended GTAC in Seattle, a conference devoted to the use of automation in software testing. Since their first in London in 2006, Google have been running about one a year, in various locations around the world. This post isn't really about the conference, it's about a realisation that I had the day after the conference. After the conference I went sight-seeing in Seattle. I rode the short Simpsons-like Monorail and took a lift up the Jetsons-like Space Needle. I enjoyed my time there, and found the people very friendly. The conference had been very technology focused; many (but granted, not all) speakers focused on tools and how to use them. While useful, the tools are only part of testing - and even then they typically just support testing rather than "do" testing. While I was at the top of the Space Needle, I took out my phone and like a good tourist started taking pictures. I'd typically take a couple of pictures then

Testing as War?

We are fighting an invincible opponent. The legions of bugs in our software far outnumber our attempts to find them all. Even the simplest of software releases inevitably contains a fifth column of hidden pre-existing bugs or quirks that, combined with our changes, could strike at any time. The question we need to answer as testers is: how can we win? Or at least, not lose this battle? Military examples and analogies can be useful in software testing, and not just those in reconnaissance. For example: the Millennium Challenge. This pre-Gulf War 2 military exercise pitted two forces against one another in the Middle East. In summary, the modern US military was fighting a rogue element in a smaller country. The vast resources of the western power should have faced few problems. But in fact the former US general playing the role of the 'rogue nation' trounced the western forces in a devastating blow that saw several warships sunk. How did the 'rogue' general do

Are you sure you've "completed" testing? A Guardian Content API example.

Testing doesn't complete; it might end, it might finish, but it doesn't complete. There's too much to test. If you ever need confirmation of this, test something, something that's been tested already. Better still, test a piece of software you know has been tested by someone you think is a brilliant tester. A good tester like you will still find new issues, ambiguities and bugs. That's because the complexity of modern software is huge: as well as all the potential code paths of your code, there are all the underlying code's paths and the near-infinite domain of data it might process. That's part of the beauty of testing, you have to be able to get a handle on this vast test space. That is, review a near-infinite test-space in a [very] finite time-frame. We are unable to give a complete picture of the product to our clients. But we are also free to find out new issues that have so far eluded others. In fact the consequences are potentially more drama

Is your test automation actually agile? A Guardian Content API example.

In my last post I discussed how test automation could be used to do things that I couldn't easily do unaided: in that example, executing thousands of news 'content searches' and helping me sort through them. With the help of some simple test automation I found some potential issues with the results returned by the REST API. In that case, I started out with the aim of implementing a tool. But your testing might not lead you that way; often your own hands-on investigation can find an issue, but you don't know how widespread it is. Is it a one-off curiosity, or a sign of something more systemic? Again, this is where test automation can help, and if done well, without being an implementation or maintenance burden. Many test automation efforts are blind to the very Agile idea of YAGNI, or You Ain't Gonna Need It. They often presume to know all that needs to be tested in advance, deciding to invest most of their time writing 'tests' blindly against a specific
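To make that concrete, here is a rough Ruby sketch of the 'how widespread is it?' kind of check, not the actual tool from the post. The endpoint, the parameters and the response fields are assumptions for illustration; the real Content API calls will differ:

# A rough sketch, not the tool from the post. The endpoint, the
# 'api-key' parameter and the response fields are illustrative
# assumptions; check the real Content API documentation before use.
require 'net/http'
require 'json'
require 'uri'

SEARCH_TERMS = %w[election cricket budget weather science]

SEARCH_TERMS.each do |term|
  uri = URI("http://content.guardianapis.com/search")
  uri.query = URI.encode_www_form("q" => term, "api-key" => "test")

  response = Net::HTTP.get_response(uri)
  unless response.is_a?(Net::HTTPSuccess)
    puts "#{term}: HTTP #{response.code}"
    next
  end

  parsed  = JSON.parse(response.body)
  results = (parsed["response"] || {})["results"] || []

  # Flag anything that looks odd, e.g. results with an empty title.
  untitled = results.select { |r| r["webTitle"].to_s.strip.empty? }
  puts "#{term}: #{results.size} results, #{untitled.size} with an empty title"
end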

Test automation that helps, A Guardian Content API example.

Have you ever had to test an API that's accessible over the internet? Or even one that's available internally within your organisation? They often take the form of a REST service (or similar) through which other software can easily access information in a machine-readable form. Even if you are not familiar with these APIs, you've probably heard of or seen the results of them. Some examples of APIs are the Twitter API, Flickr and the Guardian's Open Platform. Some examples of what people have built using the Flickr API are published on the Flickr site. Despite being 'machine-readable', they are often human readable too, which greatly helps you test and debug them. Companies use these APIs to ease the distribution of their content, encourage community and commercial development around their content, or simply to provide a clear and documentable line between their role as data-provider and where the consumer's role begins. When testing an API like the above, many
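For instance, here is a minimal Ruby sketch of fetching a single response and pretty-printing it so a human can eyeball it. The URL and the 'api-key' parameter are illustrative assumptions rather than the exact call used in the post:

# A minimal sketch of fetching one API response and making it easy
# for a human to read. The URL and 'api-key' parameter are assumptions
# for illustration, not the exact Content API call.
require 'net/http'
require 'json'
require 'uri'

uri = URI("http://content.guardianapis.com/search")
uri.query = URI.encode_www_form("q" => "testing", "api-key" => "test")

body = Net::HTTP.get(uri)

# The JSON is machine-readable, but pretty-printing it makes it
# human-readable enough to spot odd fields or missing data by eye.
puts JSON.pretty_generate(JSON.parse(body))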

Random text tool

I recently blogged about some of the tools I use, and how some are so useful I keep using them. As I mentioned, randomness is pretty useful, and I have tools to help me generate random text. A few of my readers requested a copy of my simple random text generating script, so I've decided to open it up for everyone to use and test. It will have bugs, like all software; please send details and I'll try to fix them. If you are interested in what UTF-8 is and what all that Unicode stuff is about, there is a great article by Joel Spolsky that explains all, and the Wikipedia page is OK. To use it... First, download the script, it's on GitHub. The script is fairly short and is all in one file. You don't have to 'install' it, it's not a GEM. Second, make sure you have Ruby version 1.9 or greater. You need version 1.9 because Ruby didn't handle UTF-8 well in older versions. Thirdly, run the script like this: ruby fuzzutf8.rb That will give you some usag
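If you just want a flavour of what the script does, here is a tiny sketch of the underlying idea: build a random UTF-8 string from a few interesting code-point ranges. It is not fuzzutf8.rb itself, and the ranges are just ones I've picked for illustration:

# Not fuzzutf8.rb itself -- a minimal sketch of the idea: build a random
# UTF-8 string from a few code-point ranges. Needs Ruby 1.9+ for sane
# UTF-8 string handling, as noted above.

RANGES = [
  0x0020..0x007E,   # printable ASCII
  0x00A1..0x00FF,   # Latin-1 supplement
  0x0400..0x04FF,   # Cyrillic
  0x4E00..0x4E50    # a small slice of CJK ideographs
]

def random_utf8(length)
  (1..length).map { RANGES.sample.to_a.sample.chr(Encoding::UTF_8) }.join
end

puts random_utf8(40)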

2 minutes on Bing Maps

Consistency is one thing I test for in software. For example, if software refers to something by a particular name, then [usually] it should always refer to it by that name. Furthermore, when it uses that name, e.g. 'London Tube Map', I would expect to see such a map when I click to view it, and not another kind of map, e.g. a street map. Conventions are also an important part of software. People will [usually] expect your software to use conventions that are appropriate for the field. For example, the traditional London Tube map is a schematic diagram, designed to show the relative positions of the stations rather than their geographic location. Though sometimes it's actually useful to have geographic information, e.g.: is Queensway (Central line) station very close to Bayswater (Circle line)? So if a map isn't using the schematic form, then the geographic form also has its uses. I would be surprised if I received a London Tube map that was neither schemati

Testing Mindset

Once upon a time there was a young and naive tester; he was new to the world of software testing. He often felt he didn't have what it took to be a tester. Sure, he found the odd bug, and he enjoyed his work, but he also often missed bugs, issues or problems. After a while, he admitted to himself that this was a problem, and decided to seek help. He stood up from his desk and walked over to his test manager's desk. His manager was wise and experienced. He was the Mr Miyagi of testing, and as such was always offering Zen-like advice to his team. A simple question about where the stapler had escaped to could turn into a somewhat cryptic series of haiku, leaving our young tester baffled. Our novice explained his problem, and his concerns that maybe he wasn't cut out for testing. The wise test manager smiled, thought for a moment and then opened his little Moleskine notebook. He turned carefully through the pages, settled on a page, looked up and said: "I over

Tools

Do you ever examine what you carry around with you every day, and wonder if you actually use it? For example, in my pockets I've got a 'smart' phone, wallet (credit & debit cards, cash and ID), keys, Travelcard (Oyster) and some coins. Every now and then something gets added, and if it's unlucky it stays. Over the years I've noticed that the criterion for being kept is usually convenience or enablement. That is, the items that don't get chucked or deposited somewhere about my home are usually 'tools' that make other 'things' easier, like a smart-phone - I can just look something up or text someone at any time. I could just wait until I got back to my office, or see the person later, but it can be easier to act in the moment, and do it there and then. Enablement items are things that mean I -can- do things; without them, I'm stuck. For example: door keys. The smart phone fits into this category also, if I want to meet up with someone at sh

A Fair Witness

About 10 years ago, I was working with a client who was in the process of developing a new ecommerce website. The new website and servers were designed to replace an entire existing suite of systems, and provide a platform for the company's future expansion. As you might imagine, the project was sprawling: new front-end servers, new middleware and a host of back-end business-to-business systems. The project had expanded during its course, as new areas of old functionality were identified and the business requested new systems to support new ventures. This isn't an unusual scenario for software development projects; it is for exactly this type of situation that many companies now look to agile methodologies for help. But don't worry, this isn't a rant about the benefits of one methodology over another. What interested me was how the project team members performed and viewed their testing efforts. Each build of the code would include new functionality, [hopefully]

Fishing for bugs.

You probably don't know, but I'm keen on fishing (Honest! OK, maybe not, but bear with me...). I spend my free time by the river bank or out on the sea, searching for 'the big one'. The big catch that'll stand as tall as me, and feed my family for a week. My dream is to be the guy standing next to his prize fish in that black and white picture behind the bar. Over the years, I've become reasonably skilled. I usually find a fish or two whenever I'm out on the imaginary water. I've learned where they live, where they spawn, and of course where's best to catch them. For example, there's a little bend in the river upstream from my home that has some great fishing spots. The overhanging rocks protect small fish from the predatory eyes of birds, and people. Of course where there are small fish, there's usually the odd big fish or two. OK, let's imagine that fishing had a profitable side too, and wasn't just a [fictitious] hobby. For exam

If it's not good testing, it's not good regression testing either.

Pick a coin from your pocket, and hold it at arm's length. Take a good look. Now take another one of the same denomination and hold it out at arm's length as before. Based on your observations alone, can you say they are identical? Let's go a step further. If someone had given you one coin to look at, then exchanged it for another, could you have determined whether they are the same or different coins? Maybe, yes? If the differences had been large enough, e.g. one coin was heavily tarnished or scratched, then the different coins would be identifiable. Or if you'd been given the opportunity to examine the coin using magnifying equipment, you probably could have found differences. But let's assume our only test was a standard set of checks, i.e. viewing at arm's length and comparing what we see with our notes/records. It's better than nothing; I would see some differences, and some might be important ones. For example if my next coin was blank: I might have suspected an issue with

The Mythical Standard Build

Do you hear phrases like "All our users use [insert some technology]" spoken in your office? Or possibly "We have a 'corporate standard' desktop"? I have, a lot. I have since my first job, back in the '90s. It's a commonly held belief in most of the client companies I've worked with. Programmers, testers, project managers and product owners frequently hold faith in the standard build. It -is- a matter of faith, often based on little more than wishful thinking or at best very loose 'standards'. The problem isn't purely one of client machines or end-users. I've often seen servers defined as 'clones' that in fact have quite different properties, e.g. different versions of Java or application servers, or even a different time. The blind faith in these standard systems has caught me and my colleagues out so many times that I now find myself instantly questioning the assumption, and encouraging others to do the same. Even in thi