
Posts

How to avoid testing in circles.

I once had an interesting conversation with a colleague who worked at a company selling hotel room bookings. The problem was interesting. Their profits depended on many factors: firstly, fluctuating demand (e.g. holidays, weekends, local events); secondly, varying types of demand (e.g. business customers, tourists, single-night bookings or 11-day holidays). They also had multiple types of contracts on the rooms. For some, they might have had the exclusive right to sell [as they had pre-paid]; for others they had an option to sell [at a lower profit] etc. My naive view had been that they priced the room bookings at a suitable markup, upping that markup for known busy times. For example, a tourist hotel near the Olympics would carry a high markup, while the same tourist hotel room in winter would carry less of a markup (better to get some money than none at all). He smiled and said some places do that, but he didn't. He had realised his team had a bias towards making a hea

Manual means using your hands (and your head)

I recently purchased a Samsung Galaxy Tab and an iPad 2. Unlike many of my previous gadget purchases, these new gadgets have become very much part of the way I now work and play. One thing I like about them is their tactile nature. You have a real sense that there is less of a barrier between you and what you want to do. If you want to do something - you touch it - and it 'just' does it. I don't have to look at a different device, click a couple of keys or move a box on a string to get access to what I can see right in front of me. Features such as the haptic feedback give a greater feeling that you are actually working with a tool, rather than herding unresponsive 'icons' or typing magic incantations into a typing device originally conceived 300 years ago. The underlying software system used in these devices is a UNIX variant, just like the computer systems that underpin the majority of real world systems from the internet to a developer's shiny Apple Ma

Nobody expects the...

In a previous post I discussed one method I use to improve my testing skills: spending spare minutes testing a machine or website that is readily at hand. The example I used was Google's search, in particular its currency conversion feature. This is useful for getting practice and for trying to speed up my testing, that is - finding information more quickly. Another activity I perform is watching someone else test something. As testers, we are often asked to be a second pair of eyes, as it's assumed that a programmer might not notice some issues in their own code. The idea is that you will not be blinded by the same assumptions, and will hopefully find new issues with the software. Using the same logic, by watching someone else test, I can examine their successes and failures more easily. I've asked many people to test a variety of objects, usually things to hand, like a wristwatch or something I've recently bought. One recurring pattern I have noticed is how programmer

Wrong end of the stick

There's a story about air-force scientists during World War 2 that reflects an interesting concept about the things we see and how they can alter our assumptions. The story goes that the allied bombers were suffering great losses during their air-raids over continental Europe. The allied scientists got together and analysed the damage reports from the engineers tasked with fixing the planes after each raid. (One of the scientists working on these problems was Abraham Wald.) Here is an example of the sort of summary engineering report they might have been faced with. The report details the parts of the plane and what proportion of aeroplanes had been damaged in that area (this data is completely made up by me):

15% had damage to 1 or more engines
25% had tail damage
25% had damage to the nose and cockpit area
35% had damage to the fuselage

The aircraft engineers could only add extra armour to one part of the plane; any more armour would limit the aeroplane in other ways e.g

Testing, Testing, 1, 2, 3.

When I have a spare moment, I usually try and think about how to test something. In fact that's not true; what I do is actually test something. It might be an app on my phone, an online tool, a parking-ticket machine or a search engine. Usually it is whatever is to hand at the time. This is a good way to practice my skills, and can take as long as I have free. In fact, having only moments is beneficial: you soon get better at finding more issues - more quickly. For example, a few moments ago I thought I'd test Google's currency converter. If you haven't seen it, it looks like this: You enter a value and two currencies in the format shown, and Google will give you an answer with great precision. (I haven't examined the accuracy.) Starting from this I varied the text slightly, using "euro" instead of "EUR", and also swapping "gbp" and "euro" to see how precedence affected the results. This seemed to behave as expected, but it did
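As a rough illustration of the input-variation idea described in that post, a few lines of Python can enumerate the kinds of queries worth trying by hand. This is only my sketch, not the author's actual approach: the query format ("amount from in to") and the particular amounts and currency spellings are assumptions based on the excerpt above.

# Sketch: enumerate variations of a currency-conversion search query.
# The "<amount> <from> in <to>" format and the value/code lists below
# are illustrative assumptions, not the blog author's tooling.

amounts = ["100", "100.00", "0", "-100", "100000000"]
code_pairs = [("EUR", "GBP"), ("euro", "gbp"), ("gbp", "euro"), ("Euro", "Pound")]

queries = []
for amount in amounts:
    for src, dst in code_pairs:
        queries.append(f"{amount} {src} in {dst}")

# Each printed query would be pasted into the search box by hand,
# and the result inspected for precision, precedence and error handling.
for query in queries:
    print(query)

Even a tiny generator like this makes it easier to notice which variations you haven't tried yet when you only have a few spare minutes.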

The Like-Live Paradox

I was recently struck by a glaring difference between how I and a programmer prepared for testing. Unlike the majority of the testing I am involved in, this particular testing 'phase' had to be scheduled in advance and we couldn't "just do it". This also meant we had more time to prepare and plan than we typically do. This 'waiting period' had its uses. We had time to create tools that might be useful and check the configuration of the systems we would be testing. The team, familiar with the concepts of exploratory testing, were comfortable with an approach that meant we did not spend the majority of the time pre-scripting tests [be they coded or in a spreadsheet etc]. We did, however, build a high-level checklist of areas to test and used this to drive our program of tool and configuration checking/fixing/building. The key difference I observed was the absolute nature of the programmer's comparisons between our test systems and our live-production syste

Wrong in front of you.

In 2008 I attended GTAC in Seattle, a conference devoted to the use of automation in software testing. Since the first GTAC in London in 2006, Google have been running about one a year, in various locations around the world. This post isn't really about the conference; it's about a realisation that I had the day after the conference. After the conference I went sight-seeing in Seattle. I rode the short Simpsons-like Monorail and took a lift up the Jetsons-like Space Needle. I enjoyed my time there, and found the people very friendly. The conference had been very technology focused; many (but granted, not all) speakers focused on tools and how to use tools. While useful, the tools are only part of testing - and even then they typically just support testing rather than "do" testing. (Picture: the Seattle Monorail.) While I was at the top of the Space Needle, I took out my phone and like a good tourist started taking pictures. I'd typically take a couple of pictures then