Testing, Testing, 1, 2, 3.

When I have a spare moment, I usually try and think about how to test something. In fact that's not true; what I actually do is test something. It might be an app on my phone, an online tool, a parking-ticket machine or a search engine. Usually it's whatever is to hand at the time. This is a good way to practise my skills, and can take as long as I have free. In fact, having only moments is beneficial: you soon get better at finding more issues, more quickly.

For example, a few moments ago I thought I'd test Google's currency converter. If you haven't seen it, it looks like this:

[Screenshot: Google's currency converter, showing a query and its conversion result]
You enter a value and two currencies in the format shown, and Google will give you an answer with great precision. (I haven't examined the accuracy.)

Starting from this, I varied the text slightly, using "euro" instead of "EUR", and swapping "gbp" and "euro" to see how precedence affected the results. This seemed to behave as expected, but it did make me think about how Google was parsing the query. How might I confuse Google? Could I get it to misinterpret the order of the currencies?
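
To give a flavour, a throwaway script along these lines could enumerate the same sort of query variations I was typing by hand (the currency list and query shapes are my own guesses, not anything Google documents):

    import itertools

    # Hypothetical sketch: generate small variations of a currency
    # query, much as I was varying them by hand in the search box.
    amounts = ["1", "1.00", "100"]
    currencies = ["EUR", "euro", "GBP", "gbp"]

    for amount in amounts:
        for a, b in itertools.permutations(currencies, 2):
            # Each query would be typed into the search box and the
            # parsed interpretation observed by eye.
            print(f"{amount} {a} in {b}")

A list like this only covers the variations I could think of up front; the more interesting queries below came from watching the results as I typed.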

Inspired by this question I tried typing:

[Screenshot: the partially typed query, with Google's automatically updated result converting to a metric length]
This was actually the result of me pausing while typing, and observing the automatically updated search results presented by Google. I had [probably] confused the parser into trying to convert the result of my currency conversion into metric length measurements. This seemed like odd behaviour, but possibly acceptable to Google.

Next I checked how the search engine handles the reverse...

[Screenshot: the reverse query, with the result displayed in millimetres]
Rather than converting to imperial measurements, Google stayed metric, but displayed the result in millimetres. This started me thinking that this area had not been heavily tested - or at least had not been a focus for bug fixing. I'd found two unexpected behaviours, albeit slight, in seconds.

So how could I use this to highlight something that might confuse users, or undermine the confidence a user has in the product? If this were my product, I'd want to know about such issues, as they might be bad for business.

The next thing I tried was deliberately aimed at being a typical, if not semantically perfect, query that Google might misinterpret. I used the '/' character, commonly used informally to mean 'or' in English.

[Screenshot: a query using the '/' character, with Google's misinterpreted result]
This result was interesting. I can imagine a user typing this query, or copying similar text into Google. This seems like it would be a problem for some users, if only because it's confusing. To many users, especially those unfamiliar with imperial measurements, it would seem Google is 'broken'. I'll stop documenting the process there, but the leads generated in these quick tests suggest more avenues of investigation. It's clearly easy to confuse the search engine's parser.
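
I don't know how Google actually parses these queries, but a toy sketch shows how a tokenizer that treated '/' as a unit separator could quietly turn an informal either/or into a ratio of units (the parsing rule and the example query are entirely my own invention):

    # Toy illustration, not Google's real parser: if '/' is taken as
    # a unit separator, "euro/gbp" stops meaning "euro or gbp" and
    # becomes a ratio of two units.
    def naive_parse(query: str) -> str:
        for token in query.split():
            if "/" in token:
                numerator, denominator = token.split("/", 1)
                return f"ratio of {numerator} per {denominator}"
        return "plain conversion"

    print(naive_parse("1 euro/gbp in usd"))  # -> ratio of euro per gbp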

If I'd created these tests in advance, be that in a spreadsheet or in test automation, how would I have jumped from one result to the next? Without that feedback loop, it's unlikely I would have known to try those tests. I'd also probably still be writing the tests long after the point at which I'd found the above information and was on my way to finding more.
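
By way of contrast, a pre-scripted check might have looked something like this (the queries, expected values and fetch function are all hypothetical stand-ins):

    # Hypothetical canned checks: each row fixes a query and an
    # expectation up front. Nothing here would have led me to the
    # pausing-while-typing result or the '/' query, because the next
    # test never depends on what the last one revealed.
    EXPECTED = {
        "1 gbp in euro": "EUR",  # expect an amount in euros back
        "1 euro in gbp": "GBP",  # expect an amount in pounds back
    }

    def stub_fetch(query: str) -> str:
        # Stand-in for however we would really ask the search engine.
        return "0.85 EUR" if "in euro" in query else "1.17 GBP"

    def check(query: str) -> bool:
        return EXPECTED[query] in stub_fetch(query)

    # With canned data the suite passes, and tells us nothing new.
    for query in EXPECTED:
        print(query, "->", "PASS" if check(query) else "FAIL")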

Comments

  1. Pete, thanks for sharing; not only is it interesting and made me laugh... but finally I know how much a centimeter costs.

  2. Awesome! This is a great blog post.
