
Posts

Showing posts with the label automation

'No More ASCII' Firefox Add-on

Many of my clients have a multi-national (and multi-lingual) user base, and their software receives input from a range of devices, not just those configured to UK or US locales. The sites may also need to process and publish content that is 'non-ASCII'. So when I'm testing a website or web application, I need to investigate quickly how it handles inputs from a multitude of locales. That's why I created No More ASCII, a Firefox Add-on with a set of stock text strings from a range of languages and scripts. These have been chosen for their widespread use around the world, as well as their ability to highlight deficiencies in many websites. For example, these features of the scripts can cause problems for ASCII-only or poor-Unicode implementations: right-to-left text (Hebrew), diacritics (Swedish), and non-Roman scripts (Mandarin, Hindi etc.). The text strings may not make 'sense', as some are partial sentences or Monty Python quotes. They are aimed to have…
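The add-on's actual string list isn't reproduced in this excerpt, but the idea is easy to sketch. The samples below are illustrative stand-ins of my own choosing, and the ASCII-only encode step is a hypothetical stand-in for a poor-Unicode implementation:

```python
# A minimal sketch, assuming an ASCII-only pipeline somewhere in the stack.
# These sample strings are illustrative, not the add-on's actual list.
samples = {
    "Hebrew (right-to-left)": "שלום עולם",
    "Swedish (diacritics)": "Smörgåsbord på fjället",
    "Mandarin (non-Roman)": "你好，世界",
    "Hindi (non-Roman)": "नमस्ते दुनिया",
}

for label, text in samples.items():
    try:
        text.encode("ascii")  # the naive assumption under test
        print(f"{label}: survives an ASCII-only implementation")
    except UnicodeEncodeError:
        print(f"{label}: breaks an ASCII-only implementation")
```

Every one of these samples fails the encode step, which is exactly the kind of deficiency the stock strings are chosen to expose.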

Counting Strings Firefox Add-on

A while back I created a simple web-based tool that helped you create text strings of a specified length. The text strings are constructed to make it easy to tell their length even if they are truncated. The tool was based on a similar tool by James Bach, called perlclip. I've now updated my Counting Strings script to be a free Firefox add-on, so you can now have it with you wherever you test online. You don't even need to restart your browser. Counting Strings opens right in your browser, without affecting your website.
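The excerpt doesn't show the string format itself, but the perlclip-style 'counterstring' idea can be sketched in a few lines. The function name and padding behaviour below are my assumptions, not the add-on's actual code:

```python
def counterstring(length: int, marker: str = "*") -> str:
    """Build a Bach-style counterstring: each marker's position in the
    final string equals the number immediately before it, so even a
    truncated string reveals its own length at a glance."""
    pieces = []
    pos = length
    while pos > 0:
        piece = f"{pos}{marker}"      # e.g. "30*"
        if len(piece) > pos:          # not enough room left; pad with markers
            piece = marker * pos
        pieces.append(piece)
        pos -= len(piece)
    return "".join(reversed(pieces))

print(counterstring(30))  # '*3*5*7*9*12*15*18*21*24*27*30*'
```

Cut the output off anywhere and the last complete number tells you roughly how many characters survived, which is what makes the format useful for testing field limits.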

A security bug in SymphonyCMS (Predictable Forgotten Password Token Generation)

(This issue is now raised in OSVDB.) On the 20th October 2013, the SymphonyCMS project released version 2.3.4 of their Content Management System. The release included a security fix for an issue I’d found in their software. The bug made it much easier for people to gain unauthorised access to the SymphonyCMS administration pages. More about that in a moment. The date of the release is also relevant: it’s a couple of days shy of 60 days after I had informed the development team of the issue. When I’d informed the team of the bug, I’d mentioned that I’d blog about the issue sometime on or after the 60 days had elapsed. (That was in line with my Responsible Disclosure policy at the time.) Which product had the bug? Symphony CMS is a web content management system, built in PHP. It appears to be used by several larger companies & organisations; learn more here. What was the bug? The forgotten password functionality in v2.3.3 had a weakness. This meant an attacker…
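The vulnerable Symphony code isn't shown in this excerpt, so the sketch below illustrates only the general anti-pattern the title names - a reset token derived from predictable state - alongside a safer alternative. It is not the project's actual implementation:

```python
# A minimal sketch of the anti-pattern, not SymphonyCMS's actual code.
import hashlib
import secrets
import time

def weak_token() -> str:
    # Anti-pattern: token derived from the clock. An attacker who knows
    # roughly when a reset was requested can enumerate the candidates.
    return hashlib.md5(str(time.time()).encode()).hexdigest()[:8]

def strong_token() -> str:
    # Safer: draw from the OS CSPRNG; unguessable from any public state.
    return secrets.token_urlsafe(32)
```

The difference matters because a forgotten-password token is effectively a temporary password: if its value can be predicted, so can access to the account.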

Simple test automation, with no moving parts.

Can you see the 74? This is an Ishihara Color Test. It's used to help diagnose colour blindness; people with certain forms of colour blindness would not be able to read the text contained in the image. The full set of 38 plates would allow a doctor to accurately diagnose the colour-perception deficiencies affecting a patient. The test is ingenious in its concept, yet remarkably simple in its execution. No complicated lenses, lighting, tools or measuring devices are required. The doctor or nurse can quickly administer the test with a simple and portable pack of cards. The Ishihara test is an end-to-end test. Anything, from the lighting in the room to the brain of the patient, can influence the result. The examiner will endeavour to minimise many of the controllable factors, such as switching off the disco lights, asking the patient to remove their blue-tinted sunglasses and maybe checking they can read normal cards (e.g. your patient might be a child). End-to-end tests like this are messy…

Using test automation to help me test, a Google Elevation API example

Someone once asked me if "testing a login process was a good thing to 'automate'". We discussed the actual testing and checking they were concerned with. Their real concern was that their product's 'login' feature was a fundamental requirement: if that was 'broken' they wanted the team to know quickly and to get it fixed quicker. A failure to login was probably going to be a show-stopping defect in the product. Another hope was that they could 'liberate' the testers from testing this functionality laboriously in every build/release etc. At this point the context becomes relevant; the answers can change depending on the team, company and application involved. We have an idea of what the team are thinking - we need to think about why they have those ideas. For example, do we host or own the login/authentication service? If not, how much value is there in testing the actual login process? Would a mock of that service suffice for our automated checks…
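As a sketch of that last idea: an automated check that exercises our own integration logic against a mocked authentication service. All the names here (auth_client, login, the token value) are hypothetical, not from the original post:

```python
# A minimal sketch, assuming a third-party auth service we don't own.
# Mocking it keeps the check fast and focuses it on our own code.
from unittest import mock
import unittest

def login(auth_client, user: str, password: str) -> bool:
    # In production this would be a remote call to the auth provider.
    token = auth_client.authenticate(user, password)
    return token is not None

class LoginCheck(unittest.TestCase):
    def test_login_succeeds_with_valid_credentials(self):
        fake_auth = mock.Mock()
        fake_auth.authenticate.return_value = "token-123"
        self.assertTrue(login(fake_auth, "alice", "s3cret"))

    def test_login_fails_when_service_rejects(self):
        fake_auth = mock.Mock()
        fake_auth.authenticate.return_value = None
        self.assertFalse(login(fake_auth, "alice", "wrong"))

if __name__ == "__main__":
    unittest.main()
```

The trade-off is the one the post raises: the mock covers our integration code cheaply, but it can never tell us whether the real login service is up.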

Manual means using your hands (and your head)

I recently purchased a Samsung Galaxy Tab and an iPad 2. Unlike many of my previous gadget purchases, these new gadgets have become very much part of the way I now work and play. One thing I like about them is their tactile nature. You have a real sense that there is less of a barrier between you and what you want to do. If you want to do something - you touch it - and it 'just' does it. I don't have to look at a different device, click a couple of keys or move a box on a string to get access to what I can see right in front of me. Features such as haptic feedback provide a greater feeling that you are actually working with a tool, rather than herding unresponsive 'icons' or typing magic incantations into a typing device originally conceived 300 years ago. The underlying software systems used in these devices are UNIX variants, just like the computer systems that underpin the majority of real-world systems, from the internet to a developer's shiny Apple Mac…

Wrong in front of you.

In 2008 I attended GTAC in Seattle, a conference devoted to the use of automation in software testing. Since the first in London in 2006, Google have been running about one a year, in various locations around the world. This post isn't really about the conference; it's about a realisation that I had the day after the conference. After the conference I went sightseeing in Seattle. I rode the short Simpsons-like Monorail and took a lift up the Jetsons-like Space Needle. I enjoyed my time there, and found the people very friendly. The conference had been very technology focused; many (but granted, not all) speakers focused on tools and how to use tools. While useful, the tools are only part of testing - and even then they typically just support testing rather than "do" testing. The Seattle Monorail. While I was at the top of the Space Needle, I took out my phone and, like a good tourist, started taking pictures. I'd typically take a couple of pictures then…

Testing as War?

We are fighting an invincible opponent. The legions of bugs in our software far outnumber our attempts to find them all. Even the simplest of software releases inevitably contains a fifth column of hidden pre-existing bugs or quirks that, combined with our changes, could strike at any time. The question we need to understand as testers is: how can we win? Or at least: not lose this battle? Military examples and analogies can be useful in software testing, and not just those in reconnaissance. For example: the Millennium Challenge. This pre-Gulf War 2 military exercise pitted two forces against one another in the Middle East. In summary, the modern US military was fighting a rogue element in a smaller country. The vast resources of the western power should have faced few problems. But in fact the former US general playing the role of the 'rogue nation' trounced the western forces in a devastating blow that saw several warships sunk. How did the 'rogue' general do…

Are you sure you've "completed" testing? A Guardian Content API example.

Testing doesn't complete; it might end, it might finish, but it doesn't complete. There's too much to test. If you ever need confirmation of this, test something, something that's been tested already. Better still, test a piece of software you know has been tested by someone you think is a brilliant tester. A good tester like you will still find new issues, ambiguities and bugs. That's because the complexity of modern software is huge: as well as all the potential code paths of your code, there's all the other underlying code's paths and the near-infinite domain of data it might process. That's part of the beauty of testing: you have to be able to get a handle on this vast test space. That is, review a near-infinite test space in a [very] finite time-frame. We are unable to give a complete picture of the product to our clients. But we are also free to find out new issues that have so far eluded others. In fact the consequences are potentially more drama…
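A quick back-of-the-envelope calculation makes that 'near-infinite domain of data' concrete. The code-point count below is a rough order-of-magnitude assumption, not an exact figure:

```python
# Even one small text field defeats exhaustive testing.
assigned_code_points = 150_000   # rough order of magnitude, an assumption
field_length = 10                # a single 10-character input field

combinations = assigned_code_points ** field_length
print(f"{combinations:.2e} possible inputs")  # ~5.77e+51
```

At one check per nanosecond that single field would still take vastly longer than the age of the universe, which is why testers sample the space rather than complete it.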