
Posts

Being a square keeps you from going around in circles.

After a weary few hours sorting through, re-running and manually double-checking the "automated test" results, the team decide they need to "run the tests again!". That's a problem for the team. Why? Because the tests are too slow. The 'test' runs take too long and they won't have the results until tomorrow. How does our team intend to fix the problem? ... make the tests run faster. Maybe use a new framework, get better hardware or some other cool trick. The team get busy, update the test tools and soon find themselves in a similar position. Now of course they need to rewrite the tests in language X, or using a new [A-Z]+DD methodology. I can't believe you are still using technology Z, Luddites! Updating your tooling and using a methodology appropriate to your context makes sense, and should be factored into your workflow and estimates. But the above approach to solving the problem starts with the wrong problem. As such, it's not likely to find ...

A Good Run!

“We got a good run from the tests,” the tester stated. “So what’s the story?” the scrum master asked. “85% pass,” comes the reply, meekly. “OK, just need to fix that 5% then,” the scrum master announces, before striding off to declare that the team is only a couple of % away from success. Our tester takes a moment to try and process the exchange… Firstly, their own words: “We got a good run.” Why had they said that? Well - in a sense - it was true. They had executed the tests before, and the tests had returned a much higher failure rate. But the code being checked was the same... OK, so there were at least 3 obvious ways to interpret the data: (1) The app code meets the criteria checked by the tests (based on test run 2). (2) The app code does not meet the criteria checked by the tests (based on test run 1). (3) The tests are as reliable as the toss of a coin (based on both test runs). It's surprising how unlikely people are to choose (3). Secondly, the scrum...
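To make interpretation (3) concrete, here is a minimal sketch (mine, not the post's - the suite size and pass probability are invented for illustration) of a flaky suite, where each test passes or fails independently of the code under test:

```typescript
// Simulate a suite of flaky tests: each test passes with a fixed
// probability, regardless of the code being checked.
function runSuite(testCount: number, passProbability: number): number {
  let passed = 0;
  for (let i = 0; i < testCount; i++) {
    if (Math.random() < passProbability) passed++;
  }
  return (passed / testCount) * 100;
}

// Same code, same tests, two very different "results":
console.log(`Run 1: ${runSuite(20, 0.5).toFixed(0)}% pass`);
console.log(`Run 2: ${runSuite(20, 0.5).toFixed(0)}% pass`);
```

With 20 tests and a coin-toss pass probability, run-to-run swings of 20 or more percentage points are routine - which is why a single "good run" tells you almost nothing.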

Programmers & Testers, two roles divided by a common skill-set.

Switching people from programming to testing, and vice versa, may reduce the quality of our software. I’ll get some quick objections out of the way: "But a person can be a great tester and programmer." - Yes, I agree. "But programmers do a lot of good testing." - Yes, I agree. Neither of these is in conflict with my conjecture. Programming, or writing software, automates things that would be expensive or hard to do otherwise. Our software might also be faster or less error-prone at doing whatever it replaces. It may achieve or enable something that couldn't be done before the software was employed. Writing software involves overcoming and working around constraints to achieve improvement. That is, looking for ways to bypass limitations imposed by external factors or limitations in the tools we use: for example, coping with a high-latency internet connection, legacy code or poor-quality inputs. A programmer might say they were taking advantage of the technologies’...

Synecdoche

A common but often unnoticed figure of speech is the synecdoche. When I say “Beijing opened its borders”, we know I mean “The People's Republic of China has opened its borders.” That’s a synecdoche: in this case I named part of something (Beijing) to mean the whole (the P.R.C.). Conversely, I might say “Westminster is in turmoil” when anyone with knowledge of British politics will know I mean “The politicians in the Houses of Parliament are in turmoil”. The reader will know I am not referring to the City of Westminster, a region of London (or the place in Canada, etc.). Synecdoche can be a useful and illustrative tool of conversation, helping to convey the size or importance of the subject, or to illustrate in more detail a subtlety of the situation. For example, “Beijing opened its borders” also indicates the power of that country's central government: some residents of one city in China can open [or close] the borders of a vast country spanning thousands of miles and compr...

Your software sucks (any data you give it)

At 1524h on the afternoon of January 15th, 2009, US Airways Flight 1549 was cleared for takeoff from Runway 4 at New York's La Guardia airport. The airplane carried 150 passengers and 5 flight crew on a flight to Charlotte Douglas, North Carolina. The Airbus A320's twin CFM56 engines had been serviced just over a month prior to the flight. The plane climbed to a height of 859m (2818 feet) before disaster struck. Passengers reported hearing several loud bangs, and then flames were visible from the engines' exhaust. Shortly thereafter both engines shut down, robbing the Airbus of thrust and of its primary source of electrical power. At this point the Captain took over from the First Officer, and between them they spent the next 3 minutes looking for somewhere to land while desperately trying to restart their aircraft's engines. What happened? A flock of birds had crossed the path of the Airbus and several had struck the plane. Both engines had ingested bi...

Even the errors are broken!

An amused but slightly exasperated developer once turned to me and said, "I not only have to get all the features correct, I have to get the errors correct too!" He was referring to the need to implement graceful and useful failure behaviour for his application. Rather than present the customer or user with an error message or stack trace, give them a route to succeed in their goal - e.g. find the product they seek, or even buy it. Graceful failure can take several forms; take a look at this Bing [search] Suggestions bug in Internet Explorer 11, which demonstrates ungraceful failure. As you can see, the user is presented with a useful feature, most of the time. But should they paste a long URL into the location bar - they get hit with an error message. There are multiple issues here. What is allowing this to happen to the user? The user is presented with an error message - why? What could the user possibly do with it? Bing Suggestions does not ...
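As a sketch of the graceful alternative the developer was describing - the names showSuggestions and fetchSuggestions are hypothetical, not Bing's actual code - the feature could simply degrade to showing nothing:

```typescript
// If the suggestions backend can't handle the input, fail silently:
// the user just sees no dropdown, and the location bar keeps working.
async function showSuggestions(query: string): Promise<string[]> {
  try {
    return await fetchSuggestions(query);
  } catch (err) {
    // Log for the developers; never surface this to the user,
    // who can do nothing useful with it.
    console.error("suggestions unavailable:", err);
    return [];
  }
}

// Placeholder backend call; assume it rejects oversized input.
async function fetchSuggestions(query: string): Promise<string[]> {
  if (query.length > 2048) throw new Error("query too long");
  return []; // a real implementation would query a service
}
```

The user loses one convenience feature for one keystroke; they do not lose their train of thought to an error dialog they cannot act on.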

Counting Images, a Firefox Add-on

Many of my clients ask me to test their content management and processing systems. Often this involves investigating how the software handles images of various sizes, as well as text of various lengths or types. To help create test images, I created this little Firefox add-on. The Counting Images add-on starts with one click and can be used to create an image of a custom size. For example, if you need a 300x250 MPU advert image, just enter 300 and 250 into the panel and click Create Image. To download the image, just click on it - as you would a link - and choose Save. The image files are named widthxheight.png (e.g. 300x250.png) and include markings to help identify whether they have been truncated. The marked numbers refer to the size in pixels of the rectangle they are in. E.g. the blue rectangle (always the outermost one) is 150x100 pixels in size. As you can see, the rectangles start at the defined size and count down in steps of 20 pixels. What could...
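For the curious, here is a rough sketch of the drawing idea - not the add-on's actual source, just a from-scratch illustration using an HTML canvas:

```typescript
// Draw nested rectangles starting at the requested size and shrinking
// by 20 pixels per step, each labelled with its own dimensions, so a
// truncated image still shows how big the visible part is.
function drawCountingImage(width: number, height: number): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.font = "10px sans-serif";
  ctx.textBaseline = "top";

  const colors = ["blue", "red", "green", "orange", "purple"];
  let step = 0;
  while (width - step * 20 > 0 && height - step * 20 > 0) {
    const w = width - step * 20;
    const h = height - step * 20;
    ctx.strokeStyle = colors[step % colors.length]; // blue is outermost
    ctx.strokeRect(step * 10, step * 10, w, h);     // inset 10px per side
    ctx.fillStyle = colors[step % colors.length];
    ctx.fillText(`${w}x${h}`, step * 10 + 2, step * 10 + 2);
    step++;
  }
  return canvas;
}

// Usage: a 300x250 MPU-sized test image.
// document.body.appendChild(drawCountingImage(300, 250));
```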