Posts

Counting Strings

If you've done the Rapid Software Testing course, then you'll probably be familiar with the Perlclip tool, from James Bach. If not, it's a useful tool for generating strings of test text. In particular I find the Perlclip counterstring function to be pretty useful. Counterstring builds a string that indicates its own length. E.g.: "*3*5*7*10*" - the last asterisk is the 10th character. Now available as a Firefox Add-on! I've taken the counterstring functionality and implemented it in HTML and Javascript. While this form does not do everything the original [and the best] does, it might be useful for it to be accessible anywhere via a web page. All credit for the usefulness of this goes to James Bach; all the bugs are probably my doing. That's right - it's got bugs: for example, you don't have to enter a character for the 'mark', and it lets you use numbers for the 'mark'. Older versions did odd things in IE6 etc. There are no doubt many more bugs...
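For the curious, here is a minimal sketch of how a counterstring can be generated, working backwards from the target length so that each run of digits ends exactly where its mark sits. This is an illustrative reimplementation, not the actual Perlclip or add-on source:

```javascript
// Build a counterstring: each run of digits gives the position of the
// mark character immediately following it. Illustrative sketch only.
function counterstring(length, mark = "*") {
  let out = "";
  let pos = length; // position the next mark (working backwards) must occupy
  while (pos > 0) {
    let token = String(pos) + mark; // e.g. "10*" places a mark at position 10
    if (token.length > pos) {
      token = mark.repeat(pos); // no room left for digits; pad with the mark
    }
    out = token + out;
    pos -= token.length;
  }
  return out;
}

console.log(counterstring(10)); // "*3*5*7*10*" - the last asterisk is the 10th character
```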

The Hunky-dory Hypothesis

"If we just run it with this data, and it looks Ok, we'll know it works" the architect says expectantly, "Right?" "You're right, We might see 'it work'", "How would that help?" I answer. "Well errh, it works, so we can put it live tomorrow." We've seen this situation before; it can arrive in conversation like the above, or from a review of Acceptance Test results or in a host of other forms. The premise is: everything is fine, we've done the work - we have evidence everything is done and working. But, how can demonstrating that the application can 'work' help? How will seeing the acceptance test results as 'green' help? That might sound nonsensical, but seriously: how does it help our customer make the decision to ship [or not]? Or help them distribute people and resources better? It may seem that telling them 'its all hunky-dory' is news and good news at that, but it isn't. It

Conspicuous in their absence

If you're a tester then you'll no doubt have heard phrases to the effect of "That's pretty unlikely", "Our users don't do that" or "That's a fairly minor browser". It's been blogged about before, and elsewhere. The argument is that many users are niche, novice, confused or from different backgrounds / viewpoints / languages. These are realistic and probably correct hypotheses, in many situations. In my experience, that's often where the discussion ends: someone makes a judgement call, and the issue is fixed, mitigated or ignored. More often than not, it's ignored. That decision should probably be a business decision; it's their money. But can they make such a decision safely? We are asking for consent to 'operate' or 'not operate' on their software. To come to the right decision, they need to be fully informed. I.e.: are we sure that the issue is indeed rare? Are they making a properly informed decision? For example, what if…

Wasting your time with Test Automation

Software Testing is essentially an infinitely time-consuming task being attempted in a finite time. The 'test space' is almost always vast, near infinite. Your time to test is usually counted in hours. There's an obvious mismatch there. We're testers; we are hired to help marry the two. We need to find as many issues as we can, and the important issues, in that vast test space in less than a few hours. That's fine, that's testing, that's what testers work (live?) for. Roll on Test Automation, our saviour: it can check vast areas of the test space rapidly and efficiently. It can use its 'Data'-like ( http://en.wikipedia.org/wiki/Data_%28Star_Trek%29 ) abilities to test the application tirelessly. No? Well, OK, it could 'check' the application tirelessly, for a set of expected results. This itself is potentially valuable, and could examine a range of combinations or test data that we could not reach alone. Why do many of the test automation efforts I've witnessed…
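To make the 'checking' idea concrete, here is a minimal sketch of such a data-driven check in Javascript: it tirelessly compares actual output against a fixed table of expected results, but only answers the questions we thought to ask. The add() function and its data table are hypothetical, purely for illustration:

```javascript
// A tireless, 'Data'-like check: compare actual output against a fixed
// table of expected results, for as many combinations as we care to list.
function add(a, b) {
  return a + b; // hypothetical function under test
}

const checks = [
  { a: 1, b: 2, expected: 3 },
  { a: -1, b: 1, expected: 0 },
  { a: 0.1, b: 0.2, expected: 0.3 }, // fails: 0.1 + 0.2 !== 0.3 in floating point
];

for (const { a, b, expected } of checks) {
  const actual = add(a, b);
  const verdict = actual === expected ? "PASS" : "FAIL";
  console.log(`add(${a}, ${b}) = ${actual} (expected ${expected}): ${verdict}`);
}
```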

Quality in a Jar

It's an oft-chanted mantra that software quality can be 'baked in'. Like flour or eggs, you just mix in the right ingredients and out comes perfection. If my wife and I were ever to take part in our own bake-off, then you'd soon see that hypothesis undermined. We'd both begin with the same ingredients, oven, spoons etc. My better half would no doubt deliver another scrumptious banana cake, and I - a new and interesting exterior wall sealant [not my intention]. Batteries and LEDs, but no quality. The problems here are manifold. There's a reification error, for quality is not a tangible entity (a favourite issue for Michael Bolton). There's the idea that once built, software and its use are immutable. If you've ever had to use one of those unmaintained applications (typically a time sheet/billing system) that's gone to seed, you'll agree that it hasn't exactly aged like fine claret. But what's most naive is the assumption that the simple use of a…

The arrogance of regression testing

Let's assume we know that our software is not perfect. How can it be? It's complex, mortals created it and we don't have enough time to test every execution path & environment – so we could never be sure anyway. This is OK - this is normal; testers deal with this situation every day. This tends to be a typical scenario... Our team has been working on some new features. They're looking good, initial teething issues have been fixed, and the new features are considered worthwhile enough and bug-sparse enough to be released into the wild. This is where things can get a little awkward. The team members' opinions are often split across a wide spectrum. The relatively minor perceived impact of the work leads some to conclude that the work is ready for release as is. Other team members, who are possibly twice shy from previous 'minor change' induced problems, argue for a comprehensive 'regression test' of the software. There is usually a range of views in between, suggesting for example only…

Heurism

I'm watching my son, a toddler, at play. He picks up his toy train, a hefty piece of wind-up Fisher-Price-esque technology, and hurls it at the water bottle. I'll not pass judgement - but suffice to say - the bottle is still standing - several other objects in the room are not. He reaches down with both arms and picks up the train again. He steps a bit further away, turns his back on the bottle, and slings it back over his shoulder. A few more similar attempts end in much the same result, until finally the killer move is identified: you stand point-blank over the bottle and drop/throw the train down onto it. A chip off the old block. I'm glad my son is having fun. But I'm interested - what's he thinking? No, that's not it... How is he thinking? What he's doing has strong parallels with what his father does for a living. I spend much of my time learning how [for example] a tool works or, maybe more often, how it doesn't work. If that takes the…