
Controlling software development

Do you ever feel like we do all this work and maybe we needn't have bothered? Things might have worked out without our intervention. Or that we are actually worse off now, after the work? You're not alone. This is a common problem in any role where you need to investigate the effects of changes. What you're feeling is the lack of a control.

A control is a view of the world without your work: an alternate view where everything is the same except for your fix, hack or intervention. Controls behave like a 3D TV, letting your mind's eye 'see' the effects by making them stand out from the background.

Controls are commonly used in scientific research, especially pharmaceutical studies. They let researchers gauge how effective a treatment was, compared with similar patients who received a placebo (or an older, established medicine) rather than the new treatment. The researchers can tell whether, for example, a new flu remedy actually helped the patients, or whether, like the control group, the patients got better or worse at the usual rate, suggesting the medicine had little or no effect.

In testing you probably already use controls, sometimes without knowing it. For example, if a programmer claims to have fixed a bug, you probably check the old buggy code before you check the new (allegedly) fixed code (you do, don't you?). This lets you see the difference and discern whether the fix really was the treatment the system needed, and whether it actually fixed the problem.

Controls can even highlight that what was thought of as a bug was in fact useful, albeit buggy, behaviour. For example, suppose that ugly error in the data-entry application is 'fixed', so that users are no longer bothered by it. Great! The new code shows no error message; that's good, right? Well, maybe not. If I go back to the old code, I might see that the ugly error was stopping users from typing bad data into the system, bad data that might cause other important systems to fail. The real fix might have been to report the problem to the user more clearly, so they can see the data is corrupt and fix it at source.
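To make that concrete, here is a minimal sketch of running the *same* check against both the control (old) and the 'fixed' code. The function names, behaviours and inputs are invented for illustration; the point is that only the side-by-side comparison reveals the fix traded a visible error for silent data corruption.

```python
def validate_entry_old(value: str) -> int:
    # Hypothetical old behaviour: raises the 'ugly error' on bad data,
    # which at least stopped corrupt values getting into the system.
    if not value.isdigit():
        raise ValueError("bad data")
    return int(value)


def validate_entry_fixed(value: str) -> int:
    # Hypothetical 'fixed' behaviour: no more error message, but bad
    # data is now silently coerced and flows downstream unnoticed.
    return int(value) if value.isdigit() else 0


def check(fn, value):
    """Run one input through a build and record what happened."""
    try:
        return ("accepted", fn(value))
    except ValueError:
        return ("rejected", None)


for value in ["42", "4x2"]:
    print(value,
          "control:", check(validate_entry_old, value),
          "candidate:", check(validate_entry_fixed, value))
```

Tested alone, the 'fixed' build looks fine: every input is accepted without complaint. Only the control run shows that "4x2" used to be rejected, which is the real question the test report should answer.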

Often you might find that there is resistance to your wish to examine and test the older pre-fix code. This can be for several reasons, including limited test-system resources or a poorly set up build and deployment system that doesn't let testers deploy old 'builds' of the system.

Performance and load testing is another area where controls are often needed and used, though here many projects proceed blind, using neither a control nor even a baseline, and so never knowing how the latest changes affect the system compared with unchanged code running on the same hardware at a similar time. Often teams use a baseline established long ago. This is better than no comparison at all, but it makes accurate measurement difficult: who knows what other changes have occurred since the baseline was taken? Network upgrades, code-library changes, database growth and other people's changes are just a few of the things that can influence your results. Unless you have results from before and after your changes (and only your changes), you are guessing at the effect of your fix.
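One cheap way to apply the idea is to time the control and the candidate in the same run, on the same machine, so that environmental drift affects both measurements equally. This is only a sketch: `baseline()` and `candidate()` are stand-ins for your unchanged and changed code paths, and the workload is invented.

```python
import timeit

def baseline() -> int:
    # Stand-in for the unchanged (control) code path.
    return sum(i * i for i in range(10_000))

def candidate() -> int:
    # Stand-in for the code path containing your change.
    return sum(i * i for i in range(10_000))

# Measure both in the same process at the same time, so network load,
# database growth, noisy neighbours etc. hit both numbers alike.
control_t = timeit.timeit(baseline, number=200)
candidate_t = timeit.timeit(candidate, number=200)

print(f"control {control_t:.3f}s, candidate {candidate_t:.3f}s, "
      f"ratio {candidate_t / control_t:.2f}")
```

The ratio between the two timings is the interesting number; the absolute figures will vary from machine to machine, which is exactly why a months-old baseline is such a weak comparison.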

Being aware of the usefulness of controls is an essential part of software testing. When asked if the new build is better, we can answer "Better than what?" When we suspect the new feature is more smoke and mirrors than fix, we can ensure we have access to the unchanged system for comparison. And if you find resistance to your requests for access to controls, well, that in itself is interesting information to put in the test report.


  1. In IT, the word "control" is most often used as a synonym for management. It's great to see the word used in this important meaning (which by the way is the more common use of the word in Danish).

    Controls are needed where you can't accept the risk of not controlling. But quite often the problem is that people don't think about the consequences of not controlling. It's easier to make something if it doesn't need to work.

    This is where testing should be controlled too: on the static level before testing takes place, e.g. doing a SWOT to assess what's in focus, and on the dynamic level by controlling the test results when the product is released and one can start learning how it performs in real life.


