
The Mythical Standard Build

Do you hear phrases like "All our users use [insert some technology]" spoken in your office? Or possibly "We have a 'corporate standard' desktop"? I have, a lot. I have since my first job, back in the '90s. It's a commonly held belief in most of the client companies I've worked with. Programmers, testers, project managers and product owners frequently put their faith in the standard build.

It -is- a matter of faith, often based on little more than wishful thinking or, at best, very loose 'standards'. The problem isn't purely one of client machines or end-users. I've often seen servers defined as 'clones' that in fact have quite different properties, e.g. different versions of Java or application servers, or even different system times. Blind faith in these standard systems has caught my colleagues and me out so many times that I now find myself instantly questioning the assumption, and encouraging others to do the same. Even in this era of virtualisation, from experience, I can still safely say the Standard Build is a myth.
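To make that concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of 'fingerprint' check I mean: run something like this on each supposedly identical server and diff the output. The fields chosen, and the reliance on a `java -version` call, are assumptions to show the idea rather than a complete checklist.

```python
# Minimal sketch: collect a quick 'fingerprint' of a supposedly standard server.
# Run it on each 'clone' and diff the output. The fields here are illustrative
# assumptions, not a complete checklist.
import json
import platform
import subprocess
from datetime import datetime, timezone

def fingerprint():
    try:
        # Most JDKs print 'java -version' output to stderr
        java = subprocess.run(["java", "-version"],
                              capture_output=True, text=True).stderr.splitlines()[0]
    except (FileNotFoundError, IndexError):
        java = "no java found"
    return {
        "host": platform.node(),
        "os": platform.platform(),
        "java": java,
        "utc_now": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }

if __name__ == "__main__":
    print(json.dumps(fingerprint(), indent=2))
```

Nothing fancy; the point is that the differences are easy to surface once you stop assuming they aren't there.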

Let's take an example: the corporate PC. Customer sites often have a 'standard' desktop or notebook they hand out to new employees. Great, cloned or 'ghosted' from a central standard build. They have standard Windows XP or Windows 7, etc. But how long does the PC's hardware stay the same? Sure, it's all Dell or HP, with the same model number. But how often do Dell or HP (for example) update their chips, firmware or drivers? How often do their external suppliers tweak a subsystem, if only to fix a bug? In my experience PC hardware changes frequently, and that's at a level -I- can see; there are probably many more subtle changes under the hood that I'm missing.

But let's assume we've somehow managed to co-ordinate the Dell/Lenovo/HP/etc. global supply chain to provide perfectly identical systems... What about where they are deployed? System 001 is being used in the USA, so it's got a US keyboard, a US power supply and is configured with US locale and date settings. System 002 is in France, has a French keyboard and power supply, and is configured with French locale and date settings. And of course there are the Spanish and UK offices, and did I mention the new Hong Kong office?
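Those locale settings aren't cosmetic. As a small illustration, the same short date string reads differently depending on whether the system follows US or French conventions; the two formats below are assumptions about typical regional defaults.

```python
# The same 'standard' date string, read with US and French short-date conventions.
# The formats are assumptions about typical regional defaults.
from datetime import datetime

raw = "02/03/2024"
as_us = datetime.strptime(raw, "%m/%d/%Y")   # month first: 3rd of February
as_fr = datetime.strptime(raw, "%d/%m/%Y")   # day first:   2nd of March

print(as_us.date())  # 2024-02-03
print(as_fr.date())  # 2024-03-02
```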

Think about how these systems are deployed. It's unlikely they'll all be rolled out overnight. They've probably been deployed over at least several weeks, and more likely months or years. So we hit our first issue: each site is probably using a different version of various pieces of hardware and operating system software. Furthermore, the offices themselves probably contain different hardware: over time, as new people arrived, new hardware was purchased or repurposed to get them up and running.

We've yet to consider the everyday usage of the systems. They each have, for example, Windows 7 installed. But Microsoft Windows is routinely patched, sorry, 'updated', with new software. The patches are unlikely to hit all the systems at the same time, for several reasons, including different regional deployment schedules and individual system usage. If a user rarely restarts their system, it could be a while before updates get installed. I'll labour the point and mention the anti-virus system, Microsoft Office and browser plugins as more examples of software on rolling updates.

Real users 'tweak' their computers. They configure their systems to suit them; computers are designed that way, so each user can set up their desktop how they want. In fact they may well be encouraged to, switching to an ergonomic keyboard for example. They may need to increase the font size, or change the screen resolution. And what's more, they kept getting that annoying warning message in their 'Internet browser' every time they visited their favourite web-site, so they 'fixed' that in the preferences panel.
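If you drive browser tests with something like Selenium and pytest, one cheap way to acknowledge this is to parameterise the same check over a few window sizes. The sketch below is illustrative only; the resolutions, URL and title check are assumptions, not a recipe.

```python
# Hedged sketch: run the same UI check at several window sizes (Selenium + pytest).
# The resolutions, URL and assertion are illustrative assumptions.
import pytest
from selenium import webdriver

RESOLUTIONS = [(1024, 768), (1366, 768), (1920, 1080)]

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()

@pytest.mark.parametrize("width,height", RESOLUTIONS)
def test_homepage_renders_at_resolution(browser, width, height):
    browser.set_window_size(width, height)
    browser.get("https://example.com")     # stand-in URL
    assert "Example" in browser.title      # stand-in check
```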

You get the picture. The standard is at best a guideline or goal, and at worst a dangerous simplification. When people talk of a standard desktop, server or 'client', think like a tester and ask "What standard is it?" and "How standard is it?" As a tester, those 30% of users who don't have a 'Euro' key on their keyboard (an easy difference to find) might do things a little differently. Or maybe the new desktop-support lead in the New York office has enabled the Windows firewall on all the desktop systems; how's your application going to handle that?

Comments

  1. Excellent analysis as always. Worth pointing out that the machines that usually deviate furthest from the 'standard build' are the ones the developers themselves are using to write the software. These tend to be higher spec and often have local admin rights. Developers will therefore usually have the latest updates, newest browser versions and so forth. We then wonder why the stuff we build doesn't work right when used by our production users!

