
Build, Test, Ship, Learn, Rinse & Repeat.

Ever feel like your team is in a deadlock? The product owner wants Gizmo+ to be shipped, your senior engineers are split between grokking Gizmo+ and fixing Widget++. Meanwhile the SDETs are frantically updating automated checks/BDD scripts, and exploratory testers are uncovering that Widget++ and Gizmo+ should have been named ...+10, given the number of surprise bonus features they are finding. As a consequence, feature delivery can start to slow, and quality is inevitably hit as difficult decisions are made on what to fix.

The typical reactions to such a situation can depend on your project's context, but to highlight a few common ones:
  • Ramp up team size.
  • Push back on deadlines.
  • Push back on new features.
  • Delay releases until 'it all gets sorted' ...
I don't have to break it to you that these options are 'far from optimal'. In summary, they all revolve around costing more and delivering less (from my time as a programme manager I can tell you: that's what we call a hard sell).

I won't claim there is a 'magic bullet', because there isn't. But let's try and break the problem down a little:
  1. We can't seem to get enough stuff out the door
  2. Our definition of 'enough stuff' seems to be growing
  3. What stuff does get 'done', often needs 'redoing'
  4. GO TO (1)
Speak to your team, and you'll probably find a common feeling is that they are 'overwhelmed' and 'have too much to do'. Hence the instinct many people have to just 'ramp up the team'. But you can interpret those statements in two ways. Another way to see the problem is that they 'have too much to do [all at once]'.

A good analogy is the dishwasher. We don't have a large kitchen, so we have a small, or to be more precise, a slimline dishwasher. It's a top-of-the-range model, complete with a confusing array of space-age controls and an ability to run extra quiet. But it doesn't have enough space to fit a whole day's worth of dishes, cups etc. Now, I could 'ramp up' to a bigger dishwasher. But that's going to get messy and expensive: I'll have to start plumbing in a new washer and remodeling the kitchen, replacing other equipment and fittings that work just fine right now.

So my next option is to casually disregard the sensible arrangement of dishes suggested in the manual, and load the dishes in to something approaching the density of a super-dense collapsing star. As you might imagine, the cleaning ability of the dishwasher is somewhat reduced, and a lot of the dishes end up needing to be re-done in the morning. What's even worse, I can't easily tell which dishes were cleaned and which are providing a nutritious environment for bacteria. I have to awkwardly examine each dish in turn. As each dish was crammed in at the last minute in an undocumented free-for-all, I can't even reliably automate some tests/checks to help with this.

I lose many of the benefits of my machine as well. It's no longer quiet and efficient, hugging a tree, healing my karma and cleaning my dishes in one go. Instead, I have to switch it to the options that translate roughly as 'noisy and inefficient' and 'I hate the planet'.

The following day, I of course have a slightly larger pile of dishes to clean: the new day's dishes plus the failure-demand I inherited from the day before. Just as with your real feature releases, this means more work and more reputational harm for your team.

Why not run the dishwasher twice? After breakfast, fill the system to a level at which it actually delivers, and run it. If you haven't quite filled all the slots, that's not a problem: run it anyway. Why? Because you are evening out the flow of dishes. By under-loading the dishwasher in the morning, you don't have to over-load it in the evening. Inspecting the product becomes quicker and easier to do.


Your next release cometh.

This approach provides a steady cadence for your engineers and testers. The tsunami of each big cycle becomes a more manageable ebb and flow. Getting those fixes and features out cleanly and regularly provides prompt, frequent feedback to the team.

Did we develop something wrong, or miss a bug? We'll know sooner, and can adjust how the team works to stop it happening again [sooner]. Leave it a while, and soon the list of missed and broken things becomes something to have long and painful meetings about.


Frequent releases can help deliver this calmer, more stable development and test process, where some features are delivered sooner and the team isn't trying to build a large, complicated system with every release. They can focus on developing and testing a small set of changes against the backdrop of a relatively stable system.


