Shutter Sync, when failure provides enlightenment

Shutter sync is an interesting artefact generated when we video moving objects. Take a look at this video of a helicopter taking off:

Notice how the boats are moving as normal, but the rotors appear to be barely moving at all. This isn’t a ‘Photoshop’. It’s an effect of the video camera’s frame rate matching the speed and position of the rotors. Each time the camera takes a picture, or ‘frame’, the rotors happen to be in approximately the same relative position.

The regular and deterministic behaviour of both machines enables the helicopter to appear to be both broken and flying. The rotors don’t appear to be working, while other evidence suggests they are providing all the lift required.

What's so exciting is that this tells us something useful, even while appearing to be a flaw or failure. If we assume the rotors turn at a constant rate, we can estimate a series of possible values for their speed from the video’s frame rate.
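
As a rough illustration, here is a minimal Python sketch of that estimation. The frame rate (25 fps) and blade count (5) are assumed, illustrative values, not taken from the video. The idea: a rotor with n identical blades looks frozen whenever it advances a whole multiple of 1/n of a revolution between consecutive frames.

```python
FRAME_RATE = 25.0   # frames per second (assumed, illustrative)
NUM_BLADES = 5      # identical blades on the main rotor (assumed)

def candidate_rpms(frame_rate: float, num_blades: int, max_k: int = 6) -> list[float]:
    """Rotor speeds at which the blades would appear frozen on video.

    The blades look stationary whenever the rotor advances exactly
    k/num_blades revolutions between consecutive frames, for any
    whole number k >= 1.
    """
    return [60.0 * frame_rate * k / num_blades for k in range(1, max_k + 1)]

if __name__ == "__main__":
    for rpm in candidate_rpms(FRAME_RATE, NUM_BLADES):
        print(f"{rpm:7.1f} rpm")
    # Prints 300, 600, 900, ... rpm. The video alone can't tell us
    # which one is right, but it narrows an infinite range of
    # possibilities down to a short list of candidates.
```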

Your automated checks/tests can exhibit this too. Take, for example, a check that often or always ‘fails’, yet when you examine the software with other tools the problem disappears.
This might be a probe effect - that is, the ‘bug’ may only happen because of the testing tool. This is actually quite common. It was a bugbear of mine in the days of pre-WebDriver browser automation, e.g. Selenium RC, which inserted a lot of JavaScript into the page - often resulting in erroneous behaviour.

The ‘failure’ could also be a race condition. The regular, systematic behaviour of the testing framework aligns almost perfectly in time with the software being tested. The checking code sees the problem frequently and repeatedly - because it always samples in the narrow window of time when the problem is present.
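
To make that concrete, here is a small, self-contained Python sketch - all names and timings are invented for illustration. The ‘system’ is briefly inconsistent after each periodic update; a check that polls at the same period, and happens to start inside that window, reports a failure on every single poll:

```python
UPDATE_PERIOD_MS = 100   # system refreshes every 100 ms (invented value)
INCONSISTENT_MS = 5      # ...and is inconsistent for the first 5 ms

def system_is_consistent(t_ms: int) -> bool:
    """True except during the short window just after each refresh."""
    return (t_ms % UPDATE_PERIOD_MS) >= INCONSISTENT_MS

# A check polling every 100 ms that happens to start 2 ms after a
# refresh lands inside the 5 ms window on *every* poll - a 100%
# 'failure' rate for a state that exists only 5% of the time.
for i in range(10):
    t = 2 + i * UPDATE_PERIOD_MS
    print(f"t={t:5d} ms  consistent={system_is_consistent(t)}")
```

Nudge the polling offset or period slightly and the ‘failure’ seems to vanish - just as nudging the camera’s frame rate would set the rotors spinning again.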

Automated test/check ‘failures’ like the above are often dismissed immediately as things to work around or fix. While it might make sense to ‘clean’ these from our results, we could miss potentially valuable avenues for testing. The ‘failing’ test is presenting us with information, and that information might be more valuable than a clean pass/fail result - especially if the apparent failures have an inconsistency of this kind.

Just as with shutter sync, where we can determine the behaviour of the rotors from the video, we can glean useful information from the ‘failing’ test/check. Investigation of these test ‘failures’ might show that the GUI is not quite in sync with the database, other users’ screens, and so on. Maybe when the UI suggests an action is done, the system could actually still be writing data for a short while. Or two events that, as far as the API shows, happen sequentially might in reality happen at the same time.
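
One way to start that investigation is to compare the timestamps the API reports with the order it reports events in. A hypothetical sketch - the event names, timestamps and threshold below are all invented for illustration:

```python
from datetime import datetime

# Hypothetical audit-log entries, in the order the API returned them.
events = [
    ("order.created",  datetime(2019, 1, 7, 10, 0, 0, 120000)),
    ("stock.reserved", datetime(2019, 1, 7, 10, 0, 0, 450000)),
    ("order.paid",     datetime(2019, 1, 7, 10, 0, 0, 451000)),
]

SAME_INSTANT_S = 0.005  # treat gaps under 5 ms as simultaneous (assumed)

for (name_a, t_a), (name_b, t_b) in zip(events, events[1:]):
    gap = (t_b - t_a).total_seconds()
    if gap < SAME_INSTANT_S:
        print(f"'{name_a}' and '{name_b}' are only {gap * 1000:.1f} ms apart: "
              f"reported as sequential, but effectively simultaneous.")
```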
