
Shutter Sync, when failure provides enlightenment

Shutter sync is an interesting artefact that appears when we video moving objects. Take a look at this video of a helicopter taking off:

Notice how the boats are moving as normal, but the rotors appear to be barely moving at all. This isn’t a ‘Photoshop’. It’s an effect of the video camera’s frame rate matching the speed and position of the rotors. Each time the camera takes a picture, or ‘frame’, the rotors happen to be in approximately the same relative position.

The regular and deterministic behaviour of both machines enables the helicopter to appear to be both broken and flying. The rotors don’t appear to be working, while other evidence suggests they are providing all the lift required.

What's so exciting is that this tells us something useful, as well as apparently being a flaw or failure. If we assume the rotors move at a constant speed, we can estimate a series of possible values for that speed from the video alone.
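As a rough illustration of that estimate (the numbers below are assumptions for the sake of the sketch, not measured from the video): suppose the camera records at 25 frames per second and the rotor has 4 blades. For the blades to look frozen, the rotor must advance by a whole number of blade positions between frames, which gives a family of candidate speeds rather than a single answer.

    FPS = 25        # assumed camera frame rate, frames per second
    BLADES = 4      # assumed number of rotor blades

    # Candidate rotor speeds (rpm) that would make the blades appear frozen:
    # k blade-positions per frame => k / BLADES revolutions per frame.
    candidates_rpm = [(k / BLADES) * FPS * 60 for k in range(1, 6)]

    print(candidates_rpm)   # [375.0, 750.0, 1125.0, 1500.0, 1875.0]

The video narrows the true speed down to one of those candidates; other evidence (engine sound, the model of helicopter) could narrow it further.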

Your automated checks/tests can exhibit this too. Take, for example, a check that often or always ‘fails’, yet when you examine the software with other tools the problem disappears.
This might be a probe effect - that is, the ‘bug’ may only happen because of the testing tool. This is actually quite common. It was a bugbear of mine in the days of pre-WebDriver browser automation, e.g. Selenium RC, as RC inserted a lot of JavaScript into the page - often resulting in erroneous behaviour.

The ‘failure’ could also be a race condition. The regular, systematic behaviour of the testing framework interacts with near-perfect timing with the software being tested. The checking code sees the problem frequently and repeatedly, as it always checks in the narrow window of time when there is a problem.
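Here is a contrived sketch of that interaction, in Python. The save_profile operation and its ~50ms write delay are invented purely for illustration, not taken from any real system.

    import threading
    import time

    profile = {"saved": False}

    def save_profile():
        # The call returns immediately, as the UI/API would report 'done'...
        def finish_write():
            time.sleep(0.05)          # ...but the real write lands ~50ms later
            profile["saved"] = True
        threading.Thread(target=finish_write).start()

    def check_profile_saved():
        save_profile()
        # The check runs straight away, so it reliably lands inside the narrow
        # window before the write completes - and 'fails' every single run.
        assert profile["saved"], "profile not saved (yet?)"

    try:
        check_profile_saved()
        print("check passed")
    except AssertionError as error:
        print(f"check 'failed': {error}")

The check is deterministic in its ‘failure’ precisely because its timing is so regular - much like the camera and the rotors.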

Automated test/check ‘failures’ like the above are often dismissed immediately as things to work around or fix. While it might make sense to ‘clean’ these from our results, we could miss potentially valuable avenues for testing. The ‘failing’ test is presenting us with information. That information might be more valuable than a clean pass/fail result - especially if the apparent failures have an inconsistency of this kind.

Just as with shutter sync, where we can determine the behaviour of the rotors from the video, we can glean useful information from the ‘failing’ test/check. Investigation of these test ‘failures’ might show that the GUI is not quite in sync with the database or other users’ screens. Maybe when the UI suggests an action is done, the system could actually still be writing some data for a short while. Or two events that, as far as the API shows, happen sequentially in reality happen at the same time.
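One way to turn such a ‘failure’ into information, rather than simply retrying until it passes, is to measure the lag itself. A minimal sketch, assuming hypothetical trigger_action and read_record hooks into the system under test:

    import time

    def measure_settle_time(trigger_action, read_record, expected, timeout=5.0):
        # Return how long the data took to appear after the action reported 'done'.
        trigger_action()                      # e.g. the UI/API reports the action as done
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            if read_record() == expected:     # the data is finally visible
                return time.monotonic() - start
            time.sleep(0.01)
        return None                           # never settled within the timeout

    # Hypothetical usage, with fakes standing in for the real system:
    fake_store = {}
    lag = measure_settle_time(
        trigger_action=lambda: fake_store.update(status="done"),
        read_record=lambda: fake_store.get("status"),
        expected="done",
    )
    print(f"settled after {lag:.3f}s" if lag is not None else "never settled")

A distribution of settle times across many runs tells you far more about the system than a single pass or fail ever could.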
