
Is your test automation actually agile? A Guardian Content API example.


In my last post I discussed how test automation could be used to do things that I couldn't easily do unaided: in that example, executing thousands of news 'content searches' and helping me sort through the results. With the help of some simple test automation I found some potential issues with the results returned by the REST API.

In that case, I started out with the aim of implementing a tool. But your testing might not lead you that way; often your own hands-on investigation finds an issue, and you don't know how widespread it is. Is it a one-off curiosity, or a sign of a broader problem?

Again, this is where test automation can help, and, if done well, without becoming an implementation or maintenance burden. Many test automation efforts are blind to the very Agile idea of YAGNI, or You Ain't Gonna Need It. They presume to know everything that needs to be tested in advance, and invest most of their time writing 'tests' blindly against a specification that is as yet unimplemented. This example shows how simple test automation, driven by your own feedback during testing, can be very powerful.

The test:

I had the idea that the Guardian Content API might be overly fussy about its inputs. Often software is written [and tested] using canned data that's designed 'to work'. These inputs usually conform to a happy path, and even then only a subset of the data that could be considered 'happy path' is used.

Using the Guardian's own HTML GUI (the API Explorer), which lets you easily query the REST API by hand, I tried a few quick tests. These included some random text as well as a few characters that are likely to occur in real text but that I suspected would not be present in the usual canned test data. For example, a single SPACE character.

That quick test of the SPACE character highlighted an issue. Entering a single SPACE character into the Tags Search in the API Explorer appeared to cause the HTML GUI to return no response; the API Explorer appeared to hang. At that point I didn't know the cause: the issue could have been a problem with this developer GUI rather than the API itself.

A closer examination using Firebug clearly identified the cause as an HTTP 500 error from the server.


Failing query using the Guardian's Tags Search API - Firebug showing the HTTP 500 error.

I could have reported this one issue: that despite the documentation stating any free text is OK for this field, a simple space character can expose a failure. But using some simple automation I was able to better define the extent and distribution of the issue. For example, is it a general problem with entering single characters? Does it affect only one part of the API?

With a minor script change, my previous Ruby API tool could report error responses and whether a JSON response object was returned by the server. (A simple cURL-based shell script could just as easily have done the job.) I also wrote a little script to output every ASCII character:

# Emit every ASCII character (codepoints 0x0000 to 0x007F), one per line.
lower_bound = 0x0000
upper_bound = 0x007F

lower_bound.upto(upper_bound) do |codepoint|
  puts '' << codepoint
end

The output of the above was directed to a file and used as the input for my API script. The script now reads one ASCII character at a time and uses that character to query the Guardian's Content API. As I had found the issue in the Tags Search, I also ran the script against the Guardian's Tags Search API.
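
For illustration, a minimal sketch of that kind of probe might look like the code below. This is not the original tool: the endpoint paths, the q parameter and the api-key placeholder are assumptions on my part, so check the current Guardian Content API documentation before relying on them.

require 'net/http'
require 'uri'
require 'json'
require 'cgi'

API_KEY = 'YOUR-API-KEY' # hypothetical placeholder, not a real key

# Query one endpoint with one piece of text; return the HTTP status code
# and whether the body parsed as JSON.
def probe(endpoint, text)
  uri = URI("https://content.guardianapis.com/#{endpoint}" \
            "?q=#{CGI.escape(text)}&api-key=#{API_KEY}")
  response = Net::HTTP.get_response(uri)
  json_ok = begin
              JSON.parse(response.body)
              true
            rescue JSON::ParserError
              false
            end
  [response.code, json_ok]
end

# Read one character per line from the generated file and query both services.
File.readlines('ascii_chars.txt', chomp: true).each do |char|
  next if char.empty? # control characters such as CR/LF collapse to empty lines
  %w[search tags].each do |endpoint|
    code, json_ok = probe(endpoint, char)
    puts "#{endpoint}\tU+#{char.ord.to_s(16).rjust(4, '0')}\t#{code}\tjson=#{json_ok}"
  end
end

Tab-separated output like this makes it easy to filter the results afterwards, for example down to just the non-200 responses.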

This is ideal ground for test automation: there are 128 ASCII characters and I'm examining 2 services, making 256 queries. That's too many to do by hand, but trivial for a simple test automation script. Many characters that are not in ASCII are nonetheless very common in English, and will therefore be present in the bodies and headlines of the Guardian's content; a simple example is the € (euro) symbol. The script would also allow me to query those many [millions of] non-ASCII characters if my testing suggested it might be fruitful.
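
As a rough sketch (the ranges below are my own illustrative choice, not ones used in this testing), the same generator loop extends beyond ASCII simply by widening the bounds:

# Emit a small, illustrative sample of non-ASCII codepoints, one per line.
# [n].pack('U') produces the UTF-8 encoding of codepoint n,
# e.g. 0x20AC becomes the € (euro) symbol.
[0x00A0..0x00FF, 0x20AC..0x20AC].each do |range|
  range.each do |codepoint|
    puts [codepoint].pack('U')
  end
end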

The results of this initial run were quite interesting:

Table indicates non-HTTP 500 response codes and the corresponding queries.

The results, when filtered to show only non-HTTP 200 responses, clearly indicate that the Tags API is less robust than the Content Search API over these inputs. For example, the space character produced no error in the Content Search but did in the Tags Search. The same is true of the horizontal tab; both are characters that might be present in 'any free text'.

To me as a tester, the lack of consistency between the two APIs is the most striking factor. The two systems clearly handle these inputs differently. That information is invaluable: we can use it immediately in our next round of testing, as well as in discussions with programmers and product owners, asking questions such as:

  1. What code is shared between the two services? There are clearly some differences.
  2. How do the two APIs handle these characters in combination, with each other or with 'typical' English words?
  3. As error handling code is itself often flaky, what new bugs can I find in the Tags Search API?
  4. How badly will the APIs, in particular the Tags API, handle non-ASCII characters? Should the APIs handle all UTF-8 codepoints without exposing failures?
  5. As far as the product owner is concerned, what is the expected behaviour when a character is not accepted?

Lightweight, easy-to-build test automation that lets the team quickly form a mental model of how the software actually works is clearly valuable. I'm using the computer to do the laborious work it's good at, extending my reach as a tester and helping me see the bigger picture: showing that this is more than one isolated character being handled badly, and that the Tags Search API is in fact generally much more prone to failure. This more exploratory automation frees me to do analysis and have face-to-face communication with team members and product owners, allowing me to adapt quickly to those discussions and to how I see the software behaving. A fundamentally more Agile (and agile) approach.

Comments

  1. Hi Pete,

    This is excellent stuff. Great to see you exploring but using the tools to help you, rather than doing a massive up front framework design for the automation.

    Really enjoying these posts.

    Rob..

