Test Automation Architectures

I recently read a well-written and helpful paper by Doug Hoffman titled Test Automation Architectures: Planning for Test Automation. You can find it online at http://softwarequalitymethods.com/papers/autoarch.pdf

It covers many key points that need to be considered if you want to have effective and useful automated tests. Thank you, Doug, for writing it so many years ago and for sharing it.

Test Automation Interfaces – the glue between your tests and the app

Over the last seven months I have been talking to various people about how test automation ‘works’ and how the way it works affects the viability of their test automation. In December 2012, LogiGear published an abridged version of an article I wrote on the topic: http://www.logigear.com/magazine/mobile-testing/test-automation-interfaces-for-mobile-apps/. I hope you find the article informative and helpful.

I sometimes find analogies help people to grasp concepts and ideas that I might otherwise struggle to communicate effectively. So here are a couple of analogies for test automation interfaces:

  1. They are the glue between your automated tests and the app you want to test. By picking the most appropriate glue for the job, your tests are more likely to stick around and work effectively. (The sketch below contrasts two kinds of glue.)
  2. The interface is similar to the way Velcro works: the hooks bind with the loops to establish an effective connection.
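
To make the first analogy concrete, here is a minimal sketch in Java of the same application reached through two different kinds of ‘glue’: the user interface (via Selenium WebDriver) and plain HTTP (via the standard library). The URL is a placeholder, and this is an illustration of the trade-off rather than a recommendation: which interface you pick changes how realistic, how fast, and how stable your tests are.

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class InterfaceChoiceSketch {

        // UI-level glue: drives the app through the interface a user sees.
        // Realistic, but slower and more sensitive to page changes.
        static String titleViaUi(String url) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get(url);
                return driver.getTitle();
            } finally {
                driver.quit();
            }
        }

        // HTTP-level glue: checks the same application beneath the UI.
        // Faster and more stable, but blind to rendering problems.
        static int statusViaHttp(String url) throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(url).openConnection();
            connection.setRequestMethod("GET");
            return connection.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            String url = "https://example.com/"; // hypothetical app under test
            System.out.println("UI sees title: " + titleViaUi(url));
            System.out.println("HTTP sees status: " + statusViaHttp(url));
        }
    }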

I have some ideas and plans to expand the initial article into a small book on effective software test automation. Email me if you’d like to encourage that work. My email address is my name (julianharty) at Google’s fine email service: gmail.com. I assume a human will be able to construct the correct email address from this information 🙂

Slides for presentation at QA&TEST 2011

Here is the link to the slides I presented at QA&Test 2011. As ever, these slides are the latest version of the work on UX Test Automation.

UX Test Automation for QAandTEST 2011 (27 Oct 2011)

The main content is identical to the presentation planned for EuroSTAR 2011 (21st to 24th November 2011). I may revise the main content again by the time of EuroSTAR; if so, I’ll post the updated material online.

Update: I received the best presentation award at the conference for this presentation 🙂

Slides for a talk I presented at Microsoft Redmond

Designs and Needs: adding perspective to our testing (11 Oct 2011). This presentation was given at Microsoft’s Redmond office. The material is not specific to any company or web site; rather, I present concepts, ideas, and tools which should be generally relevant to people who want to create software that suits the needs of a wide range of users.

The aim is to encourage us, as developers, designers, testers, etc., to bring additional perspectives to our software rather than focusing purely on ‘functionality’.

Preview of material for StarWest 2011

I hope to meet some of you at StarWest in October, where I’m presenting a full-day tutorial on testing mobile phone applications on the Tuesday and a track session on pushing the boundaries of test automation on the Wednesday.

If you’d like to come to the conference, the web site is http://www.sqe.com/starwest/

I have made the materials available online and you are welcome to download and use them. The material on ‘pushing the boundaries’ is on this site at UX Test Automation for StarWest 2011. The material for the tutorial is hosted at http://code.google.com/p/mwta/downloads/list

Pushing the Boundaries of Test Automation

One of my current responsibilities is to find ways to automate as much as practical of the ‘testing’ of the user experience (UX) of complex web-based applications. In my view, full test automation of UX is impractical and probably unwise; however, we can use automation to find potential problems[1] in UX, even of rich, complex applications. Colleagues and I are working to find ways to use automation to discover various types of these potential problems. Here’s an overview of some of the points I made. I intend to extend and expand on my work in future posts.

In my experience, heuristics are useful in helping to identify potential issues, and several people have managed to create test automation that essentially automates such heuristics.
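
To illustrate, here is one such heuristic automated in Java with Selenium WebDriver: ‘every image should have alternative text’. This is my sketch rather than code from any particular project, and the page URL is a placeholder. Note that the check only surfaces potential problems for a person to judge; it cannot prove the page is fine.

    import java.util.List;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class AltTextHeuristic {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/"); // hypothetical page to examine
                List<WebElement> images = driver.findElements(By.tagName("img"));
                for (WebElement image : images) {
                    String alt = image.getAttribute("alt");
                    if (alt == null || alt.trim().isEmpty()) {
                        // Report a *potential* problem; a person decides whether
                        // this particular image genuinely needs alternative text.
                        System.out.println("Possible UX issue, image without alt text: "
                                + image.getAttribute("src"));
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }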

Examples of pushing the boundaries

You might notice that all the examples I’ve provided are available as free, open-source software (FOSS). I’ve learnt to value open source because it reduces the cost of experimentation and allows us to extend and modify the code, e.g. to add new heuristics relatively easily (you still need to be able to write code; however, the code is freely and immediately available).
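
As a sketch of that extensibility (again my illustration, not code from any of the projects), heuristics can be kept as a named collection of predicates, so adding a new check is a couple of lines and never touches the code that walks the page or reports the findings:

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class HeuristicRegistry {
        // Each heuristic is a named predicate over an element: true means
        // "potential problem worth a closer look".
        static final Map<String, Predicate<WebElement>> IMAGE_HEURISTICS =
                new LinkedHashMap<>();

        static {
            IMAGE_HEURISTICS.put("missing alt text",
                    image -> image.getAttribute("alt") == null
                            || image.getAttribute("alt").trim().isEmpty());
            // A new heuristic slots in with one more entry:
            IMAGE_HEURISTICS.put("missing src",
                    image -> image.getAttribute("src") == null);
        }

        static void checkImages(WebDriver driver) {
            List<WebElement> images = driver.findElements(By.tagName("img"));
            for (WebElement image : images) {
                IMAGE_HEURISTICS.forEach((name, isSuspect) -> {
                    if (isSuspect.test(image)) {
                        System.out.println("Potential problem (" + name + ")");
                    }
                });
            }
        }
    }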

Automation is (often) necessary, but not sufficient

Automation and automated tests can be beguiling, and paradoxically increase the chances of missing critical problems if we choose to rely mainly, or even solely, on the automated tests. Even with state-of-the-art automated tests (the best we can do across the industry), I still believe we need to ask additional questions about the software being tested. Sadly, in my experience, most automated tests are poorly designed and implemented, which increases the likelihood of problems eluding the automated tests.

Here are two articles that describe some of the key concerns.

The first describes how people can be biased into over-reliance on automation. It is called “Beware of Automation Bias”, by M.L. Cummings (2004). The article is available online at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2634&rep=rep1&type=pdf

The second helped me understand how testing helps us work out which questions to ask (of the software), and that we need a process to identify the relevant questions. The article is “The Five Orders of Ignorance”, by Phillip G. Armour, CACM, 2000: http://www-plan.cs.colorado.edu/diwan/3308-s10/p17-armour.pdf

Note: the essence of this material was presented as a lightning keynote at the Belgium Testing Days conference on 15th February 2011.

[1] ‘Potential problems’ is one term I use to avoid getting into arguments about whether a problem is a bug or not. I prefer the term ‘undesirable effects’, since software (and things in general) may meet the requirements but still have undesirable effects. Here I’m happy to focus on potential problems; perhaps I’ll write a post on the topic of undesirable effects soon…

Improving the maintainability of automated tests

Introduction

Lots of companies and teams carry a burden of unreliable, problematic automated tests that are troublesome and time-consuming to maintain. We need ways to address these poor-quality tests: ways to recover from the current unhealthy situation to a healthy environment where the tests are reliable, trustworthy, and easy to maintain as the underlying software changes.

For new test automation projects and teams with the ‘luxury’ of starting fresh, you might also find these topics salutary: advance warning of a situation you might end up in if you don’t apply good test automation practices from the outset. Don’t say you weren’t warned 🙂

Here are the initial topics I’m going to cover:

  • Re-engineering and refactoring automated tests
  • Understanding critical success factors for test automation
  • Applying design patterns
  • Designing and structuring your test automation code
  • Coping with large volumes of ‘legacy’ and ‘broken’ tests (including record & playback)
  • Removing boiler-plating and dumbed-down interfaces
  • Making the interaction with web pages resilient and robust (using IDs, working with developers, etc.) — see the sketch after this list
  • Don’t be fooled (again): avoid being beguiled by automated tests
  • Sunk by dependencies (e.g. on live back-end servers): coping with environmental issues
  • Slow tests
  • Modelling techniques
  • The intersection of automation and in-person testing
  • Three possible outcomes of a test
  • Writing readable code
  • Stringing tools together
  • Patterns for data creation, reuse, sharing and cleanup
  • Designing tests to safely run in parallel
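
To give a flavour of one of those topics, here is a minimal sketch of making interaction with web pages resilient. It is a fragment of a ‘Page Object’: only this class knows how elements are located, and it locates them by stable ids agreed with the developers, so the tests survive layout changes that break locators tied to page structure. The ids and page are hypothetical.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginPage {
        private final WebDriver driver;

        // Hypothetical ids; the real values come from your application,
        // ideally agreed with (or added by) the developers.
        private static final By USERNAME = By.id("username");
        private static final By PASSWORD = By.id("password");
        private static final By SIGN_IN  = By.id("sign-in");

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Tests call this method and never mention locators themselves,
        // so a changed page means one fix here rather than in every test.
        public void signIn(String user, String password) {
            driver.findElement(USERNAME).sendKeys(user);
            driver.findElement(PASSWORD).sendKeys(password);
            driver.findElement(SIGN_IN).click();
        }
    }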

One of my aims is to create a useful, succinct guide to help you, and others, create and establish useful, readable and maintainable automated tests for your software projects. I’m drawing on the work, experience and expertise of various people in the software testing communities, and welcome your input and ideas.

Lean Software Testing

By Julian Harty
Originally written on 02 October 2010
This idea was first mooted at StarWest 2010 last week, when I was looking at stacks of ‘Lean’ books while discussing some of the many flaws I’ve seen in test automation. Here’s essentially the original draft, although I’ve edited it slightly and added a couple of examples to make the article more concrete. I expect to write more about the topic as I continue working in this area.

“Sufficient to the purpose of Test Automation”

There are plenty of books on Lean Manufacturing, Lean Software Development, etc. However, there doesn’t seem to be much thought given to Lean Test Automation, and given the amount of poorly designed and implemented test automation work I’ve seen and experienced, perhaps the time to develop the concepts, ideas and practices for Lean Test Automation is overdue. We can borrow and adapt existing work on Lean Software Development as and when it applies.

“Sufficient to the purpose”

Let’s get started. Our challenge is to develop the minimum Test Automation that provides significant benefit to the rest of the project, and to guard against:
  • Gilding the test code, e.g. by calling it a ‘framework’ in the vain hope of making it more than it should be (avoid the beguiling siren of trying to convince your managers and your peers that you’re creating something ‘reusable’ that others will use; virtually all the test automation frameworks I’ve seen aren’t even used by the person who created them six months later!)
  • Assuming the tests check more than they actually do. I’ve yet to see an omnipotent test suite: they don’t catch all the possible classes of bugs, and many catch none, not even their own bugs!
  • Assuming our automated test code is perfect. Luke 6:41-42 reminds us to first remove the plank from our own eye before removing the speck from our “brother’s”. So make sure our code is well written and useful, and know its flaws, to keep us humble and aware of our fallibility. Sadly, there are many examples where the test automation code is so poorly designed and written that it makes the situation worse than having no automated tests at all.

Heuristics

Heuristics are useful, but fallible, guidelines that help us produce useful work. Here are some heuristics for Test Automation.
  • Minimal code, e.g. 5 to 10 lines, is more likely to provide a positive return on investment (ROI) than spending months or years writing a large test automation framework and expecting others to use it (a sketch of such a minimal check follows this list).
  • Test automation should be able to detect known faults, bugs, and issues. At one extreme, Test-Driven Development (TDD) practices force the author of an automated test to make sure it fails first, before writing the code that will make the test pass.
  • Some tests are inappropriate to automate, or not worth the effort.
  • Automation can help improve our testing in many ways, ranging from generating test data, through test automation, to analysing and presenting the test results.
  • Test Automation is fallible. Consider how automated tests can be ‘fooled’ into reporting success when they should have reported a problem (e.g. because they didn’t check adequately), or into reporting failure when they should have passed. The aim here is not [necessarily] to make the tests complete or perfect, since doing so might cost more than the value these tests provide; rather, the aim is to quickly identify (and possibly address) weaknesses in the current implementation of the tests.
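
Two of these heuristics fit in a few lines of code. Below is a deliberately minimal sketch (JUnit 4; the add function is a hypothetical stand-in for real code under test) contrasting a near-vacuous check that can be fooled with a one-line check that actually detects a wrong answer, and which, TDD-style, failed first while the code under test was still a stub:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class LeanCheckSketch {
        // Hypothetical code under test.
        static int add(int a, int b) {
            return a + b;
        }

        @Test
        public void checkThatCanBeFooled() {
            // Passes even if add() returns 0, -42, or ignores an argument;
            // it reports success without really checking anything.
            assertTrue(add(2, 2) != Integer.MIN_VALUE);
        }

        @Test
        public void checkThatDetectsAKnownFault() {
            // Fails for any wrong answer. It failed first, while add() was
            // a stub returning 0, which showed the test can detect a fault.
            assertEquals(4, add(2, 2));
        }
    }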

Examples of Lean Test Automation

Here are a couple of personal examples where I managed to use test automation to find significant issues that the development teams then fixed (one measure of the value of the test automation).

  1. Writing a short script in Perl, about 10 lines of code, that helped expose a fundamental security flaw in a nationwide system, back in 2005. The work was presented at CAST 2006 in Indianapolis, USA. The proceedings used to be available from http://www.associationforsoftwaretesting.org/ but they’ve reorganised the site and don’t seem to have it available currently. I’ll see if I can make it available again.
  2. Writing about 50 lines of Java to dynamically navigate web pages and web applications. These tests found several significant bugs on a variety of projects at work. The code is freely available at http://code.google.com/p/web-accessibility-testing/

In both cases, the code was relatively simple and relied on existing libraries to interact with the systems being tested. The effort to write the code was low (hours) compared to the value running the tests provided.
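
For flavour, here is a sketch in the same spirit as the second example; it is my reconstruction for illustration, not the project’s actual code, and the start URL is a placeholder. It collects the same-site links from a start page, visits each one once, and flags pages whose title hints at an error:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class DynamicNavigationSketch {
        public static void main(String[] args) {
            String start = "https://example.com/"; // hypothetical site under test
            WebDriver driver = new ChromeDriver();
            try {
                // Collect the hrefs first, so navigating away doesn't
                // invalidate the WebElement references ('stale element' errors).
                driver.get(start);
                List<String> sameSiteLinks = new ArrayList<>();
                for (WebElement link : driver.findElements(By.tagName("a"))) {
                    String href = link.getAttribute("href");
                    if (href != null && href.startsWith(start)) {
                        sameSiteLinks.add(href);
                    }
                }
                // Visit each page once and apply a simple heuristic check.
                Set<String> visited = new HashSet<>();
                for (String href : sameSiteLinks) {
                    if (!visited.add(href)) {
                        continue;
                    }
                    driver.get(href);
                    String title = driver.getTitle();
                    if (title == null || title.toLowerCase().contains("error")) {
                        System.out.println("Potential problem at " + href);
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }

As with the original 50 lines of Java, almost all the heavy lifting is done by an existing library; the test code itself stays small enough to write in hours and to read at a glance.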

Further Reading

“Applying lean concepts to software testing”, by Matt Heusser: http://go.techtarget.com/r/12859187/10638937