By Julian Harty
Originally written on 02 October 2010
This idea was first mooted at StarWest 2010 last week when I was looking at stacks of ‘Lean’ books at the same time as discussing some of the many flaws I’ve seen in test automation. Here’s essentially the original draft, although I’ve edited it slightly and added a couple of examples to make the article more concrete. I expect to write more about the topic as I continue working in this area.
“Sufficient to the purpose of Test Automation”
There are plenty of books on Lean Manufacturing, Lean Software Development, etc. However, there doesn't seem to be much thought given to Lean Test Automation, and given the amount of poorly designed and implemented test automation I've seen and experienced, the time to develop the concepts, ideas and practices for Lean Test Automation is perhaps overdue. We can borrow and adapt existing work on Lean Software Development as and when it applies.
“Sufficient to the purpose”
Let's get started. Our challenge is to develop the minimum Test Automation that provides significant benefit to the rest of the project, while guarding against:
- Gilding the test code, e.g. by calling it a 'framework', in the vain hope of making it more than it should be (avoid the beguiling siren song of trying to convince your managers and your peers that you're creating something 'reusable' that others will use; virtually all the test automation frameworks I've seen aren't even used by the person who created them 6 months later!)
- Assuming the tests check more than they actually do. I've yet to see an omnipotent test suite: they don't catch all the possible classes of bugs, and many catch none, not even their own bugs!
- Assuming our automated test code is perfect. Luke 6:41-42 reminds us to first remove the plank from our own eye before removing the speck from our 'brother's'. So make sure our code is well written and useful, and know its flaws, to keep us humble and aware of our fallibility. Sadly, there are many examples where the test automation code is so poorly designed and written as to make the situation worse than having no automated tests at all.
Heuristics
Heuristics are useful, but fallible, guidelines that help us produce useful work. Here are some heuristics for Test Automation.
- Minimal code, e.g. 5 to 10 lines, is more likely to provide a positive return-on-investment (ROI) than spending months or years writing a large test automation framework and expecting others to use it.
- Test automation should be able to detect known faults, bugs, and issues. At one extreme, Test Driven Development (TDD) practices force the author of an automated test to make sure it fails first, before writing the code that will cause the test to pass [or 'pass the test']. (A minimal sketch of this 'fail first' idea follows this list.)
- Some tests are inappropriate to automate, or not worth the effort.
- Automation can help improve our testing in many ways, ranging from using automation to generate test data, to analysing and presenting the test results.
- Test Automation is fallible. Consider how the automated tests can be 'fooled' into reporting success when they should have reported a problem, e.g. because they didn't check adequately, or into reporting failure when they should have passed. Here the aim is not [necessarily] to make the tests complete or perfect – doing so might cost more than the value these tests provide; rather the aim is to quickly identify (and possibly address) weaknesses in the current implementation of the tests. (The second sketch after this list shows how a test can be fooled.)
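As a concrete illustration of the 'fail first' heuristic, here is a minimal sketch in Java using JUnit 4 (an assumption; the PriceCalculator class and its discount rule are invented for this example and are not from any project mentioned here). The test, plus a deliberately empty stub, is written before the real logic, so the first run fails and proves the test can actually detect the missing behaviour.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Invented class for the example: a deliberately empty first
        // implementation so the test compiles but fails ('red') before
        // the real discount logic is written.
        static class PriceCalculator {
            double discountedPrice(double price) {
                return 0.0; // not implemented yet
            }
        }

        @Test
        public void appliesTenPercentDiscountAtOneHundred() {
            PriceCalculator calculator = new PriceCalculator();
            // Expect a 10% discount: 100.00 becomes 90.00
            assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
        }
    }

Only once this test has been seen to fail is the real discountedPrice written to make it pass.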
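And as an illustration of how an automated test can be 'fooled', here is another hypothetical JUnit sketch; the LoginService stub is invented to stand in for a system whose login is broken. The weak check still reports success because it only verifies that some response came back, while the stronger check correctly fails.

    import org.junit.Test;
    import static org.junit.Assert.assertNotNull;
    import static org.junit.Assert.assertTrue;

    public class LoginCheckTest {

        // Invented stand-in for the system under test: login is broken
        // and always returns an error page.
        static class LoginService {
            static String login(String user, String password) {
                return "<html><body>Internal error, please try again</body></html>";
            }
        }

        @Test
        public void weakCheckIsFooled() {
            String responseHtml = LoginService.login("testuser", "secret");
            assertNotNull(responseHtml); // passes even though the login failed
        }

        @Test
        public void strongerCheckIsNot() {
            String responseHtml = LoginService.login("testuser", "secret");
            assertTrue("expected a welcome page",
                    responseHtml.contains("Welcome, testuser")); // fails, correctly
        }
    }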
Examples of Lean Test Automation
Here are a couple of personal examples where I managed to use test automation to find significant issues that the development teams then fixed (one measure of the value of the test automation).
- Writing a short script in Perl, about 10 lines of code, that helped expose a fundamental security flaw in a nationwide system, back in 2005. The work was presented at CAST 2006 in Indianapolis, USA. The proceedings used to be available from http://www.associationforsoftwaretesting.org/ but they’ve reorganised the site and don’t seem to have it available currently. I’ll see if I can make it available again.
- Writing about 50 lines of Java to dynamically navigate web pages and web applications. These tests found several significant bugs on a variety of projects at work. The code is freely available at http://code.google.com/p/web-accessibility-testing/
In both cases, the code was relatively simple and relied on existing libraries to interact with the systems being tested. The effort to write the code was low (hours) compared to the value running the tests provided. A sketch of the second style of check follows below.
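The real code is in the project linked above; purely as a flavour of the approach, here is a hypothetical sketch using Selenium WebDriver's HtmlUnitDriver (an assumption for illustration; not necessarily the libraries the project uses) that loads a page and flags links with no link text, a common accessibility problem.

    import java.util.List;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    // Hypothetical sketch: walk the links on a single page and report
    // any that have no link text (a common accessibility problem).
    public class LinkTextCheck {
        public static void main(String[] args) {
            WebDriver driver = new HtmlUnitDriver();
            driver.get("http://www.example.com/"); // placeholder starting page
            List<WebElement> links = driver.findElements(By.tagName("a"));
            for (WebElement link : links) {
                if (link.getText().trim().isEmpty()) {
                    System.out.println("Link with no text: " + link.getAttribute("href"));
                }
            }
            driver.quit();
        }
    }

A real version would navigate across pages and apply more checks, but even this much is enough to start finding problems.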
Further Reading
Applying lean concepts to software testing, Matt Heusser http://go.techtarget.com/r/12859187/10638937
Great title and goal.
Several good points I took from the post above:
1. Lean SW Test Automation
– is well designed
– is "meaty"
– finds important bugs
2. Just like the software we test, our test automation should also be of high "quality".
I am a fan of software development, and thus of automation; I have yet to be convinced that everything cannot be automated. I still have the point of view of a developer: in the software world almost anything can be automated, as long as enough time is given to it.
A couple of thoughts and questions in my mind from my day-to-day experience:
1. There should still be automation for features that may change due to re-use of code; that is the purpose of regression tests, which should be automated because, in my practical QA world, code and components are re-used and changed and may break existing functionality. The typical cycle is test – roll – bugfix – test – reroll, across the 1st/2nd/3rd releases. Manual regression testing is not time-efficient and is tedious.
2. I am looking for the right architecture, and am exploring what the proper and efficient way of doing automated testing is.
3. Is there really no definite answer on the right balance of manual and automated testing?
My thoughts and 2 cents.