Originally written on 02 October 2010
“Sufficient to the purpose of Test Automation”
- Gilding the test code, e.g. by calling it a ‘framework’, in the vain hope of making it more than it should be. Avoid the beguiling siren song of trying to convince your managers and peers that you’re creating something ‘reusable’ that others will adopt: virtually all the test automation frameworks I’ve seen aren’t even used by the person who created them six months later!
- Assuming the tests check more than they actually do. I’ve yet to see an omniscient test suite: none catch all the possible classes of bugs, and many catch none at all, not even their own!
- Assuming our automated test code is perfect. Luke 6:41-42 reminds us to first remove the plank from our own eye before removing the speck from our “brother’s”. So make sure our code is well written and useful, and know its flaws, to keep us humble and aware of our fallibility. Sadly, there are many examples where test automation code is so poorly designed and written that it makes the situation worse than having no automated tests at all.
- Minimal code, e.g. 5 to 10 lines, is more likely to provide a positive return on investment (ROI) than months or years spent writing a large test automation framework and expecting others to use it.
- Test automation should be able to detect known faults, bugs, and issues. At one extreme, Test-Driven Development (TDD) practices force the author of an automated test to make sure it fails first, before writing the code that will make the test pass.
- Some tests are inappropriate to automate, or not worth the effort.
- Automation can help improve our testing in many ways, ranging from generating test data through test automation, to analysis and presentation of the test results.
- Test automation is fallible. Consider how the automated tests can be ‘fooled’ into reporting success when they should have reported a problem (e.g. because they didn’t check adequately), or into reporting failure when they should have passed. Here the aim is not [necessarily] to make the tests complete or perfect – doing so might cost more than the value these tests provide; rather the aim is to quickly identify (and possibly address) weaknesses in the current implementation of the tests.
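To make the ‘minimal code’ point concrete, here is a sketch of what 5 to 10 lines of framework-free checking might look like. Everything here is hypothetical and invented for illustration – the `discountedPrice` function and its expected values are not from any real project:

```java
// A hypothetical, framework-free automated check; the function under test
// (discountedPrice) is invented purely for illustration.
public class MinimalCheck {
    static double discountedPrice(double price, double percentOff) {
        return price * (1.0 - percentOff / 100.0);
    }

    public static void main(String[] args) {
        // A handful of plain checks, no framework required.
        if (Math.abs(discountedPrice(100.0, 10.0) - 90.0) > 1e-9) {
            throw new AssertionError("10% off 100 should be 90");
        }
        if (Math.abs(discountedPrice(50.0, 0.0) - 50.0) > 1e-9) {
            throw new AssertionError("0% off should change nothing");
        }
        System.out.println("all checks passed"); // prints "all checks passed"
    }
}
```

A file like this costs minutes to write and run, which is why its return on investment is so easy to reach compared with a large framework.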
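The TDD ‘fail first’ discipline mentioned above can be sketched as follows. The `wordCount` function and its checks are hypothetical; the point is that each check would have failed against an empty stub, proving the test can detect the fault it targets:

```java
// A hypothetical sketch of the TDD "fail first" discipline.
// Step 1 (not shown): write the checks in main() before any implementation,
// and confirm they fail against a stub such as `return 0;`.
public class TddSketch {
    // Step 2: the simplest implementation that makes the checks pass.
    static int wordCount(String s) {
        String trimmed = s.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    public static void main(String[] args) {
        // Against the stub these would have thrown, demonstrating the test
        // is capable of reporting the fault before the real code existed.
        check(wordCount("one two three") == 3, "counts words");
        check(wordCount("   ") == 0, "blank input counts as zero");
        System.out.println("tests pass"); // prints "tests pass"
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("FAIL: " + name);
    }
}
```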
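To see how an automated test can be ‘fooled’ into reporting success, compare a weak check with a stronger one. The `fetchResults` function and its deliberate bug are invented for illustration:

```java
// A hypothetical sketch of a test being 'fooled' by an inadequate check.
public class FooledCheck {
    // Imagine a buggy function under test that returns nothing, whatever the query.
    static String fetchResults(String query) {
        return ""; // the deliberate bug: always empty
    }

    public static void main(String[] args) {
        String results = fetchResults("known-good query");

        // Weak check: only verifies the call returned something non-null.
        // It passes even though the function is broken.
        boolean weakPasses = (results != null);

        // Stronger check: verifies the result contains expected content.
        boolean strongPasses = results.contains("expected item");

        System.out.println("weak check: " + (weakPasses ? "PASS" : "FAIL"));     // prints "weak check: PASS"
        System.out.println("strong check: " + (strongPasses ? "PASS" : "FAIL")); // prints "strong check: FAIL"
    }
}
```

Deliberately injecting a fault like this, and confirming the suite notices, is a cheap way to find the weak checks before they matter.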
Examples of Lean Test Automation
Here are a couple of personal examples where I used test automation to find significant issues that the development teams then fixed (one measure of the value of the test automation).
- Writing a short script in Perl, about 10 lines of code, that helped expose a fundamental security flaw in a nationwide system, back in 2005. The work was presented at CAST 2006 in Indianapolis, USA. The proceedings used to be available from http://www.associationforsoftwaretesting.org/ but the site has been reorganised and they no longer seem to be online. I’ll see if I can make the material available again.
- Writing about 50 lines of Java to dynamically navigate web pages and web applications. These tests found several significant bugs on a variety of projects at work. The code is freely available at http://code.google.com/p/web-accessibility-testing/
In both cases, the code was relatively simple and relied on existing libraries to interact with the systems being tested. The effort to write the code was low (hours) compared to the value running the tests provided.