Improving the maintainability of automated tests

Introduction

Many companies and teams carry a burden of unreliable, problematic automated tests that are troublesome and time-consuming to maintain. We need ways to address these poor-quality tests: ways to recover from the current unhealthy situation to a healthy one in which the tests are reliable, trustworthy, and easy to maintain as the underlying software changes.

If you’re on a new test automation project or team, with the ‘luxury’ of starting fresh, you may also find these topics salutary: advance warning of the situation you could end up in if you don’t apply good test automation practices from the outset. Don’t say you weren’t warned 🙂

Here are the initial topics I’m going to cover:

  • Re-engineering and Refactoring automated tests
  • Understanding Critical Success Factors for test automation
  • Applying Design Patterns
  • Designing and structuring your test automation code
  • Coping with large volumes of ‘legacy’ and ‘broken’ tests (including record & playback)
  • Removing boilerplate or dumbed-down interfaces
  • Making the interaction with web pages resilient and robust (using IDs, working with developers, etc.); see the sketch after this list
  • Don’t be fooled (again): avoiding being beguiled by automated tests
  • Sunk by dependencies (e.g. on live back-end servers): coping with environmental issues
  • Slow tests
  • Modelling Techniques
  • The intersection of automation and in-person testing
  • Three possible outcomes of a test
  • Writing readable code
  • Stringing tools together
  • Patterns for data creation, reuse, sharing and cleanup
  • Designing tests to safely run in parallel
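
To give a flavour of the direction I’m heading in, here’s a minimal sketch in Python, using Selenium and pytest (the element IDs, the page, and the ‘driver’ fixture are all hypothetical, not taken from any real project), of a small page object: the locators live in one place, elements are found by stable IDs agreed with the developers, and the test reads as a statement of intent rather than a sequence of clicks.

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC


    class LoginPage:
        # Stable, developer-agreed IDs, kept in one place so a markup
        # change means one edit here rather than dozens across the tests.
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "login-submit")

        def __init__(self, driver, timeout=10):
            self.driver = driver
            self.wait = WebDriverWait(driver, timeout)

        def login_as(self, username, password):
            # Wait for the form to appear, then fill it in and submit.
            self.wait.until(
                EC.visibility_of_element_located(self.USERNAME)
            ).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()


    def test_valid_user_sees_dashboard(driver):
        # 'driver' is assumed to come from a pytest fixture that opens the
        # login page; the credentials and assertion are illustrative only.
        LoginPage(driver).login_as("alice", "correct-horse")
        assert "Dashboard" in driver.title

If the login markup changes, there’s one class to update rather than every test that logs in; that single decision removes a surprising amount of the maintenance burden I described above.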

One of my aims is to create a succinct, useful guide that helps you, and others, establish readable and maintainable automated tests for your software projects. I’m drawing on the work, experience and expertise of various people in the software testing communities, and I welcome your input and ideas.