About Julian Harty

I've been working in technology since 1980 and over the years have held an eclectic collection of roles and responsibilities, including:

  • The first software test engineer at Google outside the USA, where I worked for 4 years as a Senior Test Engineer on areas such as mobile testing, AdSense, and Chrome OS.
  • A main board company director in England for a mix of technology companies involved in software development, recruitment software, eCommerce, etc.
  • Running the systems and operations for the European aspects of Dun & Bradstreet's Advanced Research and Development company, called DunsGate, for 11 years.
  • Creating and leading a small specialist software testing company, CommerceTest Limited, in 1999. The company is currently resting while I work on other projects.

Currently my main responsibility is Tester At Large for eBay. My main passion and driver is to find ways to help improve people's lives (albeit generally in minor ways) by helping adapt technology to suit the user, rather than watching users struggle with unsuitable software. I work on open source projects, many hosted at http://code.google.com/u/julianharty, and try to make my material available to as many interested people as practical, ideally for free and in forms they can take, adapt, and use without restriction. One example of this work is on test automation for mobile phone applications, available at http://tr.im/mobtest

I'm based in the South East of England. You can find me at conferences, events, and peer workshops globally.

Julian Harty
November 2010

Preview of material for StarWest 2011

I hope to meet some of you at StarWest in October, where I’m presenting a full-day tutorial on testing mobile phone applications on the Tuesday and a track session on pushing the boundaries of test automation on the Wednesday.

If you’d like to come to the conference, the web site is http://www.sqe.com/starwest/

I have made the materials available online and you are welcome to download and use them. The material on ‘pushing the boundaries’ is on this site at UX Test Automation for StarWest 2011. The material for the tutorial is hosted at http://code.google.com/p/mwta/downloads/list

Slides used for STEP-AUTO conference in May 2011

Here are the slides I presented at the STEP-AUTO conference in Bangalore, India, in May 2011: UX Test Automation for STEP-AUTO 2011 (12 May 2011)

I’m continuing to revise the material for various conferences, so expect to see updates published on this site from time to time. The next time I present on this topic is at StarWest 2011, Track W9, on Wednesday, October 5th: http://www.sqe.com/StarWest/Concurrent/Default.aspx

Copy of my presentation for EuroStar 2011’s Virtual Conference

Testing is both performance art and a scientific process. When we test well, the performance is beautiful and the science is ‘good science’ rather than ‘bad science’. I provided a virtual presentation for EuroStar 2011’s Virtual Conference (screened on 13th September 2011 and available online for 30 days for registered users). I’ve included the slides on this site: Testing, The Crucible of Software Development (08 Sep 2011)

Pushing the Boundaries of Test Automation

One of my current responsibilities is to find ways to automate as much as practical of the ‘testing’ of the user experience (UX) of complex web-based applications. In my view, full test automation of UX is impractical and probably unwise; however, we can use automation to find potential problems[1] in the UX of even rich, complex applications. I, and others, are working to find ways to use automation to discover various types of these potential problems. Here’s an overview of some of the points I made. I intend to extend and expand on my work in future posts.

In my experience, heuristics are useful in helping identify potential issues, and various people have managed to create test automation that essentially encodes such heuristics.
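To make this concrete, here is a small sketch of one such automated heuristic: ‘every image should have a non-empty alt attribute’, a check that flags potential accessibility problems. It uses Java with Selenium WebDriver; the URL is a hypothetical placeholder, and this is one possible encoding of the heuristic rather than a definitive implementation.

    import java.util.List;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    public class ImageAltTextHeuristic {
        public static void main(String[] args) {
            WebDriver driver = new HtmlUnitDriver(); // headless browser, handy for quick checks
            try {
                driver.get("http://www.example.com/"); // hypothetical URL
                List<WebElement> images = driver.findElements(By.tagName("img"));
                for (WebElement image : images) {
                    String alt = image.getAttribute("alt");
                    if (alt == null || alt.trim().isEmpty()) {
                        // Report a *potential* problem; a person decides whether it matters.
                        System.out.println("Potential problem: image without alt text: "
                                + image.getAttribute("src"));
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }

The output is deliberately a report of potential problems rather than a pass/fail verdict: the heuristic is fallible, so a person reviews the findings.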

Examples of pushing the boundaries

You might notice that all the examples I’ve provided are available as free and open source software (FOSS). I’ve learnt to value open source because it reduces the cost of experimentation and allows us to extend and modify the code, e.g. to add new heuristics, relatively easily (you still need to be able to write code; however, the code is freely and immediately available).

Automation is (often) necessary, but not sufficient

Automation and automated tests can be beguiling, and paradoxically they increase the chances of missing critical problems if we choose to rely mainly, or even solely, on the automated tests. Even with state-of-the-art automated tests (the best we can do across the industry), I still believe we need to ask additional questions about the software being tested. Sadly, in my experience, most automated tests are poorly designed and implemented, which increases the likelihood of problems eluding them.

Here are two articles that describe some of the key concerns.

The first describes how people can be biased into over-reliance on automation: “Beware of Automation Bias” by M.L. Cummings, 2004. The article is available online at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2634&rep=rep1&type=pdf

The second helped me understand that testing helps us work out which questions to ask of the software, and that we need a process to identify the relevant questions. The article is “The Five Orders of Ignorance” by Phillip G. Armour, CACM, 2000: http://www-plan.cs.colorado.edu/diwan/3308-s10/p17-armour.pdf

Note: the essence of this material was presented as a lightning keynote at the Belgium Testing Days conference on 15th February 2011.

[1] ‘Potential problems’ is one term I use to avoid getting into arguments about whether a problem is a bug or not. I prefer the term ‘undesirable effects’, since software (and things in general) may meet the requirements but still have undesirable effects. Here I’m happy to focus on potential problems; perhaps I’ll write a post on the topic of undesirable effects soon…

Improving the maintainability of automated tests

Introduction

Lots of companies and teams carry a burden of unreliable, problematic automated tests that are troublesome and time-consuming to maintain. We need ways to address these poor-quality tests: ways to recover from the current unhealthy situation to a healthy one where the tests are reliable, trustworthy, and easy to maintain as the underlying software changes.

For new test automation projects and teams with the ‘luxury’ of starting fresh, you might also find these topics salutary: advance warning of a situation you might end up in if you don’t apply good test automation practices from the outset. Don’t say you weren’t warned 🙂

Here are the initial topics I’m going to cover:

  • Re-engineering and refactoring automated tests
  • Understanding critical success factors for test automation
  • Applying design patterns, e.g. the Page Object pattern (see the sketch after this list)
  • Designing and structuring your test automation code
  • Coping with large volumes of ‘legacy’ and ‘broken’ tests (including record & playback)
  • Removing boilerplate and dumbed-down interfaces
  • Making the interaction with web pages resilient and robust (using IDs, working with developers, etc.)
  • Don’t be fooled (again): avoiding being beguiled by automated tests
  • Sunk by dependencies (e.g. on live back-end servers): coping with environmental issues
  • Slow tests
  • Modelling techniques
  • The intersection of automation and in-person testing
  • Three possible outcomes of a test
  • Writing readable code
  • Stringing tools together
  • Patterns for data creation, reuse, sharing, and cleanup
  • Designing tests to safely run in parallel
One of my aims is to create a succinct, practical guide to help you, and others, create and establish useful, readable, and maintainable automated tests for your software projects. I’m drawing on the work, experience, and expertise of various people in the software testing communities, and welcome your input and ideas.

Lean Software Testing

By Julian Harty
Originally written on 02 October 2010
This idea was first mooted at StarWest 2010 last week, when I was looking at stacks of ‘Lean’ books while discussing some of the many flaws I’ve seen in test automation. Here’s essentially the original draft, although I’ve edited it slightly and added a couple of examples to make the article more concrete. I expect to write more about the topic as I continue working in this area.

“Sufficient to the purpose of Test Automation”

There are plenty of books on Lean Manufacturing, Lean Software Development, etc. However, there doesn’t seem to be much thought given to Lean Test Automation, and given how much poorly designed and implemented test automation I’ve seen and experienced, perhaps the time to develop the concepts, ideas, and practices for Lean Test Automation is overdue. We can borrow and adapt existing work on Lean Software Development as and when it applies.

“Sufficient to the purpose”

Let’s get started. Our challenge is to develop the minimum Test Automation that provides significant benefit to the rest of the project, while guarding against:
  • Gilding the test code, e.g. by calling it a ‘framework’ in the vain hope of making it more than it should be. Avoid the beguiling siren of trying to convince your managers and your peers that you’re creating something ‘reusable’ that others will use; virtually all the test automation frameworks I’ve seen aren’t even used by the person who created them 6 months later!
  • Assuming the tests check more than they actually do. I’ve yet to see an omnipotent test suite; they don’t catch all the possible classes of bugs, and many catch none, not even their own!
  • Assuming our automated test code is perfect. Luke 6:41-42 reminds us to first remove the plank from our own eye before removing the speck from our “brother’s”. So make sure our code is well written and useful, and know its flaws, to keep us humble and aware of our fallibility. Sadly, there are many examples where the test automation code is so poorly designed and written that it makes the situation worse than having no automated tests at all.

Heuristics

Heuristics are useful, but fallible, guidelines that help us produce useful work. Here are some heuristics for Test Automation.
  • Minimal code, e.g. 5 to 10 lines, is more likely to provide a positive return on investment (ROI) than spending months or years writing a large test automation framework and expecting others to use it (see the sketch after this list).
  • Test automation should be able to detect known faults, bugs, and issues. At one extreme, Test Driven Development (TDD) practices force the author of an automated test to watch it fail first, before writing the code that makes the test pass.
  • Some tests are inappropriate to automate, or not worth the effort.
  • Automation can help improve our testing in many ways, ranging from generating test data to analysing and presenting the test results.
  • Test Automation is fallible. Consider how the automated tests can be ‘fooled’ into reporting success when they should have reported a problem, e.g. because they didn’t check adequately, or into reporting failure when they should have passed. The aim here is not [necessarily] to make the tests complete or perfect; doing so might cost more than the value these tests provide. Rather, the aim is to quickly identify (and possibly address) weaknesses in the current implementation of the tests.
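To illustrate the ‘minimal code’ heuristic, here is a sketch of a roughly ten-line automated check using JUnit 4 and Selenium WebDriver. The URL and the expected title are hypothetical placeholders, and this is a sketch rather than a definitive implementation. Note how removing the assertion would leave a test that ‘passes’ while checking nothing: exactly the kind of fooled test described above.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class HomePageTest {
        @Test
        public void homePageHasExpectedTitle() {
            WebDriver driver = new FirefoxDriver(); // drives a real browser
            try {
                driver.get("http://www.example.com/"); // hypothetical URL
                // Without this assertion the test would pass while verifying nothing.
                assertEquals("Example Domain", driver.getTitle()); // hypothetical expected title
            } finally {
                driver.quit(); // close the browser even when the check fails
            }
        }
    }

In TDD style you would first run this against a page with the wrong title, watch it fail, and only then trust it to detect the fault it is meant to catch.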

Examples of Lean Test Automation

Here are a couple of personal examples where I managed to use test automation to find significant issues that the development teams then fixed (one measure of the value of the test automation).

  1. Writing a short script in Perl, about 10 lines of code, that helped expose a fundamental security flaw in a nationwide system, back in 2005. The work was presented at CAST 2006 in Indianapolis, USA. The proceedings used to be available from http://www.associationforsoftwaretesting.org/, but the site has been reorganised and the paper doesn’t seem to be available there currently. I’ll see if I can make it available again.
  2. Writing about 50 lines of Java to dynamically navigate web pages and web applications. These tests found several significant bugs on a variety of projects at work. The code is freely available at http://code.google.com/p/web-accessibility-testing/

In both cases, the code was relatively simple and relied on existing libraries to interact with the systems being tested. The effort to write the code was low (hours) compared to the value that running the tests provided.
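To give a flavour of the second example, here is a hedged sketch of dynamically navigating web pages with Selenium WebDriver. It is not the actual code from the web-accessibility-testing project; the starting URL, the page limit, and the simple ‘page has a title’ check are illustrative assumptions.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    // Sketch: follow same-site links breadth-first, applying a simple check to each page.
    public class DynamicNavigationSketch {
        public static void main(String[] args) {
            String startUrl = "http://www.example.com/"; // hypothetical starting point
            int maxPages = 20;                           // illustrative safety limit
            WebDriver driver = new HtmlUnitDriver();
            Set<String> visited = new HashSet<String>();
            Deque<String> toVisit = new ArrayDeque<String>();
            toVisit.add(startUrl);
            try {
                while (!toVisit.isEmpty() && visited.size() < maxPages) {
                    String url = toVisit.poll();
                    if (!visited.add(url)) {
                        continue; // already checked this page
                    }
                    driver.get(url);
                    // A simple heuristic check: a page without a title deserves a closer look.
                    if (driver.getTitle() == null || driver.getTitle().isEmpty()) {
                        System.out.println("Potential problem: no page title at " + url);
                    }
                    for (WebElement link : driver.findElements(By.tagName("a"))) {
                        String href = link.getAttribute("href"); // WebDriver resolves to an absolute URL
                        if (href != null && href.startsWith(startUrl)) {
                            toVisit.add(href); // stay on the same site
                        }
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }

The structure, not the specific check, is the interesting part: once the navigation loop exists, richer heuristics (accessibility checks, broken links, and so on) can be plugged in at the point where each page is visited.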

Further Reading

“Applying lean concepts to software testing” by Matt Heusser: http://go.techtarget.com/r/12859187/10638937

Why I’ve created another blog on software testing

Hello, and welcome.

This blog is part of an initiative to help me, and others involved in software testing, improve the efficiency and effectiveness of our work in this domain. I’ve seen too many cases of poor-quality, ineffective, and time-consuming work called software testing; it is dispiriting, and it dilutes the good work done by people who are doing a better job of software testing.

I consider the topic broad: it includes test automation, interactive testing, testing by ‘testers’, testing by people who simply want to test something in order to get something else (more important) done, and so on. The main aim of this site is to share and collaborate on ideas, experiences, and practices related to software testing.

I’m Julian Harty, currently Tester At Large at eBay. I’ll post a bio so you can see some of my history. My work here contains my personal views and opinions; it does not represent the views of any of my employers, although they are welcome to comment on the material if they wish.

This blog supersedes several earlier attempts to write material on Blogger. I didn’t like the formatting of the content there, and I like various blogs that use WordPress, so here I am to see whether WordPress suits me better.