Fledgling heuristics for testing Android apps

I’ve been inspired to have a go at creating some guidelines for testing Android apps. The initial request was to help shape interviews so we can identify people who understand some of the challenges and approaches for testing on Android and for testing Android apps. I hope these will serve actual testing of the apps too.

  • Android Releases and API Versions
  • OnRotation and other Configuration Changes
  • Fundamental Android concepts including: Activities, Services, Intents, Broadcast Receivers, and Content Providers  https://developer.android.com/guide/components/fundamentals
  • Accessibility settings
  • Applying App Store data and statistics
  • Crashes and ANRs (Application Not Responding errors)
  • Using in-app analytics to compare our testing with how users actually use the app
  • Logs and Screenshots
  • SDK tools, including Logcat, adb, and monitor
  • Devices, including sensors, resources, and CPUs
  • Device Farms: services that rent out remote devices (often in the ‘cloud’)
  • Permissions granted & denied
  • Alpha & Beta channels
  • Build Targets (Debug, Release & others)
  • Test Automation Frameworks and Monkey Testing

I’ll continue exploring ideas and topics to include. Perhaps a memorable heuristic phrase will emerge; suggestions are welcome on Twitter: https://twitter.com/julianharty

Seeking more robust and purposeful automated tests

I’ve recently been evaluating some of the automated tests for one of the projects I help, the Kiwix Android app. We have a moderately loose collection of automated tests written using Android’s Espresso framework. The tests that interact with the external environment are prone to problems and failures for various reasons. We need these tests to be trustworthy in order to run them in the CI environment across a wider range of devices. For now we can’t, as these tests fail just over half the time. (Details are available in one of the issues being tracked by the project team: https://github.com/kiwix/kiwix-android/issues/283.)

The most common failure is in DownloadTest, followed by NetworkTest. From reading the code we have a mix of silent continuations (where the test proceeds regardless of errors) and implicit expectations (of what’s on the server and the local device); these may well be major contributors to the failures of the tests. Furthermore, when a test fails the error message tells us which line of code the test failed on but doesn’t help us understand the situation that caused the test to fail. At best we know an expectation wasn’t met at run-time (i.e. an assertion in the code).

Meanwhile I’ve been exploring how Espresso is intended to be used and how much information it can provide about the state of the app via the app’s GUI. It seems the intended use is for Espresso to keep that information private: it checks, on behalf of the running test, whether an assertion holds true or not. However, perhaps we can encourage it to be more forthcoming and share information about what the GUI comprises and contains?
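
For example, here’s a minimal sketch (my own helper, not from the Kiwix codebase) of a custom ViewAction that copies the text out of a TextView so the calling test can log it or base decisions on it, rather than only passing or failing. It assumes the usual Espresso static imports (onView, isAssignableFrom) plus ViewAction, UiController, View, TextView and Hamcrest’s Matcher:

public static String getText(final Matcher<View> viewMatcher) {
    final String[] captured = new String[1];
    onView(viewMatcher).perform(new ViewAction() {
        @Override
        public Matcher<View> getConstraints() {
            // Only TextViews (and subclasses) have text we can read.
            return isAssignableFrom(TextView.class);
        }

        @Override
        public String getDescription() {
            return "read the text from a TextView";
        }

        @Override
        public void perform(UiController uiController, View view) {
            captured[0] = ((TextView) view).getText().toString();
        }
    });
    return captured[0];
}

A test could then, for instance, log the title currently displayed in the library list before deciding what to do next.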

I’ll use these two tests (DownloadTest and NetworkTest) as worked examples where I’ll try to find ways to make these tests more robust and also more informative about the state of the server, the device, and the app.

Situations I’d like the tests to cope with:

  • One or more of the ZIM files are already on the local device: we don’t need to assume the device doesn’t have these files locally.
  • We can download any small ZIM file, not necessarily a predetermined ‘canned’ one.

Examples of information I’d like to ascertain:

  • How many files are already on the device, and details of these files (see the sketch after this list)
  • Details of ZIM files available from the server, including the filename and size
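
As a sketch of the first item, a small helper could report what’s already on the device. The directory and the ‘.zim’ extension check are my assumptions about where and how Kiwix stores its content, not verified details (imports: android.content.Context, java.io.File, java.util.ArrayList, java.util.List):

// Hypothetical helper: list the ZIM files already present on the device so the
// test can report them and adapt, instead of assuming a clean device.
public static List<File> findLocalZimFiles(Context context) {
    List<File> zimFiles = new ArrayList<>();
    File dir = context.getExternalFilesDir(null); // assumption: ZIM files live here
    File[] files = (dir == null) ? null : dir.listFiles();
    if (files != null) {
        for (File file : files) {
            if (file.getName().endsWith(".zim")) {
                zimFiles.add(file);
            }
        }
    }
    return zimFiles;
}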

Possible approaches to interacting with Espresso

I’m going to assume you either know how Espresso works or are willing to learn about it (perhaps by writing some automated tests using it? 🙂). A good place to start is the Android Testing Codelab, freely available online.

Perhaps we could experiment with a question or query interface where the automated test can ask questions and elicit responses from Espresso. Something akin to the Socratic Method? This isn’t intended to replace the current way of using Espresso and the Hamcrest Matchers.
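
As a first, very small experiment in that direction, here’s a sketch of a ‘question’ a test could ask without failing when the answer is no. The helper is my own; it only relies on Espresso’s onView/check, ViewAssertions.matches, ViewMatchers.isDisplayed and NoMatchingViewException:

// Instead of asserting (and aborting the test), answer whether a matching view
// is currently displayed, leaving the decision to the calling test.
public static boolean isViewDisplayed(Matcher<View> viewMatcher) {
    try {
        onView(viewMatcher).check(matches(isDisplayed()));
        return true;
    } catch (NoMatchingViewException | AssertionError answerIsNo) {
        return false;
    }
}

The test, rather than the assertion, then becomes the decision maker, which leads to the next question.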

Who is the decision maker?

In popular open-source test automation frameworks, including JUnit and Espresso (via Hamcrest), the arbiter, or decision maker, is the assertion: the test passes information to the assertion, which decides whether to allow the test to continue or to halt and abort it. The author of the automated test can choose to write extra code to handle a rejection, but the test still doesn’t know the cause of the rejection. Here’s part of the DownloadTest at the time of writing. The try/catch means the test will continue regardless of whether the click works.


onData(withContent("ray_charles")).inAdapterView(withId(R.id.library_list)).perform(click());

try {
    onView(withId(android.R.id.button1)).perform(click());
} catch (RuntimeException e) {
    // The exception is silently swallowed: the test continues with no record of what went wrong.
}

This code snippet exemplifies many Espresso tests: a reader can determine certain details, such as the content the test is intended to click on, but there’s little clue what the second click is intended to do from the user’s perspective. What is the button, what is the button ‘for’, and why would a click legitimately fail and yet the test be OK to continue?
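
One modest improvement, sketched below, is to name the intent in the code and to log the exception rather than swallow it silently (Log is android.util.Log). Whether android.R.id.button1 really is a confirmation dialog’s positive button here is my assumption, which is exactly the kind of detail the current code leaves unstated:

onData(withContent("ray_charles"))
        .inAdapterView(withId(R.id.library_list))
        .perform(click());

try {
    // Assumption: this is the positive ("OK") button of a download confirmation dialog.
    onView(withId(android.R.id.button1)).perform(click());
} catch (RuntimeException noConfirmationDialog) {
    // Record why the test carried on, so the logs can tell us afterwards.
    Log.w("DownloadTest", "No confirmation dialog to accept; continuing", noConfirmationDialog);
}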

Sometimes I’d like the test to be able to decide what to do depending on the actual state of the system. What would we like the test to do?

For a download test, perhaps downloading a different file would be just as useful?

Increasing robustness of the tests

For me, a download test should focus on being able to test the download of a representative file, and be able to do so even if the expected file is already on the local device. We can decide what we’d like the test to do in various circumstances. For example, perhaps it could simply delete the local instance of a test file, such as the one for Ray Charles? The ‘cost’ of re-downloading this file is tiny (at least compared to Wikipedia in English) if the user wants to have this file on the device. Or, conversely, perhaps the test could leave the file on the device once it’s downloaded it, if the file was there before the test started: a sort-of refresh of the content. (I’m aware there are potential side-effects if the contents have been changed, or if the download fails.)
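
Here’s a sketch of the delete-first option, where the storage directory and filename are my assumptions rather than Kiwix’s actual values (InstrumentationRegistry is the support-library test class, Log is android.util.Log):

// If the test file is already on the device, delete it so the download path is
// genuinely exercised rather than skipped.
File zimDir = InstrumentationRegistry.getTargetContext().getExternalFilesDir(null);
File rayCharles = new File(zimDir, "ray_charles.zim"); // hypothetical filename
if (rayCharles.exists() && !rayCharles.delete()) {
    Log.w("DownloadTest", "Could not delete existing file: " + rayCharles);
}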

Would we like the automated test to retry a download if the download fails? If so, how often? And should the tests report failed downloads anywhere? I’ll cover logging shortly.
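
If we do want retries, a small wrapper along these lines would keep the policy in one place; three attempts is an arbitrary choice of mine, and fail() is JUnit’s org.junit.Assert.fail:

private static final int MAX_ATTEMPTS = 3;

// Run the download steps up to MAX_ATTEMPTS times, logging each failure so the
// CI logs show what happened even if a later attempt succeeds.
private void downloadWithRetry(Runnable downloadSteps) {
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
        try {
            downloadSteps.run();
            return;
        } catch (RuntimeException | AssertionError e) {
            Log.w("DownloadTest", "Download attempt " + attempt + " failed", e);
        }
    }
    fail("Download failed after " + MAX_ATTEMPTS + " attempts");
}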

More purposeful tests

Tests often serve multiple purposes, such as:

  • Confidence: Having large volumes of tests ‘passing’ may provide confidence to project teams.
  • Feedback: Automated tests can provide fast, almost immediate, feedback on changes to the codebase. They can also be run on additional devices, configurations (e.g. locales), etc. to provide extra feedback about aspects of the app’s behaviours in these circumstances.
  • Information: tests can gather and present information such as response times, installation time, collecting screenshots (useful for updating them in the app store blurb), etc.
  • Early ‘warning’: for instance, of things that might go awry soon if volumes increase, conditions worsen, etc.
  • Diagnostics: tests can help us compare behaviours, e.g. not only where, when, etc. does something fail? but also where, when, etc. does it work? The comparisons can help establish boundaries and equivalence partitions to help us home in on problems, find patterns, and so on.

Test runners (e.g. JUnit) don’t encourage logging information, especially if the test completes ‘successfully’ (i.e. without unhandled exceptions or assertion failures). Logging is often used in the application code; it can also be used by the tests. As a good example, Espresso automatically logs all interactions to the Android log, which may help us (assuming we read the logs and pay attention to their contents) to diagnose aspects of how the tests are performing.
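
One low-cost way to make the tests themselves more talkative is a JUnit 4 rule that logs every test’s start and outcome to Logcat. Here’s a sketch using TestWatcher (imports: org.junit.Rule, org.junit.rules.TestWatcher, org.junit.runner.Description, android.util.Log):

@Rule
public TestWatcher logWatcher = new TestWatcher() {
    @Override
    protected void starting(Description description) {
        Log.i("KiwixTest", "Starting: " + description.getMethodName());
    }

    @Override
    protected void succeeded(Description description) {
        Log.i("KiwixTest", "Passed: " + description.getMethodName());
    }

    @Override
    protected void failed(Throwable e, Description description) {
        Log.w("KiwixTest", "Failed: " + description.getMethodName(), e);
    }
};

With something like this in place, the device log records a narrative of the test run even when every test passes.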


Next Steps

This blog post is a snapshot of where I’ve got to. I’ll publish updates as I learn and discover more.

Further reading

My presentations at the Agile India 2017 Conference

I had an excellent week at the Agile India 2017 conference, ably hosted by Naresh Jain. During the week I led a 1-day workshop on software engineering for mobile apps. This included various discussions and a code walkthrough, so I’m not going to publish the slides I used out of context. Thankfully much of the content also applied to my two talks at the conference, and I’m happy to share the slides for those. The talks were also videoed, and the videos should be available at some point (I’ll try to remember to update this post with the relevant links when they are).

Here are the links to my presentations:

Improving Mobile Apps using an Analytics Feedback Approach (09 Mar 2017)a

Julian Harty Does Software Testing need to be this way (10 Mar 2017)

Can you help me discover effective ways to test mobile apps please?

There are several strands that have led to what I’m hoping to do, somehow: to involve and record various people actually testing mobile apps. Some of that testing is likely to be unguided by me, i.e. you’d do whatever you think makes sense, and some of the testing would be guided, e.g. where I might ask you to try an approach such as SFDPOT. I’d like to collect data and your perspectives on the testing, and in some cases have the session recorded as a video (probably done with yet another mobile phone).

Assuming I manage to collect enough examples (and I don’t yet know how much ‘enough’ is), I’d like to use what’s been gathered to help others learn various practical ways they can improve their testing, based on the actual testing you and others did. There’s a possibility that a web site called TechBeacon (owned by HP Enterprise, who collaborated on my book on using mobile analytics to help improve testing) would host and publish some of the videos and discoveries. Similarly, I may end up writing some research papers aimed at academia, to improve the evidence of which testing approaches work better than others, and particularly to compare testing that’s done in an office (as many companies do when testing their mobile apps) with testing in more realistic scenarios.

The idea I’m proposing is not one I’ve tried before, and I expect that we’ll learn and improve what we do through practice, and sometimes through things going wrong in interesting ways.

So, as a start, if you’re willing to give this a go, you’ll probably need a smartphone (I assume you’ve got at least one) and a way to record yourself using the phone when testing an app on it. We’ll probably end up testing several different apps. I’m involved in some of the apps, not others; e.g. I help support the offline Wikipedia app called Kiwix (available for Android, iOS and desktop operating systems; see kiwix.org and/or your local app store for a copy).

Your involvement would be voluntary and unpaid. If you find bugs it’d be good to try reporting them to whoever’s responsible for the app, which gives them the opportunity to address the bugs, and we’d also learn which app developers are responsive (I expect some will seemingly ignore us entirely). You’d also be welcome to write about your experiences publicly, tweet, etc.: whatever helps encourage and demonstrate ways for us to collectively learn how to test mobile apps effectively (and identify what doesn’t work well).

I’ve already had four people express interest, which led me to write this post on my blog to encourage more people to get involved. I also aim to encourage additional people who come to my workshops (at Agile India on 6th March, and at Let’s Test in Sweden in mid-May). Perhaps we’ll end up having a forum/group so you can collectively share experiences, ideas, notes, etc. It’s hard for me to guess whether this will grow or atrophy. If you’re willing to join in the discovery, please do.

Please ping me on Twitter (https://twitter.com/julianharty) if you’d like to know more and/or get started.

Software Talks, are you listening?

I gave the closing keynote at the Special Interest Group in Software Testing’s Autumn conference. Here is a PDF of the slides: Software Talks, are you listening? I hope you learn some useful ideas from these materials. I’ll be presenting an updated version of this topic at the EuroSTAR 2015 Mobile Deep Dive event on 6th November 2015, based on my ongoing work and research.

Update: my presentation received the highest score for the day.

Software Talks, Are You Listening EuroSTAR 2014 Keynote

Here are the slides from my keynote at the recent EuroSTAR Conference: Software Talks Are You Listening EuroSTAR_DUBLIN_2014 (05 Dec 2014). I have released these under a Creative Commons License.

The aim of the keynote is to explain how adding analytics to record information about the application while it is running can help us to improve the software, and our development and testing practices.

Mobile Testers Guide to the Galaxy slides presented at the Dutch Testing Day

I gave the opening keynote at the Dutch Testing Day conference in Groningen, NL. Here are the slides: Don’t Panic Mobile Testers Guide to the Galaxy (21 Nov 2013) compressed. As you may infer from the filename, I compressed the contents to reduce the size of the download for you.

These slides are an updated set from the material I presented at SQuAD in September 2013.

Free continuous builds to run your automated Android Selenium WebDriver tests

Last week I helped with various workshops for the testingmachine.eu project. The project has implemented virtual machine technology to enable automated web tests to run on various operating systems more easily, without needing physical machines for each platform.

One of the friction points with test automation is the ease of deployment and execution of automated tests each time the codebase is updated. So I decided to try using github and travis-ci to see if we could automatically deploy and run automated tests, written using Selenium WebDriver, that use Android as the host for the tests. If we could achieve this, we’d potentially reduce the friction and the amount of lore people need to know in order to get their tests to run. I had some experience of building Android code using travis-ci, which provided a good base to work from, since building Android code on travis-ci (and on continuous builds generally) can be fiddly and brittle to changes in the SDK, etc.
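
To make the goal concrete, this is roughly the shape of test we wanted the build to run: an ordinary Selenium WebDriver test talking to the WebDriver server running on the Android emulator. The URL, the forwarded port, and the page under test are illustrative assumptions rather than the project’s actual configuration, and DesiredCapabilities.android() is the Selenium 2.x-era capability set:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AndroidWebDriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumes the device's WebDriver server port has already been forwarded
        // to localhost:8080 (e.g. with adb forward) before this test runs.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:8080/wd/hub"),
                DesiredCapabilities.android());
        try {
            driver.get("http://www.example.com/");
            System.out.println("Page title via Android: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}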

From the outset we decided to implement our project in small, discrete, traceable steps. The many micro-commits to the codebase are intended to make the steps relatively easy to comprehend (and tweak). They’re public at https://github.com/julianharty/android-webdriver-vm-demo/commits/master. We also configured travis-ci to build this project from the outset, so we could check that the continuous build configuration worked and address any blockages early, before focusing on customising the build to run the specific additional steps for Android Selenium WebDriver.

We used git subtree (an optional addition to git) to integrate the existing sample tests from the testingmachine.eu project, whilst allowing that project to retain a distinct identity and making it easy to replace with ‘your’ code.

There were some fiddly things we needed to address. For instance, the newer Android Driver seems to trigger timeouts for the calling code (the automated tests), and this problem took a while to identify and debug. However, within 24 hours the new example git project was ready and working: https://travis-ci.org/julianharty/android-webdriver-vm-demo

I hope you will be able to take advantage of this work and that it’ll enable you to run some automated tests emulating requests from Android phones to your web site. There’s lots of opportunity to improve the implementation; feel free to fork the project on github and improve it 🙂

Slides from my keynote on testing mobile apps at SQuAD conference

Thanks to my hosts at the SQuAD conference for inviting me. I’ve revamped much of my material on the topic of testing mobile apps. As ever, I have lots more I could do to improve the materials; time, practice, and useful feedback will help me do so. Here’s the set of slides in PDF format:

Don’t Panic Mobile Testers Guide to the Galaxy (16 Sep 2013)c

The c at the end reminds me it’s the compressed version so I could get the file size down to 6MB.

Julian Harty