My presentations at the Agile India 2017 Conference

I had an excellent week at the Agile India 2017 conference, ably hosted by Naresh Jain. During the week I led a 1-day workshop on software engineering for mobile apps. This included various discussions and a code walkthrough, so I’m not going to publish the slides I used out of context. Thankfully much of the content also applied to my two talks at the conference, for which I’m happy to share the slides. The talks were also videoed, and these should be available at some point (I’ll try to remember to update this post with the relevant links when they are).

Here are the links to my presentations:

Improving Mobile Apps using an Analytics Feedback Approach (09 Mar 2017)

Julian Harty Does Software Testing need to be this way (10 Mar 2017)

Mobile Testers Guide to the Galaxy slides presented at the Dutch Testing Day

I gave the opening keynote at the Dutch Testing Day conference in Groningen, NL. Here are the slides: Don’t Panic Mobile Testers Guide to the Galaxy (21 Nov 2013) compressed. As you may infer from the filename, I compressed the contents to reduce the size of the download for you.

These slides are an updated set from the material I presented at SQuAD in September 2013.

Free continuous builds to run your automated Android Selenium WebDriver tests

Last week I helped with various workshops for the testingmachine.eu project. The project has implemented virtual machine technology to enable automated web tests to run on various operating systems more easily, without needing physical machines for each platform.

One of the friction points with test automation is deploying and executing the automated tests each time the codebase is updated. So I decided to try using github and travis-ci to see whether we could automatically deploy and run automated tests, written using Selenium WebDriver, that used Android as the host for the automated tests. If we could achieve this, we’d potentially reduce the friction and the amount of lore people need to know in order to get their tests to run. I had some experience of building Android code using travis-ci, which provided a good base to work from, since building Android code on travis-ci (and on continuous builds generally) can be fiddly and brittle to changes in the SDK, etc.
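
To give a flavour of the kind of test involved, here’s a minimal sketch of a Selenium WebDriver test that can be pointed at the Android WebDriver server running on a device or emulator (after forwarding its port with adb). The port, URL and the example.com assertion are illustrative assumptions, not code taken from the demo project:

    import static org.junit.Assert.assertTrue;

    import java.net.URL;
    import java.util.concurrent.TimeUnit;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class AndroidWebDriverSmokeTest {

        private WebDriver driver;

        @Before
        public void setUp() throws Exception {
            // The Android WebDriver server APK listens on the device; after
            // 'adb forward tcp:8080 tcp:8080' it is reachable from the build machine.
            driver = new RemoteWebDriver(
                    new URL("http://localhost:8080/wd/hub"),
                    DesiredCapabilities.android());
            driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        }

        @Test
        public void homePageHasExpectedTitle() {
            driver.get("http://www.example.com/");
            assertTrue("Unexpected page title: " + driver.getTitle(),
                    driver.getTitle().contains("Example"));
        }

        @After
        public void tearDown() {
            driver.quit();
        }
    }

The implicit wait is there because the Android browser can be slow to render pages, which otherwise shows up as spurious element-not-found failures.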

From the outset we decided to implement our project in small, discrete, traceable steps. The many micro-commits to the codebase are intended to make the steps relatively easy to comprehend (and tweak). They’re public at https://github.com/julianharty/android-webdriver-vm-demo/commits/master. We also configured travis-ci to build the project from the outset, so we could check the continuous-build configuration worked and address any blockages early, before focusing on customising the build to run the additional steps specific to Android Selenium WebDriver.

We used git subtree (an optional addition to git) to integrate the existing sample tests from the testingmachine.eu project, whilst allowing that project to retain a distinct identity and making it easy to replace with ‘your’ code.

There were some fiddly things we needed to address; for instance, the newer Android Driver seems to trigger timeouts in the calling code (the automated tests), and this problem took a while to identify and debug. However, within 24 hours the new example git project was ready and working: https://travis-ci.org/julianharty/android-webdriver-vm-demo

I hope you will be able to take advantage of this work and that it’ll enable you to run automated tests emulating requests from Android phones to your web site. There’s plenty of opportunity to improve the implementation – feel free to fork the project on github and improve it 🙂

Slides from my talk at SFSCon 2013

I gave a brief presentation, in English, at https://www.sfscon.it/program/2013

The topics include:

  • An introduction to software test automation and the Selenium project
  • Examples of how e-Government services differ in various web browsers and where the differences adversely affect some services for the users
  • A summary of pre-conference workshops for the testingmachine.eu project
  • Some suggestions to improve the testing and even the design of e-Government web services
  • Encouragement to get involved in the testingmachine.eu project.

Here are the slides: Testing Web Applications (rev 15 Nov 2013) small

Human Testing for Mobile Apps

Automated software tests are topical, as they seem to be replacing much of the testing done by humans. Automated tests are faster, provide early feedback, and cost little to run many times. Agile projects need automated tests to keep up with the frequent builds, which may arrive tens or hundreds of times a day and need testing.

So human testing seems to be gathering cobwebs, even despised as unproductive, low-skilled work done by testers who don’t have the ‘skills’ to write automated tests. However, as an industry we ignore testing by humans at our peril. There’s so much testing that’s beyond practical reach of automated tests. It’s time to revive interactive testing performed by motivated and interested humans. This talk will help you to find a new impetus and focus for your interactive testing to complement automated tests.

Feelings and emotions are what users will judge your apps on, so let’s test and explore how users may feel about our mobile apps. Michael Bolton published an insightful article on this: “I’ve Got a Feeling: Emotions in Testing”.

Fast, efficient testing can augment the repetitive automated testing. BugFests, where a group of people meet to test the same piece of software together for up to an hour, can be extremely productive at finding problems the automated tests haven’t.

Another technique is to move both yourself (from place to place) and the phone (rotating it between portrait and landscape modes, etc.); this may help find and expose bugs which are hard for your automated tests to discover.

I will be giving a keynote at VistaCon 2013 in April 2013 on this topic. Please email me if you would like to get involved in the discussion, share ideas, criticize, etc.

Android Test Automation: Getting to grips with UI Automator

Over the last week I have spent about a day of effort getting to grips with the recently launched UI Automator test automation framework for Android. It was launched with version 16 of Android (Android 4.1); however, on 4.1 devices the framework doesn’t even have all the documented methods available. With version 17 of Android (Android 4.2), support has improved to the point that the examples can work acceptably. Here is the official example: http://developer.android.com/tools/testing/testing_ui.html

However, in the minor update between Android 4.2.1 and Android 4.2.2 someone seems to have broken the support for automatic scrolling through pages of results. I have reported the problem on the adt-dev forum, https://groups.google.com/forum/?fromgroups=#!topic/adt-dev/TjeewtpNWf8, which seems to be where the Android development team monitor comments. I have implemented a workaround, using a helper method, below:

    /**
     * Launches an app by its name.
     *
     * Requires the UI Automator classes from com.android.uiautomator.core:
     * UiObject, UiObjectNotFoundException, UiScrollable and UiSelector.
     *
     * @param nameOfAppToLaunch the localized name; an exact match is required to launch it.
     */
    protected static void launchAppCalled(String nameOfAppToLaunch) throws UiObjectNotFoundException {
        UiScrollable appViews = new UiScrollable(new UiSelector().scrollable(true));
        // Set the swiping mode to horizontal (the default is vertical).
        appViews.setAsHorizontalList();
        appViews.scrollToBeginning(10);  // Otherwise the app may be on a later page of apps.
        int maxSearchSwipes = appViews.getMaxSearchSwipes();

        UiSelector selector;
        selector = new UiSelector().className(android.widget.TextView.class.getName());

        UiObject appToLaunch;

        // The following loop is to work around a bug in Android 4.2.2 which
        // fails to scroll more than once into view.
        for (int i = 0; i < maxSearchSwipes; i++) {

            try {
                appToLaunch = appViews.getChildByText(selector, nameOfAppToLaunch);
                if (appToLaunch != null) {
                    // Found the app's label; simulate a user click to launch the app.
                    appToLaunch.clickAndWaitForNewWindow();
                    break;
                }
            } catch (UiObjectNotFoundException e) {
                System.out.println("Did not find match for " + e.getLocalizedMessage());
            }

            // Scroll forward manually, since the automatic scrolling in
            // getChildByText() is unreliable on Android 4.2.2.
            for (int j = 0; j < i; j++) {
                appViews.scrollForward();
                System.out.println("scrolling forward 1 page of apps.");
            }
        }
    }
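
For context, here’s a rough sketch of how such a helper might be called from a UI Automator test case. The class name and the ‘Apps’ and ‘Settings’ labels are illustrative assumptions (they match the stock launcher on many 4.x devices), and launchAppCalled() is assumed to be defined in, or inherited by, the test class:

    import com.android.uiautomator.core.UiObject;
    import com.android.uiautomator.core.UiObjectNotFoundException;
    import com.android.uiautomator.core.UiSelector;
    import com.android.uiautomator.testrunner.UiAutomatorTestCase;

    public class LaunchAppDemoTest extends UiAutomatorTestCase {

        public void testLaunchSettingsFromAllApps() throws UiObjectNotFoundException {
            // Start from a known state: the home screen.
            getUiDevice().pressHome();

            // Open the 'All apps' view; the content-description ("Apps") varies
            // by launcher, so adjust it for your device.
            new UiObject(new UiSelector().description("Apps")).clickAndWaitForNewWindow();

            // Scroll through the pages of apps and launch the one we want,
            // using the workaround helper above.
            launchAppCalled("Settings");
        }
    }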

I ended up writing several skeletal demo Android apps to help me explore the capabilities of UI Automator. In each case I was working through problems reported publicly on http://stackoverflow.com, where I’ve posted answers and feedback on several of them.

Here are the links to my comments:

http://stackoverflow.com/questions/13991977/how-to-switch-on-wifi-in-uiautomator-test-case-in-android-device

http://stackoverflow.com/questions/15204154/uiautomator-failing-on-4-1-2-device

http://stackoverflow.com/questions/15111001/uiautomator-getlasttraversedtext

Strengths of UI Automator

The key strengths include:

  • We can test most applications, including Google’s pre-installed apps such as Settings. Thankfully the example from the Android site does just that, albeit at a perfunctory level; however, the example of changing the Wi-Fi setting on stackoverflow gives a better sense of what we can now do. Because the tests interact with UI objects, they have a direct connection to the app being tested, rather than relying on crude interactions such as clicking at screen coordinates, OCR, etc. (a rough sketch of this object-based style follows this list).
  • UI Automator relies on the platform’s underlying Accessibility support, and therefore may help to encourage improved support for accessible Android apps as developers refine their apps to make them testable by UI Automator.
  • We can test apps on several devices from one computer, through related changes to the Android build tools.
  • There are debug and exploration tools available both on the device (using adb shell uiautomator) and on my computer, using uiautomatorviewer.
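
To illustrate that object-based style, here’s a rough sketch that toggles the Wi-Fi switch by finding the widget rather than tapping at coordinates. The ‘am start’ shortcut and the Switch selector are assumptions that depend on the device and Android version, so treat this as a starting point rather than a recipe:

    import java.io.IOException;

    import com.android.uiautomator.core.UiObject;
    import com.android.uiautomator.core.UiObjectNotFoundException;
    import com.android.uiautomator.core.UiSelector;
    import com.android.uiautomator.testrunner.UiAutomatorTestCase;

    public class WifiToggleDemoTest extends UiAutomatorTestCase {

        public void testToggleWifiSwitch() throws UiObjectNotFoundException, IOException {
            // uiautomator tests run in a shell process, so they can use 'am'
            // to jump straight to the Wi-Fi settings screen.
            Runtime.getRuntime().exec("am start -a android.settings.WIFI_SETTINGS");
            getUiDevice().waitForWindowUpdate("com.android.settings", 5000);

            // Find the on/off switch by its widget class, not by screen
            // coordinates, then toggle it.
            UiObject wifiSwitch = new UiObject(
                    new UiSelector().className(android.widget.Switch.class.getName()));
            wifiSwitch.click();
        }
    }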

Weaknesses

  • Text-based matching makes testing localized apps much harder than with the older Android Instrumentation, which could easily share resource files with the app being tested.
  • There is virtually no documentation and there are few examples; the documentation that does exist doesn’t provide enough clues to address key challenges, e.g. obtaining the text from WebViews.
  • UI Automator cannot be used when Accessibility features, e.g. Explore-by-Touch, are enabled on the device.
  • There are bugs in the current version of Android and there’s no easy way to revert devices to 4.2.1.
  • Automation is very slow, e.g. paging through the set of installed apps takes several seconds per page.

Other characteristics

  • All the tests are bundled into a single jar file which is deployed to the device. This risks one set of tests overwriting another.

Further reading

Test Automation Architectures

I recently read a well-written and helpful paper by Doug Hoffman titled Test Automation Architectures: Planning for Test Automation. You can find it online at http://softwarequalitymethods.com/papers/autoarch.pdf

It covers many key points that need to be considered if you want to have effective and useful automated tests. Thank you Doug for writing it so many years ago and for sharing it.

Test Automation Interfaces – the glue between your tests and the app

Over the last seven months I have been talking to various people about how test automation ‘works’ and how those workings affect the viability of their test automation. In December 2012, LogiGear published an abridged version of an article I wrote on the topic: http://www.logigear.com/magazine/mobile-testing/test-automation-interfaces-for-mobile-apps/ I hope you find the article informative and helpful.

I sometimes find analogies help people to grasp concepts and ideas which I otherwise might struggle to communicate effectively. So here are a couple of analogies for test automation interfaces:

  1. They are the glue between your automated tests and the app you want to test. By picking the most appropriate glue for the job, your tests are more likely to stick around and work effectively.
  2. The interface is similar to the way Velcro works: the hooks bind with the loops to establish an effective connection.

I have some ideas and plans to expand the initial article into a small book on effective software test automation. Email me if you’d like to encourage that work. My email address is my name (julianharty) at Google’s fine email service: gmail.com. I assume a human will be able to create the correct email address from this information 🙂

Slides for presentation at QA&TEST 2011

Here is the link to the slides I presented at QA&TEST 2011. As ever, these slides are the latest version of my work on UX Test Automation.

UX Test Automation for QAandTEST 2011 (27 Oct 2011)

The main content is identical to the presentation planned for EuroSTAR 2011 (21st to 24th November 2011). I may revise the main content again before EuroSTAR; if so, I’ll post the updated material online.

Update: I received the best presentation award at the conference for this presentation 🙂