Six-month review: learning how to test new technologies

I haven’t published for over a year, although I have several draft blog posts written and waiting for completion. One of the main reasons I didn’t publish was the direction my commercial work has taken recently, into domains and fields I hadn’t worked in for perhaps a decade or more. One of those assignments lasted six months, and I’d like to review various aspects of how I approached the testing, along with some of my pertinent experiences and discoveries.

The core technology is Apache Kafka, which needed to be evaluated as a candidate for replicated data sharing between organisations. Various criteria took the deployment off the beaten track of Kafka’s popular deployment models; in other words, the use was atypical, so it was important to understand how Kafka behaves for the intended use.

Kafka was new to me, and my experiences of some of the other technologies were sketchy, dated or both. I undertook the work at the request of the sponsor who knew of my work and research.

There was a massive amount to learn and, as we discovered, lots to do in order to establish a useful testing process, including setting up environments to test the system. I aim to cover these topics in a series of blog articles here.

  • How I learned stuff
  • Challenges in monitoring Kafka and underlying systems
  • Tools for testing Kafka, particularly load testing
  • Establishing and refining a testing process
  • Getting to grips with AWS and several related services
  • Reporting and Analysis (yes, in that order)
  • The many unfinished threads I’ve started
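
As a small taste of the load-testing topic, Kafka ships with a command-line producer performance tool. Here is a sketch of how it might be invoked; the topic name, broker address, and figures are placeholders for illustration, not the settings we actually used:

```shell
# Send 1,000,000 records of 1 KiB each, unthrottled (--throughput -1),
# reporting producer throughput and latency statistics.
# The topic and broker address are placeholders for your environment.
kafka-producer-perf-test.sh \
  --topic test-topic \
  --num-records 1000000 \
  --record-size 1024 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092
```

Tools like this only exercise the happy path of a healthy cluster, which is part of why atypical deployments need more than off-the-shelf load tests; more on that in the later articles.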

My presentations at the Agile India 2017 Conference

I had an excellent week at the Agile India 2017 conference, ably hosted by Naresh Jain. During the week I led a one-day workshop on software engineering for mobile apps. This included various discussions and a code walkthrough, so I’m not going to publish the slides I used out of context. Thankfully much of the content also applied to my two talks at the conference, and I’m happy to share those slides. The talks were also videoed, and the recordings should be available at some point (I’ll try to remember to update this post with the relevant links when they are).

Here are the links to my presentations:

Improving Mobile Apps using an Analytics Feedback Approach (09 Mar 2017)

Julian Harty Does Software Testing need to be this way? (10 Mar 2017)

Can you help me discover effective ways to test mobile apps please?

Several strands have led to what I’m hoping to do, somehow, which is to involve and record various people actually testing mobile apps. Some of that testing is likely to be unguided by me, i.e. you’d do whatever you think makes sense, and some of the testing would be guided, e.g. where I might ask you to try an approach such as SFDPOT (Structure, Function, Data, Platform, Operations, and Time). I’d like to collect data and your perspectives on the testing, and in some cases have the session recorded as a video (probably done with yet another mobile phone).
Assuming I manage to collect enough examples (and I don’t yet know how much ‘enough’ is), I’d like to use what’s been gathered to help others learn practical ways to improve their testing, based on the actual testing you and others did. There’s a possibility a web site called TechBeacon (owned by Hewlett Packard Enterprise, who collaborated on my book on using mobile analytics to help improve testing) would host and publish some of the videos and discoveries. Similarly, I may end up writing some research papers aimed at academia, to strengthen the evidence for which testing approaches work better than others, and in particular to compare testing done in an office (as many companies do when testing their mobile apps) with testing in more realistic scenarios.
The idea I’m proposing is not one I’ve tried before and I expect that we’ll learn and improve what we do by practicing and sometimes things going wrong in interesting ways.
So as a start, if you’re willing to give this a go, you’ll probably need a smartphone (I assume you’ve got at least one) and a way to record yourself using the phone when testing an app on it. We’ll probably end up testing various apps. I’m involved in some of the apps, but not others; e.g. I help support the offline Wikipedia app called Kiwix (available for Android, iOS and desktop operating systems; check your local app store for a copy).
Your involvement would be voluntary and unpaid. If you find bugs, it’d be good to try reporting them to whoever’s responsible for the app; that gives them the opportunity to address the bugs, and we’d also learn which app developers are responsive (I expect some will seemingly ignore us entirely). You’d also be welcome to write about your experiences publicly, tweet, etc.: whatever helps encourage and demonstrate ways for us to collectively learn how to test mobile apps effectively (and identify what doesn’t work well).
I’ve already had four people interested in getting involved, which led me to write this post to encourage more people to join in. I also aim to encourage additional people who come to my workshops, at Agile India on 6th March and at Let’s Test in Sweden in mid-May. Perhaps we’ll end up having a forum or group so you can collectively share experiences, ideas, notes, etc. It’s hard for me to guess whether this will grow or atrophy. If you’re willing to join in the discovery, please do.
Please ping me on Twitter if you’d like to know more and/or get started.

Software Talks, are you listening?

I gave the closing keynote at the Special Interest Group in Software Testing’s Autumn conference. Here is a PDF of the slides: Software Talks, are you listening? I hope you learn some useful ideas from these materials. I’ll be presenting an updated version of this topic at the EuroSTAR 2015 Mobile Deep Dive event on 6th November 2015, based on my ongoing work and research.

Update: my presentation received the highest score for the day.

Software Talks, Are You Listening EuroSTAR 2014 Keynote

Here are the slides from my keynote at the recent EuroSTAR Conference: Software Talks Are You Listening EuroSTAR_DUBLIN_2014 (05 Dec 2014). I have released these under a Creative Commons License.

The aim of the keynote is to explain how adding analytics to record information about the application while it is running can help us to improve the software, as well as our development and testing practices.

Mobile Testers Guide to the Galaxy slides presented at the Dutch Testing Day

I gave the opening keynote at the Dutch Testing Day conference in Groningen, NL. Here are the slides: Don’t Panic Mobile Testers Guide to the Galaxy (21 Nov 2013) compressed. As you may infer from the filename, I compressed the contents to reduce the size of the download for you.

These slides are an updated set from the material I presented at SQuAD in September 2013.

Free continuous builds to run your automated Android Selenium WebDriver tests

Last week I helped with various workshops for the project. The project has implemented virtual machine technology to enable automated web tests to run on various operating systems more easily, without needing physical machines for each platform.

One of the friction points with test automation is deploying and executing the automated tests each time the codebase is updated. So I decided to try using github and travis-ci to see if we could automatically deploy and run automated tests, written using Selenium WebDriver, that used Android as the host for the automated tests. If we could achieve this, we’d potentially reduce the friction and the amount of lore people would need to know in order to get their tests to run. I’d had some experience of building Android code using travis-ci, which provided a good base to work from, since building Android code on travis-ci (and on continuous builds generally) can be fiddly and brittle to changes in the SDK, etc.

From the outset we decided to implement our project in small, discrete, traceable steps. The many micro-commits to the codebase are intended to make the steps relatively easy to comprehend (and tweak); they’re public at. We also configured travis-ci to build this project from the outset, so we could confirm the continuous build configuration worked and address any blockages early, before focusing on customising the build to run the specific additional steps for Android Selenium WebDriver.
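
To give a flavour of the continuous build configuration, here is a minimal sketch of a travis-ci `.travis.yml` for an Android project; the SDK component versions, emulator steps and build command are illustrative assumptions, not the project’s actual file:

```yaml
language: android
jdk: oraclejdk7
android:
  components:
    # SDK pieces the build needs; exact versions are illustrative
    - build-tools-19.1.0
    - android-19
    - sys-img-armeabi-v7a-android-19
before_script:
  # Start a headless emulator so the tests have a device to target
  - echo no | android create avd --force -n test -t android-19 --abi armeabi-v7a
  - emulator -avd test -no-skin -no-window &
  - android-wait-for-emulator
script:
  # Hypothetical build command; substitute your project's build tool
  - mvn test
```

Getting this skeleton green early meant that later build failures could only come from the Android Selenium WebDriver customisations, which made them much easier to diagnose.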

We used git subtree (an optional addition to git) to integrate the existing sample tests from the project, whilst allowing that project to retain a distinct identity and making it easy to replace with ‘your’ code.
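
To illustrate the technique, here is a self-contained sketch of pulling one repository into another as a squashed subtree. The repository names and the sample file are hypothetical stand-ins for the actual projects:

```shell
set -e
tmp="$(mktemp -d)"
cd "$tmp"

# Stand-in for the upstream sample-tests project (names are hypothetical)
git init -q upstream
cd upstream
git config user.email demo@example.com
git config user.name demo
echo 'public class ExampleTest {}' > ExampleTest.java
git add ExampleTest.java
git commit -qm 'sample tests'
git branch -M main
cd ..

# The main project pulls the upstream in under tests/ as a squashed subtree
git init -q main-project
cd main-project
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
git subtree add --prefix=tests ../upstream main --squash
ls tests
```

The --squash option collapses the upstream history into a single commit, keeping the main project’s history compact while still allowing later updates via git subtree pull, and making it straightforward to swap the subtree contents for your own tests.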

There were some fiddly things we needed to address; for instance, the newer Android Driver seems to trigger timeouts in the calling code (the automated tests), and this problem took a while to identify and debug. However, within 24 hours the new example git project was ready and working.

I hope you’ll be able to take advantage of this work and that it’ll enable you to run automated tests emulating requests from Android phones to your web site. There’s lots of opportunity to improve the implementation; feel free to fork the project on github and improve it 🙂

Slides from my talk at SFSCon 2013

I gave a brief presentation, in English, at SFSCon 2013.

The topics include:

  • An introduction to software test automation and the Selenium project
  • Examples of how e-Government services differ in various web browsers and where the differences adversely affect some services for the users
  • A summary of pre-conference workshops for the project
  • Some suggestions to improve the testing and even the design of e-Government web services
  • Encouragement to get involved in the project.

Here are the slides: Testing Web Applications (rev 15 Nov 2013) small.