About Julian Harty

I've been working in technology since 1980 and over the years have held an eclectic collection of roles and responsibilities, including:

  • The first software test engineer at Google outside the USA, where I worked for 4 years as a Senior Test Engineer on areas such as mobile testing, AdSense, and Chrome OS.
  • A main board company director in England for a mix of technology companies involved in software development, recruitment software, eCommerce, etc.
  • Running the systems and operations for the European aspects of Dun & Bradstreet's Advanced Research and Development company, called DunsGate, for 11 years.
  • Creating and leading a small specialist software testing company, CommerceTest Limited, in 1999. The company is currently resting while I work on other projects.

Currently my main responsibility is Tester At Large for eBay.

My main passion and driver is to find ways to help improve people's lives (albeit generally in minor ways) by helping adapt technology to suit the user, rather than watching users struggle with unsuitable software. I work on open source projects, many hosted at http://code.google.com/u/julianharty, and try to make my material available to as many interested people as practical, ideally for free and in forms they can take, adapt and use without restriction. One example of this work is on test automation for mobile phone applications, available at http://tr.im/mobtest

I'm based in the South East of England. You can find me at conferences, events, and peer workshops globally.

Julian Harty
November 2010

Damn you auto-create

Inspired by the entertaining http://www.damnyouautocorrect.com/ website, here are some thoughts on the benefits and challenges of having auto-create enabled for Kafka topics (https://kafka.apache.org/documentation/#brokerconfigs):

`auto.create.topics.enable`

At first, auto-create seems like a convenience, a blessing, as it means developers don't need to write code to explicitly create topics. For a particular project, the developers can focus on using the system as a service to share user-specified sets of data, rather than writing extra code to interact with Zookeeper, etc. (newer releases of Kafka include the AdminClient API, which deals with the Zookeeper aspects).

Effects of relying on auto-create: topics are created with the default (configured) partition and replication counts. These may not be ideal for a particular topic and its intended use(s).
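
For illustration, these are the broker settings involved; the values shown are defaults/examples, not the project's configuration:

```
# Broker settings that govern auto-created topics (illustrative values only)
auto.create.topics.enable=true
num.partitions=1                 # partition count applied to every auto-created topic
default.replication.factor=1     # replication factor applied to every auto-created topic
```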

Adverse impacts of using auto-create

Deleting topics: The project uses Confluent Replicator to replicate data from one Kafka cluster to another. As part of our testing lots of topics were created. We wanted to delete some of these topics but discovered they were virtually impossible to kill, as the combination of Confluent Replicator and the Kafka clusters resurrected the topics before they could be fully expunged. This caused almost endless frustration and adversely affected our testing, as we couldn't get the environment sufficiently clean to run tests in controlled circumstances (Replicator was busy servicing the defunct topics, which limited its ability to focus on the topics we wanted to replicate in particular tests).
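
As a minimal sketch of how a topic deletion can be issued through the AdminClient API (the topic name and broker address are placeholders, and our project may equally have used the kafka-topics command): this is the kind of request that, in our environment, Replicator would promptly undo.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Deletion is asynchronous; all().get() waits for the brokers to accept the
            // request, not for the topic to be fully expunged (or to stay deleted).
            admin.deleteTopics(Collections.singletonList("test-topic")).all().get();
        }
    }
}
```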

Coping with delays and problems creating topics: At a less complex level, auto-creation takes a while to complete and seems to happen in the background. When the tests (and the application software) tried to write to the topic immediately, various problems occurred from time to time. Knowing that problems can occur is useful in terms of performance, reliability, etc.; however it complicates the operational aspects of the system, especially as the errors affect producing data (what the developers and users think is happening) rather than the orthogonal aspect of creating a topic so that data can be produced.
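
As an illustration of the kind of defensive code this pushes onto producers, here is a minimal sketch (the topic name, broker address, and retry policy are assumptions, not the project's code) that retries a send while the auto-created topic's metadata propagates:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RetriableException;

public class ProduceWithRetry {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("example-topic", "key", "value"); // placeholder topic
            for (int attempt = 1; attempt <= 5; attempt++) {
                try {
                    // get() surfaces failures (e.g. timeouts while the auto-created topic's
                    // metadata propagates) that a fire-and-forget send() would hide.
                    producer.send(record).get();
                    break;
                } catch (Exception e) {
                    if (attempt < 5 && e.getCause() instanceof RetriableException) {
                        Thread.sleep(1000L * attempt); // back off and try again
                    } else {
                        throw e;
                    }
                }
            }
        }
    }
}
```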

Lack of clarity or traceability on who (what) created topics: Topics could be auto-created when code tried to write (produce), which was more or less what we expected. However, they could also be auto-created by trying to read (consume). The Replicator duly set up replication for that topic. For various reasons topics could be created on one or more clusters with the same name; and replication happened both locally (within a Kafka cluster) and to another cluster. We ended up with a mess of topics on various clusters, which was compounded by the challenges of cleaning up (deleting) those topics. It ended up feeling like we were living through the after-effects of the Sorcerer's Apprentice!

From a testing perspective

We ended up adding code to our consumers that checked and waited for the topic to appear in Zookeeper before trying to read from it. This, at least, reduced some of the confusion and enabled us to unambiguously measure the propagation time for Confluent Replicator for topics it needed to replicate.
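
Our check went via Zookeeper; here is a sketch of the same idea using the broker-side AdminClient API instead (the method name, timeout and polling interval are illustrative assumptions):

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class WaitForTopic {
    /** Polls until the topic is visible to the brokers, or gives up after ~30 seconds. */
    static void waitForTopic(String bootstrapServers, String topic) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (AdminClient admin = AdminClient.create(props)) {
            for (int attempt = 0; attempt < 30; attempt++) {
                Set<String> topics = admin.listTopics().names().get();
                if (topics.contains(topic)) {
                    return; // safe (enough) to start consuming
                }
                Thread.sleep(1000);
            }
        }
        throw new IllegalStateException("Topic " + topic + " did not appear in time");
    }
}
```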

We also wrote some code that explicitly created topics, rather than relying on auto-create, to determine how much effort would be needed to remove the dependency on auto-create being enabled. That code amounted to less than 10 lines in the proof-of-concept. Production-quality code may involve more, in order to audit the creation, and to log and report problems and any run-time failures.
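
As a rough indication of the scale involved, a minimal sketch using the AdminClient API looks something like this (the topic name, partition and replication counts, and broker address are placeholders, not the project's values):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExplicitly {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2: illustrative values chosen per topic,
            // rather than whatever the cluster-wide defaults happen to be.
            NewTopic topic = new NewTopic("example-topic", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```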

Further reading

“Auto topic creation on the broker has caused pain in the past; And today it still causes unusual error handling requirements on the client side, added complexity in the broker, mixed responsibility of the TopicMetadataRequest, and limits configuration of the option to be cluster wide. In the future having it broker side will also make features such as authorization very difficult.” KAFKA-2410 Implement “Auto Topic Creation” client side and remove support from Broker side


Six months review: learning how to test new technologies

I’ve not published for over a year, although I have several draft blog posts written and waiting for completion. One of the main reasons I didn’t publish was the direction my commercial work has been going recently, into domains and fields I’d not worked in for perhaps a decade or more. One of those assignments was for six months and I’d like to review various aspects of how I approached the testing, and some of my pertinent experiences and discoveries.

The core technology uses Apache Kafka, which needed to be tested as a candidate for replicated data sharing between organisations. Various criteria took the deployment off the beaten track of Apache Kafka's popular deployment models; in other words, the use was atypical, so it was important to understand how Kafka behaves for the intended use.

Kafka was new to me, and my experiences of some of the other technologies were sketchy, dated or both. I undertook the work at the request of the sponsor who knew of my work and research.

There was a massive amount to learn and, as we discovered, lots to do in order to establish a useful testing process, including establishing environments in which to test the system. I aim to cover these topics in a series of blog articles here:

  • How I learned stuff
  • Challenges in monitoring Kafka and underlying systems
  • Tools for testing Kafka, particularly load testing
  • Establishing and refining a testing process
  • Getting to grips with AWS and several related services
  • Reporting and Analysis (yes in that order)
  • The many unfinished threads I’ve started

My presentations at the Agile India 2017 Conference

I had an excellent week at the Agile India 2017 conference, ably hosted by Naresh Jain. During the week I led a 1-day workshop on software engineering for mobile apps. This included various discussions and a code walkthrough, so I'm not going to publish the slides I used out of context. Thankfully much of the content also applied to my two talks at the conference, and I'm happy to share those slides. The talks were also videoed, and the videos should be available at some point (I'll try to remember to update this post with the relevant links when they are).

Here are the links to my presentations:

Improving Mobile Apps using an Analytics Feedback Approach (09 Mar 2017)

Julian Harty Does Software Testing need to be this way (10 Mar 2017)

Can you help me discover effective ways to test mobile apps please?

There are several strands which have led to what I'm hoping to do, somehow: to involve and record various people actually testing mobile apps. Some of that testing is likely to be unguided by me, i.e. you'd do whatever you think makes sense, and some of the testing would be guided, e.g. I might ask you to try an approach such as SFDPOT. I'd like to collect data and your perspectives on the testing, and in some cases have the session recorded as a video (probably done with yet another mobile phone).
Assuming I manage to collect enough examples (and I don't yet know how many 'enough' is), I'd like to use what's been gathered to help others learn various practical ways they can improve their testing, based on the actual testing you and others did. There's a possibility a web site called TechBeacon (owned by HP Enterprise, which collaborated on my book on using mobile analytics to help improve testing) would host and publish some of the videos and discoveries. Similarly, I may end up writing some research papers aimed at academia to improve the evidence of which testing approaches work better than others, and particularly to compare testing that's done in an office (as many companies do when testing their mobile apps) vs. testing in more realistic scenarios.
The idea I’m proposing is not one I’ve tried before and I expect that we’ll learn and improve what we do by practicing and sometimes things going wrong in interesting ways.
So as a start – if you’re willing to give this a go – you’ll probably need a smartphone (I assume you’ve got at least one) and a way to record you using the phone when testing an app on it. We’ll probably end up testing several and various apps. I’m involved in some of the apps, not others e.g. I help support the offline Wikipedia app called Kiwix (available for Android, iOS and desktop operating systems see kiwix.org and/or your local app store for a copy).
Your involvement would be voluntary and unpaid. If you find bugs it'd be good to try reporting them to whoever's responsible for the app, which gives them the opportunity to address the bugs; we'd also learn which app developers are responsive (and I expect some will seemingly ignore us entirely). You'd also be welcome to write about your experiences publicly, tweet, etc.: whatever helps encourage and demonstrate ways to help us collectively learn how to test mobile apps effectively (and identify what doesn't work well).
I’ve already had 4 people interested in getting involved, which led me to write this post on my blog to encourage more people to get involved. I also aim to encourage additional people who come to my workshops (at Agile India on 6th March) and at Let’s Test in Sweden in mid May. Perhaps we’ll end up having a forum/group so you can collectively share experiences, ideas, notes, etc. It’s hard for me to guess whether this will grow or atrophy. If you’re willing to join in the discovery, please do.
Please ping me on twitter https://twitter.com/julianharty if you’d like to know more and/or get started.

Software Talks, are you listening?

I gave the closing keynote at the Special Interest Group in Software Testing's Autumn conference. Here is a PDF of the slides: Software Talks, are you listening? I hope you learn some useful ideas from these materials. I'll be presenting an updated version of this topic at the EuroSTAR 2015 Mobile Deep Dive event on 6th November 2015, based on my ongoing work and research.

Update: my presentation received the highest score for the day.

Software Talks, Are You Listening EuroSTAR 2014 Keynote

Here are the slides from my keynote at the recent EuroSTAR Conference: Software Talks Are You Listening EuroSTAR_DUBLIN_2014 (05 Dec 2014). I have released these under a Creative Commons License.

The aim of the keynote is to explain how adding analytics to record information about the application while it is running can help us to improve the software, as well as the development and testing practices.

Mobile Testers Guide to the Galaxy slides presented at the Dutch Testing Day

I gave the opening keynote at the Dutch Testing Day conference in Groningen, NL. Here are the slides: Don't Panic Mobile Testers Guide to the Galaxy (21 Nov 2013) compressed. As you may infer from the filename, I compressed the contents to reduce the size of the download for you.

These slides are an updated set from the material I presented at SQuAD in September 2013.

Free continuous builds to run your automated Android Selenium WebDriver tests

Last week I helped with various workshops for the testingmachine.eu project. The project has implemented virtual machine technology to enable automated web tests to run on various operating systems more easily, without needing physical machines for each platform.

One of the friction points with test automation is deploying and executing the automated tests each time the codebase is updated. So I decided to try using GitHub and Travis CI to see if we could automatically deploy and run automated tests, written using Selenium WebDriver, that used Android as the host for the automated tests. If we could achieve this, potentially we'd reduce the friction and the amount of lore people would need to know in order to get their tests to run. I had some experience of building Android code using Travis CI, which provided a good base to work from, since building Android code on Travis CI (and on continuous builds generally) can be fiddly and brittle to changes in the SDK, etc.
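
To give a flavour of the kind of test involved, here is a minimal sketch (not the project's actual code) of a Selenium WebDriver test that talks to the Selenium 2.x era Android WebDriver server on a device or emulator, assuming its port has already been forwarded to localhost via adb; the URL and site under test are placeholders:

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AndroidWebDriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumes the Android WebDriver server is reachable locally,
        // e.g. after: adb forward tcp:8080 tcp:8080
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:8080/wd/hub"), DesiredCapabilities.android());
        try {
            driver.get("http://www.example.com/"); // placeholder site under test
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```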

From the outset we decided to implement our project in small, discrete, traceable steps. The many micro-commits to the codebase are intended to make the steps relatively easy to comprehend (and tweak). They're public at https://github.com/julianharty/android-webdriver-vm-demo/commits/master. We also configured Travis CI to build this project from the outset, so we could check that the continuous build configuration worked and address any blockages early, before focusing on customising the build to run the specific additional steps for Android Selenium WebDriver.

We used git subtree (an optional addition to git) to integrate the existing sample tests from the testingmachine.eu project, whilst allowing that project to retain a distinct identity and making it easy to replace with 'your' code.
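
For anyone unfamiliar with the command, pulling another repository into a sub-directory looks roughly like this (the prefix, repository URL and branch below are placeholders, not the ones we used):

```
git subtree add --prefix=sample-tests https://github.com/example/sample-tests.git master --squash
```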

There were some fiddly things we needed to address; for instance, the newer Android Driver seemed to trigger timeouts for the calling code (the automated tests), and this problem took a while to identify and debug. However, within 24 hours the new example git project was ready and working: https://travis-ci.org/julianharty/android-webdriver-vm-demo

I hope you will be able to take advantage of this work and that it'll enable you to run some automated tests emulating requests from Android phones to your web site. There's lots of opportunity to improve the implementation – feel free to fork the project on GitHub and improve it 🙂