UI Automation, what is it good for? 

TL;DR: What automation at the UI level does and doesn’t give you.
UPDATE: I originally wrote this back in March 2015, lost it in my drafts and found it again recently, so I thought I’d get it out there. Don’t agree? Then let me know in the comments.

Automation fallacy

Every time I speak with different teams and organisations the same theme comes up: UI automation and how it’s going to solve all their problems. The thinking goes that if we can automate more of our tests – read: test scripts – then the Testers no longer have to check those items anymore. This then frees them up to do more interesting things like exploratory testing, or means the Tester can be done away with altogether.

There is also a notion that automating all the regression checks will drop the regression test cycle from days to hours. This then supposedly allows the team to move faster and release quicker than before.

What everyone seems to miss is that automated checks are generally built to check one thing, and will only tell you whether that thing is still there or behaving the way the script has been programmed to expect. If anything else happens that wasn’t programmed into the check, it fails or stops dead, relying on someone to then go and look at what went wrong.

A Tester, on the other hand, can look for workarounds, work out what may have caused the issue, or go and find other issues based on the information they’ve just learned.

So should we give up on UI automation and accept that we’ve got to do rounds and rounds of regression testing and hire more Testers? Well, no. What we need to do is ask ourselves

Why are we automating?

It looks like a simple question, and most people (including myself in the past) would be able to give you a list of answers, but what we forget to ask is: by automating this check, what does it tell me when it passes or fails? If it passes, does that mean I no longer have to check that feature or scenario again? If it fails, what does that tell me? That I have to check that scenario manually?

When a check fails what do we expect the team to do? Stop everything and investigate the issue? Carry on as normal and hope someone else will check it? Ignore the issue altogether? Who is responsible for checking the issue? Developers, Testers, dedicated automation engineers?

There are a lot of reasons that people give for why they want to automate their testing, such as:

  • Reduce test/regression testing
    • The reason for regression testing is to check that the changes you’ve made to your code base haven’t broken anything that already exists.
    • Unless you have automated all your UI checks/regression suites, automation is not going to help you as much as you think it will.
  • Spot issues/bugs faster
    • Automation doesn’t find new bugs; it only tells you that the check you’ve scripted has broken in some way. You need to tell the script that if action A doesn’t produce result B then fail with an error message. What normally happens is the check fails in a way you didn’t anticipate. Don’t forget, if you knew beforehand how something would break you would probably have put in a fix. That’s why they are called defects: something behaving in a way you didn’t want/anticipate.
  • Free up Testers
    • Potentially, but only if they trust the automation.
  • Consistently check a feature the same way
    • This is one thing an automated check is very good at.
  • Something that is laborious or difficult to set up and check
    • Another good candidate for automation. We use it to do policy testing of our apps, as that is time-consuming and error-prone to test manually.
  • We’re doing Behaviour Driven Development (BDD)
    • BDD is not about automating but about collaborating to understand and create features. The automation is just one small part of it, and even then it’s not about testing the UI but the business logic, which could be tested at the unit level.
    • If you ever hear a development team saying ‘The BDD tests are failing’, it’s a good indicator that they are probably using BDD incorrectly.
  • To release faster
    • Again, because you need to do less testing – see ‘Reduce test/regression testing’ above.
  • It’s a part of continuous integration/delivery so we have to
    • No thought into what you are automating other than it’s what people say you have to do.
  • A test manager or some other higher-up tells you to
    • Someone thinks that just telling a development team to automate their testing will help them – see above.
  • People within the development team or key stakeholders don’t trust the developers’ work
    • The test team are being used as a safety net to check the developers’ work, which tends to become a self-fulfilling prophecy, as the developers start relying on the test team as exactly that.

What does an Automated Check actually do?

Let’s start with an example from a mobile app, though it could very easily be any platform of your choice:

A simple automated scenario could be: when the home page has loaded and I’ve selected an option, then I expect to see items X, Y and Z.

Things this scenario will need to do are:

  • Start the application
  • Wait for it to load up
  • Select a menu option
  • Wait for the new screen to load
  • Check that the expected items are on screen
[Animated GIF: example automated check]
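In code, those steps might look something like the sketch below – a minimal illustration using Selenium’s Python bindings against a hypothetical web build of the app. The URL and every element ID are invented for the example, not taken from a real app.

    # Minimal sketch of the scenario above using Selenium (Python).
    # The URL and all element IDs here are illustrative assumptions.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        # Start the application and wait for it to load up
        driver.get("https://example.com/app")
        wait = WebDriverWait(driver, timeout=10)
        wait.until(EC.presence_of_element_located((By.ID, "home-screen")))

        # Select a menu option and wait for the new screen to load
        driver.find_element(By.ID, "menu-option").click()
        wait.until(EC.presence_of_element_located((By.ID, "detail-screen")))

        # Check that the expected items are on screen
        for item in ("item-x", "item-y", "item-z"):
            assert driver.find_element(By.ID, item).is_displayed(), item
    finally:
        driver.quit()

Everything the script ‘knows’ about the app is contained in those few locators and assertions – nothing more.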

So you can run this check over and over and know that, as long as the sequence doesn’t change and the items you are checking for are there, the check will pass. What it isn’t going to tell you, though, is:

  • Formatting issues with any of the screens as they load
  • Pages starting to take longer to load
  • The ordering of the menu options changing
  • New menu options appearing
  • Items that the automation framework can “see” but that aren’t actually visible on screen
  • New items on screen that the check is not looking for

All of the above could also be scripted into the check, but that would likely take quite a bit of effort, and you can’t always predict how an app will behave, so you can’t always script for it.

This is where a real Tester has the advantage. You don’t need to tell a Tester to look for these things; they will do so without being prompted and, not only that, generally a lot faster than an automated check. They can also tell you if something doesn’t feel right or doesn’t perform in a way that would be acceptable to end users, which can be very hard to quantify and therefore automate. They can also take the information they’ve just learned and apply it to what else they can discover. An automated test isn’t going to be able to do this, not with the tools we are using at the moment.

Where a Tester can’t match an automated check (or will find it very hard to) is in checking the same thing in the same way, consistently and quickly. As long as there are no physical moving parts, an automated check can normally carry out the scenario above in seconds, only being delayed by waiting for things to install or load.

So should we stop automating our checks?

Before any team starts to think about automating their testing via the UI they should first, as a team, ask themselves:

Why are we automating?

It sounds like a simple question but, as I explained earlier, people tend to have differing views on what the automation is actually going to do for them. By talking about why they want to automate, they are more likely to come up with solutions that actually address the problems.

One of the main benefits that I’ve seen from automation, especially at the UI level, is faster feedback that the app:

  • Can actually be installed on a real device/displayed in a browser
  • Can start without crashing
  • Can reach any endpoints that it relies on
  • Still performs its core feature – the one thing it is designed to do for your users – e.g.
    • BBC iPlayer: video can actually be played
    • Google Maps: directions to a destination can be provided
    • Amazon: products can be bought
    • Facebook: the feed shows you what your friends and family are doing
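This kind of core-journey feedback can come from a very small amount of automation. As a rough sketch – the base URL, the /status endpoint and the product search below are assumptions for illustration, not details from any of the apps above – a couple of pytest checks are enough to tell you that the backend is up and that the core feature responds:

    # A minimal core-journey smoke check, runnable with pytest.
    # BASE_URL and both endpoint paths are illustrative assumptions.
    import requests

    BASE_URL = "https://example.com"

    def test_endpoints_are_reachable():
        # Fast feedback that an endpoint the app relies on is up
        response = requests.get(f"{BASE_URL}/status", timeout=5)
        assert response.status_code == 200

    def test_core_feature_responds():
        # e.g. for a shop, the product search everything hinges on
        response = requests.get(f"{BASE_URL}/search", params={"q": "book"}, timeout=5)
        assert response.status_code == 200
        assert response.json()["results"], "core search returned no results"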

Checking all of this manually, every time a build is made, could take some time; it is also very tedious and, in my experience, just doesn’t happen. What tends to happen instead is that developers wait and see what comes back when the Testers finally do test the app, which could be a long time after the change was actually made.

The longer this feedback loop is, the harder an issue is to fix, due to the overhead of understanding what went wrong and which change caused it. This is exacerbated when working with legacy code, especially code not written by the developer making the change.

By automating just the core journey, the development team know very quickly that whatever was last committed hasn’t caused a catastrophic failure and that the app’s core feature is still functioning. If there is a failure, you can back out the change (or ideally fix it) and get back to a working state. This helps the whole team know that the app works, and improves the team’s overall confidence that installing the app is actually going to be worth their time. There is nothing more frustrating, especially in mobile development, than getting a build, finding the device you want to test on and installing it, only to find the app can’t carry out its main job for the user or, worse, crashes on start.

When things fail that easily and obviously, it does nothing to instil confidence in the development team, more so when it’s your key stakeholders who find the issues. Catching these failures early also allows you to start using your Testers for what they are really good at – testing – and not just for checking your developers’ work.

Core Journey

We use the concept of PUMA to decide what our core journeys are and, ultimately, what we should and shouldn’t automate. A general rule of thumb: if it’s not a core journey, can it be covered by a unit/integration test that doesn’t invoke the UI? If it still can’t, then why would automating it help? Who would do it? How often does it need to run, and how quickly do we need feedback that it’s broken? Could we monitor the app stats to check whether it is still working, rather than automating it? If it does break, how badly would your users be affected, and what would it do to their perception? Could it be controlled by a feature toggle that allows it to be switched off in the live environment?
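On that last point, a feature toggle doesn’t have to be sophisticated. As a minimal sketch – the flag store, feature name and helper below are all invented for illustration – it can be little more than a config lookup around the non-core code path:

    # Minimal feature-toggle sketch; the flag name and fallback are
    # illustrative assumptions, e.g. flags fetched from remote config.
    FLAGS = {"recommendations": True}

    def get_recommendations(user_id):
        if not FLAGS.get("recommendations", False):
            # Switched off in live: degrade gracefully rather than break
            return []
        return fetch_recommendations(user_id)

    def fetch_recommendations(user_id):
        # Placeholder for the real (non-core) feature implementation
        return ["item-a", "item-b"]

If the non-core feature misbehaves in live, you flip the flag and users simply see a reduced experience until it’s fixed, with no need for an army of UI checks guarding it.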

So the next time someone asks “why don’t you just automate your testing?”, ask them “Why are we automating?” You might realise that the problem they perceive can easily be addressed by one simple automated check rather than hundreds of automated UI checks.
