Why Browser-Automated Testing Consistently Fails

We’ve seen a lot of attempts at building browser-automated testing suites in our time. Most of them fail. The story typically goes something like this:

  • A growing team has been manually testing their application for some time
  • As they grow, they realize that manual testing is missing key user flows, and more importantly is significantly slowing down deployments
  • The team hires QA automation engineers or assigns software engineers to build an automated test suite
  • The effort takes a few months
  • Over time, release schedules and firefighting get in the way of updating, maintaining, and expanding the testing suite — and the suite starts to degrade

At which point, one of two things happens:

  1. The test suite grows increasingly decrepit and obsolete until it is abandoned, or
  2. The number of tests, and the effort each build requires, balloons until running the suite takes hours and maintaining it delays your build cycle even more than manual testing ever did

Why does this happen so consistently? Part of the problem is the natural brittleness of browser-level tests; the effort of maintaining them is why Google suggests they take the least of your attention. But the main reason is that when building end-to-end tests, most teams focus on the wrong questions entirely.
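To make that brittleness concrete, here's a minimal sketch in Playwright (TypeScript). The URL, page structure, and button label are hypothetical stand-ins; the point is that a test pinned to DOM structure breaks on every cosmetic refactor, while one anchored to what the user actually sees does not:

```typescript
// Minimal Playwright sketch: the URL, selectors, and button label here are
// hypothetical, not a real application.
import { test, expect } from '@playwright/test';

test('user can reach checkout from the cart', async ({ page }) => {
  await page.goto('https://example.com/cart');

  // Brittle: pinned to DOM structure and a generated class name. The next
  // redesign or build that renames "btn-x9f2" silently breaks this test.
  // await page.locator('//div[3]/div[2]/button[contains(@class, "btn-x9f2")]').click();

  // Sturdier: target what the user sees, the way the user sees it.
  await page.getByRole('button', { name: 'Checkout' }).click();

  await expect(page).toHaveURL(/\/checkout/);
});
```

Even with resilient locators, every browser-level test still depends on the entire stack beneath it, which is exactly why a suite of them demands ongoing maintenance.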

A quick Google search for “Improve QA testing” will bring you to a host of articles with titles like “___ Ways to Improve QA Testing.” I’m going to save you some time and provide the highlights here:

  1. Incorporate testing early in the process
  2. Outsource
  3. Don’t forget to do it
  4. Use this tool or that tool for test case management or execution

There’s a whole lot of advice on how to test your application, especially at the browser level. But bugs don’t get through because you don’t know how to test. Bugs get through because you don’t know what to test. Testing suites get bloated and broken because you don’t know what to test, so you end up building hundreds or thousands of tests in the hope that you’ll cover what you need. Tools can’t fix that for you, and outsourced firms can’t fix that for you, no matter how much money you throw at them.

Figure Out What to Test

This may seem obvious, but that’s precisely the problem: most teams think figuring out what to test is self-evident, or simply a matter of an engineer’s intuition, so they test whatever they think is important, flaky, or bug-prone.

But what should you test to make sure your application won’t break when your users use it? You should test what your users are actually trying to do.

In reality, your users tell you everything you need to know about what to test; you just need to start listening. No team succeeds with 10,000 end-to-end tests (and yes, we’ve seen 10,000), spitballing in the hope of catching everything. The teams that win, the ones who keep their automated testing suites maintained, relevant, and effective, are those that test functionality along real user behavior.

Your product team is already using analytics to understand what your users are doing and to refine workflows that improve the user experience. They’ve been doing it for years. Why isn’t engineering using the same analytics to build better tests?
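To sketch what that could look like in practice, suppose you can export (sessionId, eventName) pairs from your analytics tool, ordered by timestamp. Ranking the flows those sessions form tells you which handful of paths your suite must cover. The data shape, event names, and function below are our illustration, not any particular vendor’s API:

```typescript
// Hedged sketch: rank real user flows by frequency so the most common paths
// become the required end-to-end test cases. Event names are hypothetical.
type AnalyticsEvent = { sessionId: string; eventName: string };

function topUserFlows(events: AnalyticsEvent[], n: number): Array<[string, number]> {
  // Reassemble each session into an ordered flow, assuming the export is
  // already sorted by timestamp.
  const sessions = new Map<string, string[]>();
  for (const e of events) {
    const steps = sessions.get(e.sessionId) ?? [];
    steps.push(e.eventName);
    sessions.set(e.sessionId, steps);
  }

  // Count how often each distinct flow occurs across all sessions.
  const counts = new Map<string, number>();
  for (const steps of sessions.values()) {
    const flow = steps.join(' > ');
    counts.set(flow, (counts.get(flow) ?? 0) + 1);
  }

  // The flows at the top of this list are what your users actually do.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}
```

A real pipeline would also collapse near-duplicate flows and filter noise, but even this crude ranking beats guessing at what matters.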

The Path Forward

The first step to making this transition is to evaluate your current testing suite against your users’ common flows.

Which of your current automated end-to-end tests cover actual user behavior, and which are irrelevant? Where are your gaps? ProdPerfect can help you find the answers quickly. We’ll tell you what your users are doing and what your tests are doing, so you can close the gaps and cut the chaff.
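At its core, that evaluation is a set comparison. Here’s a hedged sketch, assuming you’ve labeled your observed user flows and your tests’ flows with comparable names (the flow labels are hypothetical):

```typescript
// Hedged sketch of the audit: flows users perform that no test covers are
// gaps; tests exercising flows no user performs are chaff.
function auditSuite(userFlows: Set<string>, testedFlows: Set<string>) {
  const gaps = [...userFlows].filter((f) => !testedFlows.has(f));
  const chaff = [...testedFlows].filter((f) => !userFlows.has(f));
  return { gaps, chaff };
}

// Example usage with made-up flow labels:
const { gaps, chaff } = auditSuite(
  new Set(['login > search > checkout', 'login > profile > update email']),
  new Set(['login > search > checkout', 'admin > export report']),
);
console.log('Untested user flows:', gaps);  // tests to add
console.log('Tests with no users:', chaff); // tests to cut
```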
