In a perfect world, QA teams would test their applications on every possible browser/OS combination in existence.
In the real world, with dozens of browser/OS combinations in common use, few teams come close to that ideal. Instead, they settle for testing a subset of the platforms their end users might actually run.
That raises the question: How many browser/OS combinations do you need to test on before you can feel confident in your application? In other words, how much test coverage is enough?
By "test coverage," I'm referring to the percentage of all possible browser/OS combinations (like Firefox on Windows 7, Firefox on Windows 10 and Firefox on Linux) on which you run tests. Each combination represents a distinct "platform" for the purpose of this discussion. One hundred percent coverage would mean you test for every possible platform that could exist on a modern device.
But 100 percent test coverage is not realistic. Not only do most teams lack the resources to approach that level of coverage, but the total number of browser/OS combinations becomes impractically large once you look beyond major browser releases and OS versions. There are around 300 Linux distributions out there, for example; count each one as a distinct operating system, multiply by even a handful of browsers, and you're looking at well over a thousand combinations to test just to achieve complete coverage for Linux-based platforms.
And if you are testing a mobile app and add mobile devices to the equation, your list of possible platform configurations multiplies further still.
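Some back-of-the-envelope arithmetic shows how quickly the matrix grows; every count here is a rough assumption rather than real market data:

```python
# Rough math on how each new dimension multiplies the platform matrix.
linux_distros = 300   # the approximate distro count mentioned above
browsers = 5          # a handful of major browsers
mobile_devices = 100  # a conservative guess at distinct popular handsets

linux_combos = linux_distros * browsers    # 1,500 combinations for Linux alone
mobile_combos = mobile_devices * browsers  # 500 more once devices enter the mix
print(linux_combos + mobile_combos)        # 2,000, before counting OS versions
```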
The bottom line: No matter how you crunch your numbers, total test coverage, or anything approaching it, is just not going to happen.
That said, you still need to test enough browser/OS combinations to gain reasonable assurance that your application runs properly in most of the environments your end users actually rely on.
That's why it's important to set a goal for test coverage that will provide this assurance, while also being feasible to achieve. Here are some tips for doing that.
As Sauce Labs' Continuous Testing Benchmark Report explains, testing against at least five platforms is a solid rule of thumb.
If that number seems hard to reach, keep in mind that only about 60 percent of Sauce Labs customers achieve it. So don't kill yourself if you're not there now, but it's a healthy number to shoot for.
While the five-platform goal described above is a good baseline, it's important to contextualize your test coverage strategy by considering how many platforms your customers could actually be using.
That number could vary widely depending on which types of operating systems your platform supports. For example, maybe you are writing an iOS-only app. In that case, your total number of possible platforms will be much lower than if your app supports Windows, Android and iOS. As a result, testing against only three browser/OS combinations for an iOS app might provide excellent test coverage.
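Here is that iOS-only scenario in miniature; the supported versions are invented for the example, and a real matrix would come from your support policy or analytics:

```python
# A hypothetical support matrix for an iOS-only app.
supported = [("Safari", "iOS 16"), ("Safari", "iOS 17"), ("Safari", "iOS 18")]
tested = supported  # testing all three combinations

print(f"Coverage: {len(tested) / len(supported):.0%}")  # Coverage: 100%
```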
In certain cases, you might also support only one browser. That's rare these days, but if, for example, your app officially runs only on a certain type of mobile phone using the browser provided by the manufacturer, then your total number of supported platforms will also be low.
When you run tests manually, there is an unavoidable tradeoff between the extent of your test coverage and the time and resources your team has to devote to testing. The greater your test coverage, the more time and money the tests cost.
Thus, if your tests are run manually (or mostly manually), it's usually wise to be conservative about the number of platforms you test against. Otherwise, your resource costs will quickly become prohibitive.
This issue largely disappears when you automate tests. With test automation, you can achieve higher rates of coverage without a proportional increase in time or resource expenditures.
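For instance, with pytest and Selenium, one parametrized test can fan out across an arbitrary platform matrix. The sketch below is a minimal example, not a prescribed setup; the grid endpoint, platform names, and target URL are placeholders you'd replace with your own Selenium Grid or cloud service details:

```python
# A minimal sketch of automated cross-platform testing with pytest and Selenium.
# The grid endpoint, platform names, and URL are placeholders, not a real setup.
import pytest
from selenium import webdriver

PLATFORMS = [
    ("chrome", "Windows 10"),
    ("chrome", "Linux"),
    ("chrome", "macOS 13"),
    ("firefox", "Windows 11"),
    ("firefox", "macOS 13"),
]

OPTION_CLASSES = {
    "chrome": webdriver.ChromeOptions,
    "firefox": webdriver.FirefoxOptions,
}

@pytest.mark.parametrize("browser,platform", PLATFORMS)
def test_homepage_loads(browser, platform):
    # One remote session per browser/OS combination; the matrix above can grow
    # without changing the test body.
    opts = OPTION_CLASSES[browser]()
    opts.set_capability("platformName", platform)
    driver = webdriver.Remote(
        command_executor="https://grid.example.com/wd/hub",  # placeholder endpoint
        options=opts,
    )
    try:
        driver.get("https://www.example.com")
        assert driver.title  # a trivial smoke check for illustration
    finally:
        driver.quit()
```

The key point is that growing coverage from five platforms to ten means adding five tuples to the list, not making five more manual test passes.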
It's hard, if not impossible, to quantify how likely your app is to experience bugs. Still, most developers and QA engineers would agree that some apps are more bug-prone than others, and by extension, that some benefit from more testing.
Factors like the size of your codebase (which you can measure crudely by total lines of code), the number of runtime variables (which can create configuration issues that lead to app problems on certain platforms) and the extent to which your software interacts with hardware (whose behavior can be device-specific, requiring more tests) all affect how likely your app is to experience bugs. Consider these factors when deciding how much test coverage to shoot for, as in the rough sketch below.
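This heuristic is purely illustrative; the thresholds and weights are invented for the sketch, not an industry standard:

```python
# An invented heuristic combining the three risk factors above into a rough
# suggested platform count. Thresholds and weights are assumptions, not standards.
def suggested_platform_count(loc: int, runtime_vars: int, touches_hardware: bool) -> int:
    score = 0
    score += 1 if loc > 100_000 else 0      # large codebase: more surface area
    score += 1 if runtime_vars > 20 else 0  # many config knobs: more variance
    score += 1 if touches_hardware else 0   # device-specific behavior: more tests
    return 5 + 2 * score                    # start from the five-platform rule of thumb

# A large, configurable, hardware-facing app lands at 11 platforms.
print(suggested_platform_count(loc=250_000, runtime_vars=30, touches_hardware=True))
```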
There is no one-size-fits-all rule for the level of test coverage that is appropriate for every organization. Again, testing against five platforms is a good general guideline, but your testing needs will vary depending on how many platforms you aim to support, which type of app you develop, and other factors.
Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.