Mobile app testing: How to extend your device coverage

Apr 6, 2016

One of the biggest challenges of mobile app testing is the enormous and growing number of devices to test. Mobile phones alone come in many different brands, models, screen sizes, hardware configurations, and OS versions. Android testing is especially challenging because there are far more Android devices on the market than iOS devices.

OpenSignal, a startup that specializes in wireless coverage mapping, released an amazing visualization of the Android phone marketplace late last year — with the larger rectangles representing a larger installed base:

app testing coverage
Source: OpenSignal

For any company trying to serve the Android market, that’s a pretty daunting sight. My colleague Richard recently wrote a great post about 9 tips for designing Android screens. But the challenge of device proliferation extends to app testing. Simulators and automated tests can approximate how an app will perform on hardware, but tests run on a simulator are not as reliable as tests run on a physical device. A simulator also can’t account for some hardware characteristics, such as specific chip settings, processing power, and device memory.

So how do you approach device testing when testing on every device is prohibitively time consuming and expensive? How can you limit the number of devices while still maximizing coverage to catch potential bugs?

Understand Your Target User

First, you can create some boundaries for testing by knowing the target user. At this stage, any information about the target audience may be useful. Region is particularly helpful, but other kinds of demographic information can also steer the scope of your testing. Combined with research into the installed base of different devices among your target demographic, this can significantly narrow the number of devices you need to cover.

Identifying your target user and knowing their devices becomes even easier when you are updating your app or building a second app. In the Google Play Developer Console and in iTunes Connect you can find app statistics that include which devices and which OS versions are most used. For example, this is one view of what Google Play offers developers, showing current installs by device and country:

google play app statistics

Source: Google Developers
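To turn those raw statistics into a ranked view of your installed base, a small script can aggregate an exported report by device and OS version. Here’s a minimal sketch in Python; the CSV file name and the columns device, os_version, and installs are placeholders for illustration, not the actual export format of either store:

```python
import csv
from collections import Counter

def rank_installed_base(csv_path):
    """Rank devices and OS versions by install count from an exported report.

    Assumes placeholder columns: device, os_version, installs.
    """
    device_installs = Counter()
    os_installs = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            installs = int(row["installs"])
            device_installs[row["device"]] += installs
            os_installs[row["os_version"]] += installs
    return device_installs.most_common(10), os_installs.most_common(5)

if __name__ == "__main__":
    top_devices, top_os_versions = rank_installed_base("installs_by_device.csv")
    print("Top devices:", top_devices)
    print("Top OS versions:", top_os_versions)
```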

If your app does not have a very targeted and specific audience, you can use more general data to help focus on the most widely used devices. Both Google and Apple provide data on the most commonly used OS versions and hardware. For example, as of March 7, 2016, here’s what Google provides about the OS versions currently in use:

google play OS versions

Source: Google Developers

Here’s a similar chart that Apple provides about current versions of iOS in the market:

apple iOS installed base

Source: Apple Developer

Create an App Testing Device Matrix

After gathering user data and focusing your QA plan around your target user, it’s time to define which devices to use for physical app testing. At ArcTouch, we create a “Device Coverage Matrix” that allows us to get the broadest possible coverage from our physical app testing — without testing every possible combination of device, screen resolution (pixel density) and OS.

We focus our tests on pixel density and OS version. We’ve learned that the differences in these device characteristics are what cause the vast majority of design bugs. And by covering the most common issues with the device coverage matrix, we can eliminate the most likely causes of user frustration.

Our matrix typically has three prioritized groups of devices: the most important devices we must test, the secondary devices we will test, and the lower-priority devices that share an OS version or pixel density with devices in the other groups. We name these groups Tier 1, Tier 2, and Tier 3.

To start, we build the matrix from what we already know about the target user. In the example below, that’s an Android app aimed at a specific audience of users.

First, we identify the most commonly used Android versions and pixel densities:

mobile app testing device matrix step 1

mobile app testing device matrix step 2
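Those two tables are the raw inputs for the matrix. As a rough illustration (the percentages below are made up for the example, not our actual research data), they can be captured as two simple distributions:

```python
# Hypothetical share of the target audience by Android version and pixel density.
# Substitute the figures from your own installed-base research.
os_version_share = {
    "4.4": 0.20,
    "5.0": 0.15,
    "5.1": 0.20,
    "6.0": 0.35,
    "other": 0.10,
}

pixel_density_share = {
    "mdpi": 0.05,
    "hdpi": 0.20,
    "xhdpi": 0.40,
    "xxhdpi": 0.30,
    "xxxhdpi": 0.05,
}
```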

Next, we combine both tables into a matrix and add the devices that are most common based on our previous research.

mobile app testing device matrix step 3
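In code, that combination step is essentially a cross product of the two tables, where each cell of the matrix collects the known devices that match a given OS version and pixel density. Here’s a small sketch; the device names are real, but their OS/density pairings are illustrative placeholders rather than verified specs:

```python
from itertools import product

# Coverage targets carried over from the two tables (hypothetical values).
os_versions = ["4.4", "5.0", "5.1", "6.0"]
pixel_densities = ["hdpi", "xhdpi", "xxhdpi"]

# Illustrative device pool from the installed-base research.
# device name -> (OS version, pixel density); pairings are placeholders.
devices = {
    "Samsung Galaxy S6": ("6.0", "xxhdpi"),
    "Samsung Galaxy S5": ("5.0", "xxhdpi"),
    "Nexus 5": ("6.0", "xxhdpi"),
    "Moto G (2nd gen)": ("5.1", "xhdpi"),
    "Moto E": ("4.4", "hdpi"),
}

# Build the matrix: every (OS version, pixel density) cell lists matching devices.
matrix = {cell: [] for cell in product(os_versions, pixel_densities)}
for name, cell in devices.items():
    matrix.setdefault(cell, []).append(name)

for (os_v, density), names in sorted(matrix.items()):
    print(f"Android {os_v} / {density}: {', '.join(names) or '(no candidate device)'}")
```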

Then, we use the logic from the matrix to create a table that becomes our priority test list of Tier 1, Tier 2, and Tier 3 devices.

Often, our app engineering team is actively using and testing on Tier 1 devices during the development of the project. Our QA team tests on the Tier 1 and Tier 2 devices.

For Tier 3, we choose devices that are not as common as those in Tier 1 and Tier 2. But in the matrix, they are paired with Tier 1 or Tier 2 devices that have the same pixel density or OS version. Depending on the QA budget for a project, we may not test Tier 3 devices at all; because they share key characteristics with devices in Tiers 1 and 2, the risk of missing bugs on them is limited.
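One rough way to express that prioritization in code: rank devices by installed base, put the top few in Tier 1, put devices that add an uncovered OS/density combination in Tier 2, and push devices that only duplicate existing coverage down to Tier 3. This is a simplified sketch of the tiering logic described above; the threshold and installed-base numbers are made up for illustration:

```python
def assign_tiers(device_stats, tier1_count=3):
    """Split devices into Tier 1/2/3 test priorities.

    device_stats: list of (name, installs, os_version, pixel_density) tuples.
    Tier 1 = the most-installed devices, Tier 2 = devices adding an uncovered
    OS/density combination, Tier 3 = devices whose combination is already covered.
    """
    ranked = sorted(device_stats, key=lambda d: d[1], reverse=True)
    tiers = {1: [], 2: [], 3: []}
    covered = set()
    for i, (name, _installs, os_v, density) in enumerate(ranked):
        combo = (os_v, density)
        if i < tier1_count:
            tiers[1].append(name)
            covered.add(combo)
        elif combo not in covered:
            tiers[2].append(name)
            covered.add(combo)
        else:
            tiers[3].append(name)
    return tiers

# Hypothetical installed-base numbers, for illustration only.
sample = [
    ("Samsung Galaxy S6", 50000, "6.0", "xxhdpi"),
    ("Samsung Galaxy S5", 40000, "5.0", "xxhdpi"),
    ("Nexus 5", 25000, "6.0", "xxhdpi"),
    ("Moto G (2nd gen)", 20000, "5.1", "xhdpi"),
    ("LG G3", 15000, "5.0", "xxhdpi"),
]
print(assign_tiers(sample))
# {1: ['Samsung Galaxy S6', 'Samsung Galaxy S5', 'Nexus 5'],
#  2: ['Moto G (2nd gen)'], 3: ['LG G3']}
```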

That matrix also helps us post-launch. If a user encounters a bug, we can look at the matrix and find related devices to quickly reproduce, isolate, and fix the problem. Here’s an example of what our prioritized test list might look like:

mobile app testing device matrix step 4
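The same structure supports that post-launch triage. When a bug report comes in from a device we don’t have in hand, we can look up the devices we do own that share the reporter’s OS version or pixel density. A minimal sketch of that lookup, again using hypothetical device data:

```python
def related_devices(owned, os_version, pixel_density):
    """Return owned devices that share the reported OS version or pixel density.

    owned: device name -> (OS version, pixel density); data here is hypothetical.
    """
    return [
        name
        for name, (os_v, density) in owned.items()
        if os_v == os_version or density == pixel_density
    ]

owned_devices = {
    "Samsung Galaxy S6": ("6.0", "xxhdpi"),
    "Nexus 5": ("6.0", "xxhdpi"),
    "Moto G (2nd gen)": ("5.1", "xhdpi"),
}

# A user on an unowned Android 6.0 / xhdpi phone reports a layout bug:
print(related_devices(owned_devices, "6.0", "xhdpi"))
# ['Samsung Galaxy S6', 'Nexus 5', 'Moto G (2nd gen)']
```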

Setting Boundaries for Mobile Device Testing

Test automation is another useful tool to increase test coverage and reduce manual effort, but it can’t be considered a replacement for manual testing. In a perfect world, we’d have endless time and unlimited access to thousands of physical devices to test apps for every conceivable corner case. But that’s never the reality. Like most aspects of product development, mobile app testing needs to be a finite process, and there are limits to what you can physically test in a given amount of time. Using this decision-table technique is a great way to get the most out of your mobile app testing and greatly reduce the risk of users finding bugs after launch.