How to Test Mobile Applications
Published on September 27, 2021
Writer and tech entrepreneur Derek Sivers famously said that ideas are a multiplier of execution. An average idea paired with excellent execution will go further than a poorly executed brilliant idea.
A crucial part of the execution of any software product is ensuring it works as expected. MVP-stage startups might be able to get away with buggy software, but as more and more users begin to rely on a product, its stability becomes paramount.
Testing software before shipping it to customers is a vital part of the application development life-cycle. In this post, we'll look at different kinds of software testing strategies, their benefits, and when to adopt them.
Manual Testing and Its Downsides
There are different types and flavors of testing, but the most straightforward is manual testing. Launch your app, interact with its interface, and make sure it behaves correctly.
Unfortunately, manual testing doesn't scale. As your product grows, it experiences a combinatorial explosion in possible user journeys. Manually testing each scenario before every new release can end up taking days.
When manually testing, we look at the software with our eyes and use our brain to verify its behavior. Unfortunately, the biological hardware on which we run manual tests is not particularly fast.
If speed weren't enough of an issue, humans are inherently flawed machines. We are easily distracted, afflicted by all sorts of biases, and our performance depends on unrelated factors such as how much we slept the night before or our emotional state. We are bound to make mistakes and miss details.
Luckily, there are testing techniques that shift the load from humans to computers.
Developers build products with code that automates tasks for our users. Nothing stops them from writing code to automate the task of testing other code. We call these techniques automated testing.
Automated tests are faster and cheaper to run. They are easily repeatable, and everyone can run them. Automated tests also scale much better than manual tests because they can run in parallel on multiple machines.
There are different types of automated tests operating at various abstraction levels. Software and test engineers combine them to produce extensive and reliable test suites that ensure their applications always behave as expected.
The most common form of automated testing is unit testing. These tests work on isolated logical units, usually a single function or a method in an object.
A unit test has three common stages.
- First, it arranges an input for the system under test.
- Then, it acts on its unit, which produces an output from the given input.
- Finally, it asserts that the output matches the expected system behavior.
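The three stages above can be sketched in a few lines. The examples in this post use Python for brevity; the function under test, `discounted_price`, is a made-up stand-in for any isolated unit.

```python
# A minimal sketch of the arrange/act/assert pattern, using a
# hypothetical discount function as the system under test.

def discounted_price(price: float, discount_percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - discount_percent / 100), 2)

def test_discounted_price_applies_percentage():
    # Arrange: set up the input for the system under test.
    price, discount = 200.0, 25.0
    # Act: exercise the unit, producing an output.
    result = discounted_price(price, discount)
    # Assert: verify the output matches the expected behavior.
    assert result == 150.0

test_discounted_price_applies_percentage()
```

Because the unit is isolated and the input is fully controlled, tests like this run in microseconds and can be executed thousands of times per build.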
Unit tests are fast to run and give developers quick feedback on the effectiveness of their changes.
Some developers lean heavily on the fast feedback loop that unit tests provide with a practice called Test-Driven Development, TDD for short. In TDD, developers write tests before the application code. These tests necessarily fail, and the way in which they fail provides a hint on how to start implementing the code that will make them pass.
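Here is a sketch of that rhythm, again in Python, with a hypothetical `slugify` helper. If you run only the test before `slugify` exists, it fails with a `NameError`, and that failure points at the first implementation step: define the function.

```python
# TDD sketch: the test for a hypothetical slug helper is written first.
# Its initial failure (a NameError) tells the developer what to build.

def test_slugify_replaces_spaces_with_dashes():
    assert slugify("Hello World") == "hello-world"

# The minimal implementation written afterwards to make the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

test_slugify_replaces_spaces_with_dashes()
```

The implementation stays minimal on purpose: each new behavior gets its own failing test first, and the code grows only as far as the tests demand.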
Due to their sharp focus, though, unit tests cannot guarantee full coverage of the end-to-end application behavior. End-to-end behavior coverage is where integration and UI testing come into play.
As the name suggests, integration tests verify how individual units work together to achieve the desired outcome. Usually, these tests leverage the same infrastructure on which unit tests run. That is, they have access to the application's internals but verify the behavior of multiple units composed together, with little interest in the role each component plays.
To understand the difference between unit and integration tests, consider a common feature such as logging into the app. This functionality requires components such as a low-level networking module, an API client, and a view abstraction object. We can write dedicated unit tests for each component to verify the different facets of their behavior. These tests will instantiate a component and work on it in isolation, agnostic of the other parts of the system.
An integration test for the login will instead put all the pieces together and verify higher-level behaviors. One such test might ensure that when the view abstraction receives the signal to start the login with a username and password that the API client would report as incorrect, it will eventually present an error alert to the user.
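The login example can be sketched as follows. Every class name here is illustrative, not a real framework API: a stubbed networking layer stands in for the server, and the test verifies only the composed behavior, not any single component.

```python
# Integration-test sketch: a stubbed network, an API client, and a view
# abstraction wired together. All names are hypothetical.

class StubNetwork:
    """Stands in for the low-level networking module."""
    def post(self, path: str, body: dict) -> dict:
        if body.get("password") == "correct-horse":
            return {"status": 200, "token": "abc123"}
        return {"status": 401, "error": "invalid credentials"}

class APIClient:
    def __init__(self, network):
        self.network = network

    def login(self, username: str, password: str) -> dict:
        return self.network.post("/login", {"username": username, "password": password})

class LoginViewModel:
    """The view abstraction object from the example above."""
    def __init__(self, api: APIClient):
        self.api = api
        self.alert = None

    def submit(self, username: str, password: str):
        response = self.api.login(username, password)
        if response["status"] != 200:
            self.alert = "Login failed. Please check your credentials."

def test_incorrect_credentials_show_an_error_alert():
    view_model = LoginViewModel(APIClient(StubNetwork()))
    view_model.submit("mokagio", "wrong-password")
    assert view_model.alert == "Login failed. Please check your credentials."

test_incorrect_credentials_show_an_error_alert()
```

Notice that the test only touches the outermost component, the view model; how the client and network collaborate underneath is an implementation detail it deliberately ignores.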
Integration tests operate at a higher level of abstraction than unit tests because they verify how individual pieces collaborate. But there is an even higher vantage point from which to run tests: that of the user.
UI tests interact with the software through its user interface, simulating the actions a flesh-and-blood user would perform, such as tapping and swiping.
These tests treat the software as a black box: they interact with it but have no knowledge of, or access to, its internals. That's not the case with unit and integration tests, which depend on access to the individual components and are therefore referred to as white-box testing approaches.
UI tests are an excellent tool to verify the core user journeys from start to finish. On the other hand, because they simulate organic user interactions, these tests are slower and sometimes unstable, or flaky, due to factors such as networking delays or UI animations.
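The black-box nature of UI tests can be sketched like this. `AppDriver` is a toy stand-in for real UI-automation tools such as XCUITest or Appium; the point is that the test can only tap elements and read what's on screen, never reach into the app's internals.

```python
# Black-box sketch: the test drives the app only through UI-level
# operations. AppDriver is a hypothetical, in-memory stand-in.

class AppDriver:
    """Toy app exposing only what a user could see and do."""
    def __init__(self):
        self._screen = "home"

    def tap(self, element: str):
        # Simulates a user tapping a labeled element on screen.
        if self._screen == "home" and element == "Settings":
            self._screen = "settings"

    def visible_title(self) -> str:
        return {"home": "Home", "settings": "Settings"}[self._screen]

def test_tapping_settings_opens_the_settings_screen():
    app = AppDriver()
    app.tap("Settings")
    assert app.visible_title() == "Settings"

test_tapping_settings_opens_the_settings_screen()
```

A real UI test would launch the actual app and wait for animations and network calls, which is precisely why these tests are slower and more prone to flakiness than the in-memory sketch above.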
Together, unit, integration, and UI tests can provide thorough automated coverage for most of the application's behavior. With that latticework in place, developers can tactically deploy other testing techniques to further ensure the software's stability.
Other Testing Techniques
Snapshot tests verify an application's UI details by taking screenshots and comparing them against a pre-recorded baseline. They help prevent interface regressions and are also useful to ensure the layout renders as intended in edge cases such as with very long strings.
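The record-and-compare mechanic behind snapshot testing can be sketched in a few lines. Real snapshot tools diff rendered images; here a text rendering stands in for pixels, and all names are illustrative.

```python
# Snapshot-test sketch: output is compared byte-for-byte against a
# recorded baseline; the first run records it.

import tempfile
from pathlib import Path

def render_greeting(name: str) -> str:
    # Stand-in for rendering a view; real snapshot tests capture pixels.
    return f"+------------+\n| Hi {name}! |\n+------------+"

def assert_matches_snapshot(output: str, baseline: Path):
    if not baseline.exists():
        baseline.write_text(output)  # First run records the baseline.
        return
    assert output == baseline.read_text(), "UI regressed from baseline"

baseline = Path(tempfile.mkdtemp()) / "greeting.snapshot.txt"
assert_matches_snapshot(render_greeting("Gio"), baseline)  # Records.
assert_matches_snapshot(render_greeting("Gio"), baseline)  # Passes: unchanged.
```

If `render_greeting` later changed its layout, the second call would fail, flagging the regression; updating the baseline is then a deliberate, reviewable act.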
Performance tests run subsets of the application, from individual methods to complete flows, multiple times and compare the average execution time with a predefined baseline. Developers reach for performance testing to ensure computation-intensive parts of the application don't take too long to run, which would hamper the user experience.
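The measure-and-compare loop can be sketched as follows. The workload and the baseline budget are both made up for illustration; real performance suites also account for variance across runs.

```python
# Performance-test sketch: time the code under test several times and
# compare the average duration against a predefined budget.

import time

def sort_large_list() -> list:
    # Hypothetical computation-intensive operation under test.
    return sorted(range(100_000, 0, -1))

def average_runtime(fn, runs: int = 5) -> float:
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / runs

BASELINE_SECONDS = 0.5  # Budget chosen for illustration only.
assert average_runtime(sort_large_list) < BASELINE_SECONDS
```

Averaging over several runs smooths out one-off hiccups from the scheduler or caches, which is why a single timing sample is rarely trusted on its own.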
Pact tests specifically target the boundary between an application and its remote APIs. Because the API and the application usually live in separate codebases, they may get out of sync. For example, the API might respond with a list where the application expects a single element. Pact testing detects and prevents this type of issue.
Client applications and API providers make a pact on the data format for each request and response, described through a shared code interface. Developers then write dedicated tests to ensure the contract is respected.
The advantage of pact testing is that it doesn't require interaction with the other party, meaning it bypasses external factors such as network latency and availability. Thanks to the upfront investment in defining the contract between API and consumers, developers can run more reliable, isolated tests at a much faster pace.
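The core idea can be sketched as a shared contract that both sides test against in isolation. The schema format below is illustrative, not the real Pact specification format.

```python
# Contract-test sketch: consumer and provider each verify against a
# shared description of the login response, with no network involved.

LOGIN_RESPONSE_CONTRACT = {"token": str, "expires_in": int}

def respects_contract(payload: dict, contract: dict) -> bool:
    return (set(payload) == set(contract)
            and all(isinstance(payload[k], t) for k, t in contract.items()))

# Consumer side: the app's parser is exercised with contract-shaped data.
def parse_login_response(payload: dict) -> str:
    return payload["token"]

sample = {"token": "abc123", "expires_in": 3600}
assert respects_contract(sample, LOGIN_RESPONSE_CONTRACT)
assert parse_login_response(sample) == "abc123"

# Provider side: the API's serializer must produce contract-shaped data.
def serialize_login_response(token: str, ttl: int) -> dict:
    return {"token": token, "expires_in": ttl}

assert respects_contract(serialize_login_response("xyz", 60),
                         LOGIN_RESPONSE_CONTRACT)
```

If either side drifts, say the provider renames `expires_in`, its own contract test fails locally, long before a mismatched response ever reaches a user.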
Manual QA and Exploratory Testing
The sad reality for many people working in quality assurance is that they too often operate as manual testing machines. They receive checklists of scenarios to verify and painstakingly run through them, a terrible underutilization of their talent and skills.
When the development team has invested in automated testing, QAs no longer need to churn through manual testing checklists. They are instead empowered to focus on exploratory testing.
In exploratory testing, a QA engineer proactively searches for ways to break the application. Unlike all the testing techniques we've seen so far, exploratory testing discovers issues rather than verifying expected behavior. Hunting for hidden misbehaviors is a much more challenging task than merely replaying a given sequence of steps and looking for an expected result. More importantly, it's one that only creative humans can perform.
Exploratory testing is immensely valuable because it can prevent nasty issues from reaching users. Issues discovered in this way are often deep-rooted. Fixing them improves the quality of the software architecture itself, making it more stable for the future.
Where to Start with Automated Testing
With many different flavors of testing, it can be daunting to know where to start.
Whenever implementing new code, I recommend adding unit tests for it. There is no better time to write tests for a component than while building it.
Software developers spend most of their time modifying existing code, though, not writing brand new features. When working on an established project that doesn't have good test coverage, it's best to add UI tests to cover an area of the app before changing it. I suggest starting with UI tests because software lacking good test coverage often has components that are hard to test in isolation. With the UI tests in place, developers can be confident the desired behavior will be respected and start rewriting individual units to make them easier to test.
Adding tests to code that doesn't have them before changing it establishes a virtuous cycle. Each test you add makes adding further tests easier. Over time, you'll build an extensive test network.
Finally, remember that the value of tests is in the feedback they give. When in doubt, start with whichever test is easier to implement. Having that in place will give you the confidence to further iterate on your design.
As Spotify CEO Daniel Ek wrote in a shareholders report, "speed of iteration will trump quality of iteration." A thorough and varied automated test network will empower your team to iterate quickly without compromising your product's stability.
Gio Lodi is a writer of software and words. He lives in an Australian beach town with his wife and two children and works remotely as a mobile infrastructure engineer for Automattic. He's the author of Test-Driven Development in Swift. You can find him on Twitter at @mokagio and on his testing, automation, and productivity blog mokacoding.com.