An App That Matters: Getting Great Results with A/B Testing
In our previous post, we examined how to assemble a highly efficient and motivated team to create a remarkable mobile application. Now, let’s take a closer look at how to build useful features with your newly constructed team by using A/B testing.
What is Mobile App A/B Testing?
Mobile app A/B testing is a method of usability optimization for mobile applications and a fantastic way to gauge user engagement with, and satisfaction with, new features. The key idea is to test your hypothesis on a subset of users and, if it proves true, scale it up to everyone else. That way, future releases include only features that users have already "approved."
To deploy an A/B test, we first formulate a hypothesis to check and define the metric that will measure its result. To do so, marketers and analysts examine application metrics and find weaknesses that need improvement. From there, users are divided into groups and the test is run, with each group seeing only one implementation of the feature under test. Finally, we collect the results and adjust accordingly.
In our transportation app, a pop-up window showed the vehicles stopping at a bus stop. By tapping a route number, a user could go deeper into the application and see the selected vehicle's schedule. Our analysis revealed that users rarely made this transition, even though the pop-up window itself was one of the most frequently visited screens.
Our assumption was that users perceived the plates with route numbers to be untappable. To solve this, our designers created new route-number tags that we thought users would be more likely to tap on. However, we didn't want to rely merely on our assumptions. We needed to test whether the new design worked.
Mobile App A/B Testing in action
We split the users into two groups: Group A, the control group, whose users were not exposed to the change being tested, and Group B, the experimental group, whose users saw the new design. How you divide these groups, and in what proportion, depends on the importance of the feature, business goals, number of installs, how long the feature has existed, and a wide range of other criteria. In this case, we placed 90% of users in Group A and 10% in Group B. While you can use more than two groups to check a larger number of implementations, we recommend always keeping a control group to act as a benchmark for the others.
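A common way to implement such a split is deterministic bucketing: hash each user's ID into a number and map the first 10% of the hash space to the experimental group. Here is a minimal sketch in Python; the function name, experiment name, and user IDs are illustrative assumptions, not part of our actual codebase:

```python
import hashlib

def assign_group(user_id: str, experiment: str, b_share: float = 0.10) -> str:
    """Deterministically assign a user to group A or B.

    Hashing the user ID (rather than rolling a die each session) guarantees
    the same user always sees the same variant of the feature.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "B" if bucket < b_share else "A"

# Sanity check: roughly 10% of users land in the experimental group.
groups = [assign_group(f"user-{i}", "route-plate-redesign") for i in range(10_000)]
print(groups.count("B") / len(groups))  # close to 0.10
```

Seeding the hash with the experiment name means a user's group in one test is independent of their group in any other test.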
After running the campaign for 30 days, we observed that the variation outperformed the control, projecting a 25% increase in usage of the vehicle schedule screen.
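That 25% figure is the relative lift of the experimental group's conversion rate over the control's. A quick sketch of the calculation, with illustrative counts rather than our real data:

```python
# Illustrative counts, not real data: taps on a route plate per schedule view.
control_rate = 900 / 9000   # Group A: 10% tap-through rate
variant_rate = 1125 / 9000  # hypothetical Group B rate, scaled to the same size

# Relative lift: how much better B performed than A, as a share of A's rate.
lift = (variant_rate - control_rate) / control_rate
print(f"{lift:.0%}")  # 25%
```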
Setting a clear time frame will help you estimate the differences between the metrics in Group A and Group B. It is important to stick to this time frame: do not stop the test early, even if one group takes a confident lead in the initial phases. The waiting is the hardest part.
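Once the time frame ends, you still need to judge whether the observed difference is real or just noise. A standard tool for this is a two-proportion z-test; the sketch below assumes hypothetical end-of-test counts (the numbers are illustrative, not our actual results):

```python
from math import erf, sqrt

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 900 of 9,000 control users tapped through vs 125 of 1,000.
p_value = z_test(conv_a=900, n_a=9000, conv_b=125, n_b=1000)
print(p_value < 0.05)  # True: the difference is unlikely to be chance
```

This is also why stopping early is dangerous: peeking at the p-value repeatedly inflates the chance of declaring a winner by accident.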
Once the waiting is done and your hypothesis has won, well done: the goal of mobile app A/B testing has been achieved, and you can release the new feature to all users. If the experiment failed, that's totally fine. Your team did its best, and you avoided releasing a suboptimal solution that could have decreased your conversion rate. You will need to rerun the test with other options, but you now know even more about the application, and you can use that knowledge to further optimize what you are building.
It’s all connected
Running a good team and improving a feature are intertwined. If your team is not working well together, it will be difficult to implement not only mobile app A/B testing, but any feature at all.
For more information on building fantastic applications, feel free to reach out with any questions you have. We're always happy to help. Stay tuned for our next blog posts on mobile app development.