User Testing vs. A/B Testing

So you have a page, or a system, or a form or an app or anything, and you know you want to make it ‘better’.

And the question is…

Should we use User Testing or A/B Testing?

TL;DR: Both can be valuable, and you can do both, as long as you have an underlying indicator of ‘better’ that you can track over the long term.

A/B Testing

A/B testing is good because:

  • It shows you ‘for real’, at scale, what actually prompts your users to do the things you most want them to do
  • It can further polish processes that have already been through thorough user testing, to a degree that’s just not possible with a small group of people
  • It can self-optimize content in real-time to maximize the impact of short spikes of campaign activity

A/B testing is bad because:

  • You need a lot of traffic to get statistically significant results, which can rule out testing new or niche products
  • You can get distracted by polishing the details, and overlook major UX changes that would have significant impact
  • It can slow down the evolution of your project if you try to test *everything* before you ship
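To put a rough number on the traffic problem, here’s a back-of-envelope sample-size estimate using the standard normal-approximation formula for comparing two conversion rates (a sketch only; the 3% baseline sign-up rate and 20% relative lift are made-up example numbers):

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(baseline, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a given relative lift
    over a baseline conversion rate (normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 3% sign-up rate:
print(ab_sample_size(0.03, 0.20))
```

Even a fairly big relative lift on a low baseline rate needs on the order of ten thousand visitors per variant, which is exactly why new or niche products often can’t be A/B tested meaningfully.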

User Testing

User testing is good because:

  • You can quickly get feedback on new products even before they are live to the public; you don’t need any real traffic
  • It flags up big issues that the original designers might miss because of over-familiarity with their offering (the ‘obvious in hindsight’ things)
  • Watching someone struggle to use the thing you’ve built inspires action (it does for me at least!)

User testing is bad because:

  • Without real traffic, you’re not measuring how this works at scale
  • What people say they will do is not the same as what they actually do (this can be mitigated, but it’s a risk)
  • If you don’t connect it to metrics somewhere, you can’t tell whether you’re actually making things better or just stuck in an endless loop of iterative design

An ideal testing process combines both

  1. For the thing you want to improve, say a webpage, decide on your measure of success, e.g. number of sign-ups
  2. Ensure this measure of success is tracked, reported on, and analyzed over time, and check whether there are underlying seasonal trends
  3. Start with User Testing to look for immediate problems/opportunities
  4. If you have enough traffic, A/B Test continually to polish your content and design features
  5. Keep checking the impact of your changes (A/B and User Testing) against your underlying measure of success
  6. Repeat User Testing
  7. A/B Testing can include ideas that come from User Testing sessions
  8. Expect to see diminishing returns on each round of testing and optimization
  9. Regularly ask yourself: are the potential gains of another round of optimization worth the effort? If they’re not, move on to something else you can optimize.
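Checking the impact of a change against your measure of success can be as simple as a two-proportion z-test on sign-up counts, as in this minimal sketch (the visitor and sign-up numbers are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def signup_rate_test(control_visitors, control_signups,
                     variant_visitors, variant_signups):
    """Two-proportion z-test: did the variant's sign-up rate
    really change, or is the difference just noise?"""
    p1 = control_signups / control_visitors
    p2 = variant_signups / variant_visitors
    pooled = (control_signups + variant_signups) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# 3.0% vs. 3.8% sign-up rate, 5,000 visitors per variant:
z, p = signup_rate_test(5000, 150, 5000, 190)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real improvement
```

The same check works whether the change came from an A/B test or from a fix prompted by a user testing session, which is what keeps both loops honest against the underlying metric.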

That’s simplified in parts, but is it useful for thinking about when to run user testing vs. A/B testing?