5 reasons why UX research studies fail

Johanna Jagow · Published in Bootcamp · 6 min read · Aug 3, 2023

[Image: A researcher at his desk setting up a UX research study]

TL;DR: Sometimes our best intentions aren’t enough to make sure a UX research study is successful. This article covers five big study landmines and how to avoid them: from setting a clear study goal and recruiting the right participants to tweaking questions, balancing complexity and thinking about the right digital assets.

If you’ve ever launched a UX research study to participants, you know that tiny thrill of clicking the “Launch” button, seeing results come in (*hits refresh on repeat*) and finding out whether your efforts result in useful insights. If that sounds familiar, you might also know the feeling of reviewing your results and sometimes thinking, “Hold on, that’s not what I meant!” or other variations of not quite finding what you were looking for.

I’ve spent years launching, analysing, and socialising user studies as a hands-on researcher. More recently, I’ve seen the other side too, coaching and educating people who do research. Having reviewed hundreds of studies over the years, trust me: I’ve seen it all — the good, the bad, and the downright awkward.

In this article, I’m sharing the five biggest user study landmines I’ve seen and how they can trip you up, so you can walk away equipped with a few do’s and don’ts for your next study. Keep in mind that this article is focused on studies only, not the important things that can (and should) happen before and after you decide to run a piece of research in the first place.

1. Unclear goals

This is number one for a reason, as it’s certainly the biggest enemy of actionable research. If you are not clear on what specifically you need to accomplish with a study when you start building it, you’ll very likely end up disappointed or overwhelmed by the outcome. One common example is cramming lots of questions from your backlog into one study without a clear focus. Another is adding unnecessary details or questions and artificially “blowing up” a study when your actual goal is quite simple. Testing the whole journey when you really just need to assess whether participants understand a specific wording? That’s a sign of not having a clear study goal.

The rule of thumb here is: If it’s just “nice to know”, leave it out. Rather focus on crafting a powerful study that nails down your burning questions.

2. Recruiting the wrong participants

This one is a classic, as finding the right participants is often cited as one of the biggest challenges for researchers. I’ve reviewed studies in which, for lack of access to actual users, the developers of a feature or app answered questions about its usability. Something like this will completely skew your results, as it can never reliably reflect the end user experience. But even with a large pool of potential participants at our disposal, the challenge remains to find the right ones.

Without going down the whole rabbit hole of recruitment, one observation I’ve made is that recruiting for overly specific segments happens far more often than recruiting too broadly. I’ve seen usability studies trying to recruit 35–45-year-old male homeowners with a car from a specific brand, who use that brand’s existing app at least once a week and live in a certain area of a country — all to ask how easy or difficult it is to walk through a simple product configuration flow.

If you really do need to find out about the needs of a very narrow target group, consider recruiting just a few of those people and running heavily qualitative, in-depth conversations. If you want to test a journey for its usability though, trying to recruit multiple participants with that profile is guaranteed to lead to long field times, high costs, and missing out on insights that users outside of your ideal “bubble” might have for you.

3. Using research questions as study questions

“Would you buy this?” is not a good study question. “Do you like this design?” isn’t either. As much as I know how tempting it can be to ask the questions that keep us awake at night directly to participants, these are not for them to answer.

One real-life example is a company I worked with that wanted to find out whether people would buy a completely new product coming with a subscription model. They had never sold anything like it before and hadn’t done proper discovery research prior to the project. Presented with emotional pictures of a happy family enjoying the product that so clearly makes their lives better, what do you think participants answered when asked, “Would you buy this?”

The key problem is that if you ask such questions, participants will give you an answer that looks nice and useful from the outside. However, the underlying questions — “Is this relevant for existing customers or prospects?” “Does it truly solve a customer need?” “If so, what characteristics should the product have?” “How should we ‘package’ it, name it, describe it, make it available?” — are what actually help you assess true intent, and unfortunately they are much more complex. They should be tackled with a mix of methods, meaningful scenarios, calibrated questions and tasks, and collaboration across different teams.

4. Overcomplicating, then oversimplifying

This might sound confusing at first, but I’ve seen it happen regularly with teams at different maturity levels. For instance, a small team I worked with that had solid research expertise would dwell on study drafts for weeks, creating hugely complex tasks and questions. However, when the time came to take action, they felt that “nothing makes sense anymore,” deleted most of their draft, and resorted to extremely simplified tasks (e.g., “Explore the page”) and templated questions (e.g., the SUS questionnaire) that didn’t align well with their research questions.

By all means, constructing scenarios, tasks, and questions is not easy. If you can, get a fresh pair of eyes to look at and review your study. If you can’t do that, my advice is to revisit your goals and desired outcomes as you build your study, to find a balance between being specific and not directing participants to perform for you, and to opt for a phased approach. Try running a few short studies one after another to check the effectiveness of your tasks and whether or not some of your research questions still remain unanswered. This will make your life much easier than overthinking your way to launching the one, “perfect” study.

5. Using digital assets that aren’t suitable

With clear goals in mind and the right calibrated questions, you can get actionable insights with just about any asset you have — even if it’s just an idea in your head or a hand-drawn sketch. When it comes to prototype testing, though, I have reviewed many usability studies that used ineffective digital assets, and that can become a problem. Some examples:

  • A prototype where key parts don’t function properly, e.g. usability testing a chatbot MVP that will only give testers nonsense answers.
  • Using the wrong fidelity level, e.g. a low-fidelity prototype to test a complex journey, like a checkout process where all data fields are pre-filled, and it only takes a few seconds to click through.
  • Using a mock-up landing page for a scenario that is completely unrealistic in real life, e.g. a product overview page of a retailer’s shop that is filled only with ads for your product.

I’ll likely write about this topic in more depth soon, but one key takeaway is to think about the stage of the development process you are in and what goals you have (see how point №1 keeps coming back?). Do you only want to run a rough concept past participants and learn about their primary goals? Then a very simple, non-functioning prototype is okay. Do you need to check for usability issues at a later stage? Then your prototype needs to be highly functional and interactive, coming close to the final product.

I hope you found these five watch-outs useful. They are by no means a complete list of what can go wrong when running research studies, so feel free to add more and share your thoughts in the comments.

Hi, I’m Johanna, a freelance UX Advisor based in sunny Barcelona. 👋🏼 I partner up with companies on all things UX research, UX management, and ResearchOps. I also create resources and products to level up your UX game. More content is on the horizon so follow me here to catch every update!
