Prototype testing isn’t as simple as you think

Johanna Jagow
Published in Bootcamp
7 min read · Aug 17, 2023

[Image: A researcher playing hopscotch on a paper prototype. Generated with Microsoft Designer]

Summary: This article covers five non-obvious challenges that can make or break your prototype studies: Which fidelity level matches your questions? How to make sure participants understand it’s just a prototype? How to create the right focus? What if the prototype isn’t ready? What else to consider before launching? Make sure your study doesn’t flop by applying the best practices described to tackle each of them.

Many websites suggest that running a research study with a prototype is the simplest thing in the world — à la "just throw a link in there, add some template questions, and you're good to go!" While this might work out in some cases, it's certainly not a universal recipe for success. The devil, as so often, is in the details, and I've seen many such studies fail despite the countless guides out there. In this article, I'm answering the biggest non-obvious questions that teams I've partnered with have asked me over the years about how to run prototype studies successfully.

Prototype testing is everywhere, and it's without a doubt a powerful method for understanding user perceptions and refining products pre-launch. In UX research, a prototype is a design artefact used similarly to a hypothesis: an option that may or may not solve a user problem (source: NN/g). There are many different types of prototypes, and their nature can vary greatly when it comes to the medium used (paper, digital, or even 3D for AR/VR experiences) and specifications like the level of detail, complexity, and so forth. This article focuses on digital prototypes created with commonly used tools like Figma, InVision, etc.

“What kind of prototype should I use for which questions?”

Choosing the wrong fidelity level is one of the top mistakes I observe when reviewing studies with prototypes. Interestingly, I've seen many teams use the same fidelity level for every study and question. This is not an approach I'd recommend, for two main reasons: it's rarely sustainable (e.g., always creating very complex prototypes even for very early-stage concepts) and it can make you miss out on important parts (e.g., only using very basic prototypes, then jumping right into coding). In reality, fidelity levels are fluid, so it's not always easy to tell which category a candidate falls into. However, as a rule of thumb, this is what I recommend:

  • Low fidelity: These are best for demonstrating core product functionality, information architecture, and things like user flow. Low-fi prototypes usually don't use real images, copy, etc., which is why they should be used in early stages. The goal here is to look at concepts and primary user goals, not details. Appropriate research questions are: At the minimum, what functionality do users need? Which general flow do they expect? Are there major usability issues in our proposed solution?
  • Medium fidelity: These have added details such as a more realistic layout (white space, colours, buttons), more steps in the user flow, and assets that resemble the end result more closely. Usually, there are still completely underdeveloped areas too, for example non-clickable parts. The main research question for this level is: How can we improve and refine this product/service?
  • High fidelity: These are highly functional and interactive prototypes coming very close to the final product. Most of the necessary design assets and components are developed and integrated. They are often used in the later development stages as they are great for fine-tuning flows and testing whether something is safe to launch. Appropriate research questions are: Can users intuitively and successfully use this product? What are users’ reactions to the actual look and feel? Is there anything else we need to fix before handing off to development?

In an ideal world, we don't run just one prototype study at one fidelity level. Rather, see it as a process: start with low fidelity, then iteratively refine and add more details along the way, as in this scheme:

[Image: A holistic approach to prototype testing (own creation), in three steps: basic, more added details, and coming close to the final product]

Please also consider that things like accessibility testing only really work with higher-fidelity prototypes. Low-fi prototypes intentionally leave out many of the elements and details that could support users with impairments, which is why testing a product's accessibility requires a higher fidelity level.

“How can I make sure participants understand it’s just a prototype?”

Reflecting on this question, countless chats come to mind in which frustrated researchers said things like "But I did add a big disclaimer that some parts aren't clickable, and people still complain and rage-click on empty spaces. What else should I do?!" There are two big reasons why a single disclaimer isn't enough to make sure participants understand the nature of a prototype:

  • Unlike the people who create and use prototypes in their job, most participants aren't used to interacting with them at all. Even if we tell them that a button might not be linked, it's still a weird and unexpected thing when it happens, so it's best to adjust our expectations accordingly.
  • Attention spans are very limited and people forget things; it's human. Our short-term memory lasts about 15–30 seconds and can only hold around 7 pieces of information at a time (source: Saul McLeod, PhD). User studies present a lot of information to participants all at once, making it hard to focus and remember the important parts when it counts.

This is why, depending on the level of detail, but especially for very early-stage prototypes, I recommend adding a disclaimer both at the beginning of the study and right before the actual task starts. Place them on separate pages so they aren't just one big clump of text that can easily be skipped over. If you need to be extra sure, you can also ask participants to explicitly confirm that they understand they are about to use a prototype. And if you still get the occasional participant who is completely thrown off by the prototype, it's best to move on to the next one and exclude that data from your results.

“How to make sure participants will focus on the right thing?”

Finding the right balance between guiding participants and letting them explore naturally during prototype testing is a big dilemma for researchers. Telling participants what to focus on biases them towards something they might not have noticed otherwise; giving them no direction at all can produce a pile of data that isn't relevant to the researcher. On top of that, there's always the challenge that in controlled user studies, participants generally know their input is being watched and analysed, so they're likely not acting 100% naturally anyway.

The path to actionable results lies in crafting powerful scenarios before the actual task and being mindful when constructing task descriptions. A little storytelling can help too, creating the right mood and setting instead of putting participants in front of something that, in this moment, they might simply not be interested in. If, for example, you want to test a delivery-slot booking feature in a grocery retailer's app, give participants a realistic scenario like: "Imagine you're in a rush and want to quickly book a delivery slot for next Tuesday between 5 and 6 pm. As you don't know whether you'll be home at the time, you also need to find out whether there are flexible booking options." This gives participants an actual, realistic challenge instead of just letting them look at a page and share anything they might like or dislike about it.

Additionally, put some thought into where participants should start the task in your prototype. All too often, I've seen studies generically start at the homepage despite having a research question that focuses entirely on something else. This can lead to large amounts of irrelevant feedback that no one really cares about in that context. Not only is that a poor use of the participant's and researcher's time, it's also unnecessary if you follow the advice above: crafting a meaningful scenario and task will get you to actionable insights more efficiently and leave room for more detailed questions about the things you really do need to find out.

“The prototype is not ready … should I just run the study anyway?”

This is a quick one, and my answer usually is "no". Ideally, before reaching that point, the goals and scope of the study have been closely aligned with the required digital assets and whoever is creating them. If the prototype then isn't ready for study launch, postpone until it is. There are rare cases in which a shift of focus in your tasks can yield meaningful results even though parts of the prototype aren't ready (e.g., if you planned to offer multiple equal options to explore but now fewer are available). In most cases, however, using a prototype that's not ready will compromise the results in obvious ways (participants being unable to select certain things) and non-obvious ones (biased results due to, e.g., unrealistically easy-to-complete tasks).

“What else do I need to consider before launching?”

Navigating everything there is to consider for prototype testing can be complex. One takeaway I always share is to cover your bases on a few key technical aspects to make sure your study doesn't flop:

  • Optimise the prototype for fast loading times to prevent study abandonment and results skewed by participant fatigue.
  • Nail down your appropriate task flow and make sure it’s properly reflected in your prototype.
  • Use clear and straightforward language and avoid UX jargon or placeholders (lorem ipsum) as they might confuse participants.
  • Disable potentially leading elements like hotspots, page titles, comments, and your prototyping tool’s UI to reduce noise.

I hope you found this quick Q&A helpful. Feel free to share any other big challenges you’ve encountered for prototype studies in the comments!

Hi, I’m Johanna, a freelance UX Advisor based in sunny Barcelona. 👋🏼 I partner up with companies on all things UX research, UX management, and ResearchOps. I also create resources and products to level up your UX game. More content is on the horizon so follow me here to catch every update!


Independent UX researcher and UX consultant based in Barcelona. I write about all things UX Research, UX Management, and ResearchOps. https://johannajagow.com/