6 questions about usability testing from a newbie

My friend Jamie is starting to head up usability testing efforts in her company. As part of her learning process, she asked me a few questions about how I test. So I decided to publish the answers here.

One thing that I’ll say is that I’m increasingly wary of relying on usability testing to “validate” designs. I truly believe that to get a project right, you need to involve end users in feedback sessions, starting at the wireframe stage. Basically, talk to people every week. Involve everyone on your project team in the research by encouraging them to attend sessions and ask questions. Or show them recordings of the feedback sessions.

Usability testing is still a must once you get to that stage. But many problems can be avoided if you involve users constantly throughout the design process.

That said, here are my answers to Jamie’s questions!

Who from the project should be involved in the process of deciding what to test?
I usually sit down with a Product Manager, a Business Analyst, members of the User Experience Team, and any other critical stakeholders to discuss research goals, objectives, and timing. After that meeting, I create and circulate a one-page research plan to summarize/formalize what we’ve agreed on.

How do you determine what to test?
It all comes out of the discussion with PMs, BAs, UX, and stakeholders. In my world, I’d say everything should be usability tested! It should be an ongoing cycle.

Is there a typical duration for a test, or at least a good standard for software testing? And what’s a good number of tasks to have someone complete during a test?
I lumped these two questions together because they’re related. This is definitely an area where a researcher can’t be rigid. There’s no right answer to either of these. As a rule of thumb, however, I’d suggest no more than 1½ hours and two scenarios. But I’ve done longer sessions with four scenarios and shorter ones lasting 15 minutes with a single scenario.

If the project team decides there is a TON to test, I’d say two things:

  1. You should have done research much earlier in the process.
  2. Consider splitting things up into multiple testing initiatives. Make each session more focused rather than trying to do everything in a single effort.

Is it better to put a time constraint on a task to determine if the user “failed” the task, or is it better to just let them keep working, no matter how long it takes?
In planning for research, every element should be roughly timeboxed. Meaning that if you have two scenarios, each should have a timeframe associated with it, and you should also know how long your introduction and follow-up questionnaires will take. Not to say you have to follow these timeframes exactly. But they should frame the overall discussion. This is how you ensure that you don’t waste the user’s time and end up keeping them longer than they anticipated.

That said, sometimes users just flounder. Even if that happens, you should have backup questions that fill time and still extract value from the session. Facilitating gets easier the more you do it. At first, struggling users can make a session super difficult. But as you get more experienced, you learn more and more how to draw users out of funks.

The best feedback usually comes when things go off-script. So just be patient and ask good questions! On the rare occasion that the session becomes painful, don’t feel bad about wrapping up early. Just tell them you got through everything you needed to get through and that they can have the rest of their morning or afternoon back!

Have any tips on intervention to help a user along with a task if they’re stuck on something?
This is related to the above. But specifically, keep asking open-ended questions. If, after a few tries, they can’t come up with an answer, by all means “give the answer away.” Especially if it’s a critical step in completing a process and you need feedback on the next step. Just have a discussion centered around the right answer. Ask what would have made it more obvious, or whether they have other ideas that could make the design easier to interact with.

If you do put a fail time on a task, where do you get that number from? Is there a rule of thumb, like, the amount of time it takes me (the SME) to do it, times 5 or something?
I honestly haven’t done much time-on-task testing. But I’m not sure that there’s a magic equation (though I could be wrong…any quantitative researchers want to correct me?). Especially one based on how long it takes an SME to do something versus an actual end user. I’d say that it also comes down to how familiar a user is with a design – are you testing enhancements to an existing design? Or something brand new? If a user is interacting with a design for the first time, it’ll obviously take more time than it will after they learn how to use it.

Thanks for asking these questions, Jamie!

20. December 2013 by Michael Seidel
Categories: usability
