UX Research Planning Checklist

Overview

Before you start your research, you need to think about the best method to answer the type of questions you have. With so many methods available it can be difficult to choose, but selecting the right research method is critical to getting the best-quality insights out of a research activity.

This procedure will help you select a research method that best matches both the type of questions you have and the scale of the opportunity.

Figure 1: Overview of the UX Research Methods Procedure.

1.1 Cheat sheet to help you evaluate a UX proposal or request a quote

It’s well worth understanding when to use each type of question and which methods are the best match by following this procedure, especially if you make these decisions frequently or need to evaluate a proposal from a vendor. However, understanding all the research methods may not be necessary if you just need a quick answer to evaluate a proposal, request a quote or pick a vendor.

For answers to the most commonly proposed activities, use the table below.

Table 1: Commonly used UX questions.

Situation / Description / Recommended methods
You’re at the start of the project, or you need to know how to design a system that helps users with their problems / ‘Attitudinal studies’ are the best match when you don’t yet have a system that you are confident meets user needs. Attitudinal studies are best partnered at a later stage with the ‘behavioural studies’ below to validate that what participants said matches what they do in real life / ‘Ideation workshops’, ‘interviews’ or ‘contextual interviews’ (sometimes called contextual enquiry) can help you understand what solution to propose to meet real-world needs, what users’ tasks are, and the common problems they face
You have an existing design or proposal and need to check that it is easy to use and that users will not need to contact support before you build it / ‘Behavioural studies’ are the best match when you have an existing system or proposal, because you can probe around issues by directly observing users using the system / Usability ‘lab testing’ is the best method when you have a specific target user group, or ‘guerrilla testing’ if your system is for a general audience and you don’t have the budget or time to recruit
You need it all! / Start with attitudinal studies to understand what users need, then design a prototype that matches what you’ve learnt, then validate that users can use the proposed design with behavioural studies / ‘Ideation workshops’ will help you generate new knowledge from interacting participants. Once you are satisfied the design matches the needs you’ve learnt about, validate the design with ‘usability lab testing’

1.1.1 Common mistakes to avoid

Inferring qualitative results from quantitative methods is considered very poor practice and can even undermine an evidence-based process by establishing hard-to-shift myths. For example, a Google Analytics report of visits or usage is a quantitative value that counts the frequency of an action, not its popularity; this is especially true for government processes, which are typically mandatory rather than undertaken by desire or choice.

‘Popularity’ and ‘ease of use’ are qualitative insights that require qualitative research methods.

If you need to understand which methods are the best fit for other situations, continue with the procedure on the next page.

2 Procedure

2.1 Define the type of question

Not all types of research methods are suitable for all types of questions. Choosing a method that is not well matched to the type of questions you have risks reducing the quality of your results.

Common categories of research questions include usability, vocabulary, features, navigation, and satisfaction. Most questions are behavioural, attitudinal, or both:

  • Behavioural questions are the best match when you need to know what people do
  • Attitudinal questions are the best match when you need to know what people say.

Use the table below to determine whether your type of question is behavioural or attitudinal.

Table 2: Common categories of UX questions.

Type of question / Description / Type
Usability and ease of use / Evaluate the extent to which end users are able to use an existing design / Behavioural
Vocabulary and instructions / Evaluate the extent that the vocabulary is familiar and users can understand the instructions in an existing design / Behavioural
Features & functions / Explore what features and functions users need for a system that is yet to be designed / Attitudinal
Navigation and information architecture / Evaluate whether the navigation is easy to use and the content structure is intuitive / Behavioural for an existing system, or attitudinal for a content structure yet to be designed
Satisfaction and impressions / Gather first impressions of the system value proposition and perceptions of quality / Attitudinal

For any project you may have multiple types of questions, and some questions may require two or more research activities using different methods to answer satisfactorily. Let’s continue to select a research method.

2.2 Select a method

Now that you have decided whether your questions are attitudinal or behavioural, you need to select the research method that is the best fit.

Commonly undertaken UX research methods roughly align with one or more categories of qualitative, quantitative and evaluation methods:

  • Qualitative methods are the best match when asking why and what to do about it
  • Quantitative methods are the best match when counting when, how many and how often
  • Evaluation methods are the best match when critiquing against a principle or convention.

Use the diagram below to decide where your questions fit:

If you chose an attitudinal question, look at the top of the diagram to pick a method that will help you understand “What they say”.

If you chose a behavioural question, look at the bottom of the diagram to pick a method that will help you understand “What they do”.

Figure 2: Pick a quantitative or qualitative method on the horizontal axis, and an attitudinal or behavioural method on the vertical axis.

2.2.1 When to use qualitative and moderated research

Qualitative research methods are resource-intensive but are often the best methods to explore questions about what users need, their attitudes and expectations and how and why users behave the way they do. Qualitative methods are often favoured for UX research because behaviour can be observed in a way that helps to build evidence to support findings.

Moderated qualitative research adds significant value because direct observation and facilitation assists deep understanding of users’ perceptions and their attitudes, and allows for probing around issues and clarifying responses.

Qualitative research methods are often the best fit when:

  • Informing assumptions of what users’ key goals and tasks are at various stages of a process
  • Validating that a proposed design, function or service is usable before building
  • Prioritising the most useful features that support user needs, and identifying features or functions that users expect but are missing from a proposal or existing system
  • Checking the extent that the terms and vocabulary are clearly understood
  • Checking the extent that the navigation is intuitive and the process flow is sensible
  • Assessing the likelihood that users can complete their tasks accurately, quickly, and unassisted
  • Challenging existing assumptions and generating new knowledge from interaction and processes.

Common qualitative research methods

  • User workshops
  • Usability testing
  • Interviews
  • Field study, contextual interview or contextual enquiry
  • Facilitated card sorting.

2.2.2 When to use quantitative and remote research

Quantitative research activities are often the best methods to explore questions about how often, how many and when users perform particular actions.

Remote unmoderated quantitative research methods add the most value when recruiting users would be difficult or cost-prohibitive, or where large populations are too geographically dispersed to observe.

Remote research methods are often the best fit when:

  • Measuring the extent that a proposed content structure (‘information architecture’) is likely to help users find content
  • Counting which existing features on an existing service users use most frequently
  • Estimating where problems might exist and determining where further investigation is needed
  • Gathering feedback from large populations on how they rate overall usefulness of the system
  • Understanding large population behaviours – actions performed and when, from where and using which devices.

Common quantitative research methods

  • Remote online surveys and feedback forms
  • Analytics and data-driven insights like ‘A/B testing’ or ‘single ease scores’
  • Remote online prototype testing
  • Remote online card sorting
  • Field study or contextual enquiry.
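
For readers unfamiliar with how the survey metrics mentioned above are scored, here is a minimal Python sketch of the standard System Usability Scale (SUS) calculation and a Single Ease Question (SEQ/SES) average. The function names are illustrative, not part of this procedure; the SUS scoring rule (odd items score rating minus 1, even items score 5 minus rating, sum scaled by 2.5) is the standard published one.

```python
def sus_score(responses):
    """Score one completed 10-item SUS questionnaire (each rating 1-5)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    # i = 0 corresponds to item 1, which is an odd-numbered item.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scales the 0-40 raw sum to 0-100

def mean_ses(ratings):
    """Average Single Ease scores (1 = very difficult, 7 = very easy)."""
    return sum(ratings) / len(ratings)

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(mean_ses([6, 7, 5, 6]))                     # 6.0
```

Note that a single SUS or SES number is still a quantitative measure: it can flag that a problem exists, but qualitative methods are needed to understand why.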

2.2.3 When to use evaluation methods

Evaluation methods are often the least resource-intensive because they do not involve users directly. Evaluation can be useful when resources are not available to involve citizens in your research activity, e.g. for low-risk, low-priority systems with small user communities of infrequent users.

Instead of including users in research, evaluation uses expert knowledge to review designs against common best practice design conventions, standards and principles.

An expert review (sometimes called a ‘heuristic’ or ‘desktop’ review) is a critique of the extent to which a design complies with theoretical ‘rules of thumb’, reporting recommendations for improvement based on a critical evaluation. Expert review is often the best fit when:

  • Quickly determining if the information architecture and design follow best practice conventions
  • Auditing the system for compliance with accessibility and government standards
  • Comparing the system to competitors or peers in the same space.

Common evaluation research methods

  • Expert heuristic reviews, sometimes called ‘desktop reviews’
  • Comparative analysis
  • Accessibility audits.

Limitations of evaluation methods

Evaluations rely on expert knowledge and theory to generate insights. Although principles, standards and theory can help identify areas where issues are most likely to be found, these methods miss out on opportunities to generate new knowledge from unfamiliar situations seen from the perspective of ‘fresh eyes’.

Evaluations are generally best used to complement, rather than replace, research involving users of the system, because experts’ views are unlikely to represent the views of typical novice users.

Evaluations are a cost-effective activity when existing user-centric research is recent, robust and useful, or when the intended users of a system are specialists themselves.

2.3 Define the scale of opportunity

Opportunities for useful research almost always exceed budget and time limitations, so it’s important to understand up front which activities you should prioritise. Research can be scaled up or down, either by including more or fewer participants or by using less time-intensive methods. Understanding how complex your research questions are will help you determine the scale of opportunity.

2.3.1 Determine the complexity of the research

When evaluating the opportunity for research consider the best available business knowledge for your project, and how certain you are about the assumptions you’ve made about users, who they are, what they are trying to achieve and the common issues they face.

Use the template below to apply your understanding of your project’s social, environmental and financial impact to determine whether your research is likely to be mostly simple, or mostly complex.

Table 3: Example template for understanding the scale of opportunity

Category / Simple / Moderate complexity / Highly complex
Size / Hundreds of users or people impacted / Thousands of users or people impacted / Hundreds of thousands of users or people impacted
Frequency / Rare interactions for a small amount of time less than once per month / Some interactions, multiple times per month / Frequent interactions many times per week
Evidence / Existing research is well received, thorough and contemporary / Existing research is thorough but disputed, and under 5 years old / No existing research
Processes / Easy to understand process with few tasks or simple online forms / Moderately difficult or time consuming processes / Online transactions or long multiple-step processes
Stakeholders / Stakeholders agree with what the system does, or agree that an existing system is performing well / Stakeholders have different expectations for what a new system could do / User needs are unknown, based on anecdotal evidence
Improvements / Improving this system may reduce frustration but issues are unlikely to prevent users completing tasks or complying with processes / Improving this system is likely to assist users complying with processes and completing tasks unassisted / Improving this system is likely to reduce errors, assist users complying with processes and completing tasks unassisted, and reduce support costs
Recovery from failure / An alternative course of action exists that is unlikely to be disruptive / A mandatory process is unlikely to proceed to schedule / Failure would cause government or users financial or reputational loss
Financial outlay / Level 3 or PME light project with a value of less than $50,000 / Level 3 to Level 2 project with a value of more than $50,000 and less than $500,000 / Level 3 to Level 1 project with a value of more than $250,000
Expected benefits / Return on investment in productivity gains, procedural efficiency or support savings is likely to be similar to the value of the research in time and cost / Return on investment in productivity gains, procedural efficiency or support savings is likely to be modestly more than the value of the research in time and cost / Return on investment in productivity gains, procedural efficiency or support savings is likely to be significantly greater than the value of the research in time and cost
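
If you record a rating per Table 3 category, the overall lean can be tallied mechanically. The sketch below is an illustrative assumption, not part of the WoVG procedure: it takes a majority vote across categories and, on a tie, leans towards the higher complexity, on the reasoning that under-scoping research is the costlier mistake.

```python
from collections import Counter

# Ratings in increasing order of complexity; ties resolve to the later entry.
RATINGS = ("simple", "moderate", "complex")

def overall_complexity(ratings_by_category):
    """Return the most common rating across categories (ties lean complex)."""
    counts = Counter(ratings_by_category.values())
    return max(RATINGS, key=lambda r: (counts[r], RATINGS.index(r)))

# Hypothetical project rated against the Table 3 categories.
example = {
    "Size": "moderate", "Frequency": "simple", "Evidence": "complex",
    "Processes": "moderate", "Stakeholders": "moderate",
    "Improvements": "simple", "Recovery from failure": "simple",
    "Financial outlay": "simple", "Expected benefits": "moderate",
}
print(overall_complexity(example))  # moderate
```

In practice the categories are rarely equally important, so treat any such tally as a starting point for the judgment the template asks you to make, not a substitute for it.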

2.3.2 Evaluate the opportunities for research

Now that you understand the complexity, use the diagram below to determine what types of activities are appropriate for the scale of your research opportunity.

If you’re an experienced UX researcher you may already have preferred tools.

If you need guidance, use the diagram below to find where your system sits between mostly simple and mostly complex on the horizontal axis, and whether you’re planning something new or a significant change on the vertical axis. In each quadrant you’ll find recommendations for activities that might be appropriate.

Figure 3: Diagram showing types of activity matched to the scale of opportunity

Acronyms and terms / Description
UX / User Experience
WoVG / Whole of Victorian Government
DPC / Department of Premier and Cabinet
DJR / Department of Justice & Regulation
SUS / System Usability Scale
SES / Single Ease Score

TRIM ID: CD/16/417349

Enterprise Digital, Integration and Application Services

Page 1 | February 2017 | WoVG Digital Standards Framework