Method | Description | Strengths | Potential challenges |
---|---|---|---|
Qualitative preference methods (preference exploration) | |||
Focus-group discussions (FGDs) | Focus groups serve multiple purposes: they can stand alone as a valuable source of group and individual insight or can enhance understanding of findings from quantitative methods like surveys. In focus groups, participants discuss a topic in response to a short set of focused questions, usually ranging from 4 to 10 questions total. A moderator leads the conversation, ensuring that it stays on topic and managing discussion dynamics, while a notetaker captures non-verbal cues like facial expressions and body language. Focus groups excel in capturing diverse opinions from individuals who share certain characteristics, offering a rich understanding of preferences and viewpoints. | • Resources available: There are many existing, accessible resources for developing FGD guides • Efficient data collection: Provides an economical way to collect data that captures a wide range of responses • Provides rich insights: Offers detailed explanations for individual and collective preferences, behaviors, and decision-making • Stimulates group thinking: Participant comments stimulate the thinking of others, especially in the context of something preferred versus not preferred • Visual and prototype friendly: FGDs are well-suited for the use of visual aids or prototypes to enhance discussions • Can provide instant reactions: Allows for the observation and documentation of instant reactions to ideas or options, including nonverbal cues | • Depends on moderator skill: The quality of FGD data relies on the moderator’s skill in guiding the group • Logistical challenges: FGDs may encounter issues such as convening a group at the same time and securing an appropriate space • Confidentiality risk: Confidentiality cannot always be ensured in a group setting • Negative group dynamics: Dominant participants and/or power differentials may affect the expression of marginalized or “less acceptable” views • Potentially limited transferability: Findings may not easily apply to other populations or settings • Requires time and expertise: Transcription, translation, and analysis of FGD data can be time-consuming and require expertise, particularly when dealing with complex data and study objectives |
Individual in-depth interviews (IDIs — semi-structured and unstructured) | IDIs are a powerful tool in health research to explore individual viewpoints. These interviews come in two types: semi-structured and unstructured. In semi-structured interviews, an interviewer follows a guide to steer the conversation. This guide includes key topics and open-ended questions to encourage thoughtful responses. Unstructured interviews have a general theme or topic of interest but do not follow a set list of questions, allowing for a more free-flowing discussion. Both approaches are excellent for gaining detailed reasons behind a participant’s preferences and choices. | • Resources available: There are many existing, accessible resources for developing IDI guides • Promotes participant openness: Encourages participants to freely share their insights and experiences in their own words • Enables flexible exploration: Allows for the collection of in-depth information beyond what was initially planned as concepts or relevant topics arise during data collection • Provides rich insights: Offers detailed explanations for individual preferences, behaviors, and decision-making • Private and comfortable: The one-on-one nature fosters a private and comfortable setting, which may improve trust and sharing | • Depends on interviewer skill: The quality of IDI data relies on the skill and judgment of the interviewer • Potential data comparison challenges: Comparing data between different respondents can be difficult due to a lack of standardization and varying responses • Time-intensive: Conducting IDIs with each participant can be time-consuming for both participants and data collection teams • Potentially limited transferability: Findings may not easily apply to other populations or settings • Requires time and expertise: Transcription, translation, and analysis of IDI data can be time-consuming and require expertise, particularly when dealing with complex data and study objectives |
Quantitative preference methods (preference elicitation) | |||
Allocation of points questions | Allocation of points questions ask people to prioritize different features or attributes by giving them a set number of points, usually totaling 100. Participants distribute these points across the attributes, giving more points to what they value most and fewer to what they value less. They can even give zero points to attributes they do not find important. These questions are often part of a structured survey but can also be included in in-depth interviews for a more comprehensive mixed-methods approach. | • Simple to design: Creating the questions is straightforward • Easy to administer: Questions can be completed quickly and easily by most participants • Suitable for many contexts: Can be used in a wide range of settings, including those where paper-based methods are required • Improved precision for preference estimates: Provides more granular insight into preference differences between items or attributes compared to ranking and rating questions, but less than DCEs and BWS type 1 • Can identify unimportant attributes: Participants can clearly indicate item attributes they deem unimportant by allocating zero points to them • Enables equal preference indication: Participants can denote a tie by assigning identical point values to the various attributes they equally prefer • Simple analysis: The data generated are easy to analyze and interpret | • Risk of framing effects: The framing of the task can shape responses, necessitating careful consideration of how they are presented • Potential for higher cognitive burden: Allocation and assigning precise values may be challenging and more mentally taxing than ranking or rating questions and may affect response quality • May be confusing and difficult: The process of translating preferences into a precise allocation of points could be confusing for some participants, especially for those with lower education • Limitation with paper-based methods: When only paper-based methods are available, completing the allocation of points can be more difficult as the total number of points remaining is not continuously updated • Potential for unreliable data with many attributes: With a large number of attributes (more than 5–7), it becomes challenging for participants to provide consistent point allocations, which may affect the robustness of the data |
Best-worst scaling (BWS) type 1 | BWS type 1 is a type of choice experiment that is used to find out which items (statements, features, criteria, outcomes, etc.) people value or like the most and least. While there are three variations of BWS, type 1 (object case) is the most versatile, commonly utilized, and likely to be the most applicable for use in resource-limited settings. Participants engage in a series of questions, referred to as “choice tasks,” to reveal their preferences. In each of these tasks, they are usually shown a set of 4 to 5 different items and are asked to pick both their top favorite (best) and least favorite (worst). | • Clear and intuitive: The task of selecting the best and worst from a set in each question is generally easy for most people to understand • Suitable for many items: Can effectively assess a large number of items or attributes, making it a good choice when many must be evaluated • Theory based: Grounded in random utility theory, which models real-world decision-making processes • High precision for preference estimates: Provides precise estimates of the relative importance and magnitude of difference in preference between items • Can identify hidden preference groups: Latent-class analyses can unearth “hidden” groups with similar preferences | • Risk of framing effects: The framing of hypothetical choice scenarios can substantially shape responses, necessitating careful consideration of how choice tasks are presented • Large sample sizes often needed: BWS type 1 typically requires larger sample sizes compared to other methods to ensure reliable and robust findings • Potential for high cognitive burden: Making multiple best-worst trade-offs can be mentally taxing for participants, which may affect response quality; however, it may be more suitable for those with low health literacy than DCEs • Cannot determine absolute importance: It does not allow for understanding of the overall importance of the attributes, including whether all, some, or none of the attributes are important to participants, without adding “anchoring” questions • Expertise and software required: Both the design and analysis stages are simpler than DCEs but still require expertise and the utilization of statistical or specialized software |
Discrete choice experiments (DCE) | DCE is a type of choice experiment used to find out what features or characteristics (i.e., attributes and attribute levels) people care about the most when given different options (i.e., profiles) that mimic products, services, or policies. Participants answer a series of questions, known as “choice tasks,” which help understand what they prefer. In each task, they usually see 2 or 3 different options that have differing features and pick the one they like the most. Some versions of DCEs allow participants to pick “none” if they do not like any options or to say if they would actually want or use their favorite option in real life if it were available. | • Mimics real-world decision making: DCEs require participants to make trade-offs between options with different features, mirroring the decisions people commonly face in real-life situations • Theory based: Grounded in random utility theory, which models real-world decision-making processes • High precision for preference estimates: Provides precise estimates of the relative importance and magnitude of difference in preference between attributes and attribute levels • Can determine willingness to trade: Enables assessment of the extent to which participants are willing to compromise on less preferred features for more preferred ones • Can identify hidden preference groups: Latent-class analyses can unearth “hidden” groups with similar preferences • Simulation capabilities: DCEs support simulations, which extrapolate participant preference data to predict real-world demand and uptake of different potential options | • Needs careful selection of attributes: The development of the different attributes and their levels requires thoughtful consideration to ensure that key drivers of preference and decision-making are accounted for • Risk of framing effects: The framing of hypothetical choice scenarios can substantially shape responses, necessitating careful consideration of how choice tasks are presented • Large sample sizes often needed: DCEs typically require larger sample sizes compared to other methods to ensure reliable and robust findings • Potential for high cognitive burden: Making multiple trade-offs can be mentally taxing for participants, especially those with low health literacy, which may affect response quality • Potential for unreliable data with many attributes: With a large number of attributes (more than 5–7), it becomes challenging for participants to evaluate and choose options carefully, which may affect the robustness of the data • Expertise and software required: Both the design and analysis stages of DCEs require a high level of expertise and the utilization of statistical or specialized software |
Ranking questions | Ranking questions ask participants to put items or attributes in order based on their personal preferences, according to how important or how desirable they are. These questions can take different forms: either as a paired comparison, where participants compare two items at a time and select the preferred option from the pair, or as a rank order, where participants list multiple items or attributes from the one they like most to the one they like least (or vice versa). Ranking questions are typically part of a structured survey but can also be included in in-depth interviews as part of a mixed-methods approach. | • Simple to design: Creating the questions is straightforward • Suitable for many contexts: Can be used in a wide range of settings, including those where paper-based methods are required • Clear and intuitive: The questions are generally easy for most people to understand • Easy to administer: Questions can be completed quickly and easily by most participants • Quick relative importance assessment: Efficiently identifies the relative importance of different items or attributes • Simple analysis: The data generated are easy to analyze and interpret | • Risk of framing effects: The way questions are framed (e.g., rank items best to worst or vice versa) can influence the responses, potentially leading to different rank orders • Cannot determine preference magnitude: It does not allow for understanding how much more one attribute is preferred over another, including possible ties (unless ties are explicitly allowed) • Cannot determine absolute importance: It does not allow for understanding of the overall importance of the attributes, including whether all, some, or none of the attributes are important to participants • Potential for unreliable data with many attributes: With a large number of attributes (more than 5–7), it becomes challenging for participants to provide consistent rankings, which may affect the robustness of the data |
Rating questions | Rating questions ask participants to give a score to show how much they prefer, value, or are satisfied with different items or attributes. These questions can use various types of scales, including the following: numerical (e.g., 1 to 5), Likert (e.g., from “strongly disagree” to “strongly agree”), visual analogue (e.g., marking a point on a continuous line), semantic differential (e.g., from “bad” to “good”), and faces (e.g., from a sad to a happy face). Rating questions are usually part of a structured survey but can also be included in in-depth interviews as part of a mixed-methods approach. | • Simple to design: Creating the questions is straightforward • Suitable for many contexts: Can be used in a wide range of settings, including those where paper-based methods are required • Clear and intuitive: The questions are generally easy for most people to understand • Easy to administer: Questions can be completed quickly and easily by most participants • Simple analysis: The data generated are easy to analyze and interpret | • Risk of “yeah-saying” bias: There is a chance participants may give answers they think are socially acceptable or that avoid conflict • Prone to “satisficing”: Some participants might rush through by consistently choosing only the best, middle, or worst options on the scale • Variable scale interpretation: The way people understand the scales can differ across cultures or settings, making it potentially hard to compare results • Limited discrimination between preferences: Since no trade-offs are involved, people might rate many items or attributes as important, making it hard to distinguish between them |
Mixed preference methods (qualitative and quantitative) | |||
Q-methodology | Q-methodology is a way to understand people’s different opinions or preferences for a given topic. First, participants go through a step called “Q-sorting,” where they rank items or attributes from least liked, important, or agreed with (negative values) to the most liked, important, or agreed with (positive values) using a special chart (i.e., “Q-sort grid”). Though optional, many participants are also interviewed afterward to dive deeper into why they made their choices. The information from the Q-sorting is then analyzed to understand what preferences or viewpoints exist and group people based on similar or different preferences and opinions. | • Mixes qualitative and quantitative data: Combines the quantitative analysis of how people rank and sort items with a detailed understanding of why they do so • Suitable for many items: Can effectively assess a large number of items or attributes, making it a good choice when many must be evaluated • Reveals diverse viewpoints: With a well-selected group of participants, this method can reveal a wide range of opinions and preferences, showing areas of agreement and disagreement • Promotes deep thinking: The engaging process helps people think about what they prefer or agree with | • Needs careful selection of attributes: The development of the different items or attributes evaluated in the study requires thoughtful consideration • May be challenging for participants: The sorting process may be hard for some, and adding more items or attributes can make it even more complex • Overall process may be time-consuming: The multiple steps in both gathering and analyzing the data can require a substantial time commitment • Potentially complex analysis: Understanding the results is complicated and requires someone with expertise in applying the method |
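To make the quantitative methods above concrete, the sketch below shows a common descriptive first step for BWS type 1 data: best-minus-worst count scores, where each item's score is (times chosen best − times chosen worst) divided by times shown. This is only a minimal illustration, not a substitute for the model-based analyses (e.g., grounded in random utility theory) described in the table; the attribute names and responses are hypothetical.

```python
from collections import Counter

def best_worst_scores(choice_tasks):
    """Compute best-minus-worst count scores for BWS type 1 responses.

    choice_tasks: list of (shown_items, best_item, worst_item) tuples,
    one per completed choice task, pooled across participants.
    Returns a dict mapping each item to
    (best_count - worst_count) / times_shown, in the range [-1, 1].
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in choice_tasks:
        shown.update(items)   # every displayed item counts as "shown"
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical pooled responses: each tuple is (items shown, best pick, worst pick)
tasks = [
    (["price", "location", "privacy", "wait time"], "privacy", "wait time"),
    (["price", "staff attitude", "privacy", "location"], "privacy", "price"),
    (["wait time", "staff attitude", "price", "location"], "location", "wait time"),
]
scores = best_worst_scores(tasks)
# "privacy" was picked best in both tasks where it appeared and never worst,
# so it receives the highest score; "wait time" was always picked worst.
```

Count scores like these are easy to compute and interpret, which is why they are often reported alongside (or as a sanity check on) the statistical-software analyses the table notes BWS requires.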