For years, the research industry has faced a persistent challenge: the quality of our data hinges on the quality of our participants. While we've seen advancements in methodologies, the foundation of our work – the panel and sample sources that supply participants for both qualitative and quantitative research – has often been overlooked. Now, as the market tightens and budgets come under scrutiny, the conversation around the Return on Investment (ROI) of research is more crucial than ever.
The current economic climate, marked by layoffs and increased competition, demands that we, as researchers, not only justify our existence but also demonstrate the tangible value we bring. This means proving the ROI of every aspect of our projects, from methodology to participant selection.
Calculating the ROI of Participants: Beyond the Project Level
The attached article from Accelerant Research (Calculating the ROI of UX) provides a framework for calculating the ROI of UX efforts. We can adapt this model to quantify the ROI of qualified research participants across both qualitative and quantitative projects. Let's look at how we can calculate the cost of participants.
Let's illustrate this with a hypothetical example, applicable to both quantitative and qualitative studies: a researcher with an annual salary of $130,000-$150,000 dedicates 4-6 weeks (let's say 5 weeks, or roughly 1/10th of a year) to a project involving 12 participants.
Here’s the breakdown:
Researcher's Time Cost:
- Annual Salary: $140,000 (average)
- Cost per project (1/10th of a year): $14,000
Participant Costs:
- Incidence Rate: 60% (meaning only 60% of those screened qualify, so you need to screen more people than you will ultimately use)
- Cost per Recruit: $150
- Incentive per Participant: $175
- Number of Participants: 12
- Total Recruit Cost: (12 / 0.60) * $150 = $3,000
- Total Incentive Cost: 12 * $175 = $2,100
- Total Participant Costs: $3,000 + $2,100 = $5,100
Total Project Cost: $14,000 (researcher time) + $5,100 (participant costs) = $19,100
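To make this arithmetic easy to reuse, here's a minimal Python sketch of the same breakdown; the variable names are ours, and the figures are the hypothetical ones above.

```python
# Minimal sketch of the cost breakdown above (illustrative figures only).

ANNUAL_SALARY = 140_000          # average researcher salary
PROJECT_FRACTION_OF_YEAR = 0.10  # ~5 weeks on the project

INCIDENCE_RATE = 0.60            # share of screened people who qualify
COST_PER_RECRUIT = 150
INCENTIVE_PER_PARTICIPANT = 175
NUM_PARTICIPANTS = 12

researcher_time_cost = ANNUAL_SALARY * PROJECT_FRACTION_OF_YEAR        # $14,000
recruit_cost = (NUM_PARTICIPANTS / INCIDENCE_RATE) * COST_PER_RECRUIT  # $3,000
incentive_cost = NUM_PARTICIPANTS * INCENTIVE_PER_PARTICIPANT          # $2,100
participant_cost = recruit_cost + incentive_cost                       # $5,100

total_project_cost = researcher_time_cost + participant_cost           # $19,100
print(f"Total project cost: ${total_project_cost:,.0f}")
```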
This calculation provides a starting point. But what if we introduce bad data? This is where the real costs begin to mount. Bad participants can:
- Skew results: leading to flawed insights (especially damaging in quantitative studies).
- Waste time and resources: requiring rework, additional recruitment, and re-analysis.
- Damage credibility: undermining the value of research within the organization.
- Lead to poor business decisions: based on faulty data.
The Formula for Quantifying Value
Instead of just ROI, let's use a formula that calculates the Net Value of research, which is easier to understand:
Net Value = (Benefit from Research - Cost of Research) / Cost of Research
- Benefit from Research: The positive outcomes directly attributable to the research. This can be increased revenue, cost savings, or reduced risk.
- Cost of Research: The total cost of the research project, including researcher time, participant costs, and any other related expenses.
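For readers who like to plug in their own numbers, here's the formula as a small, illustrative Python function (the name net_value is just our shorthand); we'll reuse it in the scenarios below.

```python
def net_value(benefit: float, cost: float) -> float:
    """Net Value = (Benefit from Research - Cost of Research) / Cost of Research."""
    return (benefit - cost) / cost

# Quick sanity check with round numbers: a $40,000 benefit on a $20,000 study.
print(f"{net_value(40_000, 20_000):.0%}")  # 100%
```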
Quantifying the Impact: Good vs. Bad Research – Project Examples
Let's look at two concrete scenarios.
Scenario 1: High-Quality Participants (Good Research)
Project: Quantitative usability testing for a new e-commerce checkout process.
Participants: 12 qualified participants recruited through a rigorous process (e.g., Accelerant Research).
Findings: Participants identify key usability issues, leading to design improvements.
Outcome: A 15% increase in conversion rates in the first quarter after launch.
Concrete Metric: Increase in Revenue.
Baseline: $1,000,000 revenue per quarter.
Benefit Calculation: Increased Revenue = $1,000,000 * 0.15 = $150,000
Net Value: ($150,000 (Revenue Increase) - $19,100 (Project Cost)) / $19,100 = 6.85 or 685%
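Here's the same Scenario 1 calculation as a quick check, reusing the net_value sketch from above.

```python
# Scenario 1 figures plugged into the Net Value formula.
def net_value(benefit, cost):
    return (benefit - cost) / cost

revenue_lift = 1_000_000 * 0.15   # 15% lift on $1,000,000 quarterly revenue = $150,000
project_cost = 19_100
print(f"Scenario 1 Net Value: {net_value(revenue_lift, project_cost):.0%}")  # 685%
```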
Scenario 2: Low-Quality Participants (Bad Research)
Project: Same as above.
Participants: 12 participants recruited through a less rigorous process (e.g., a generic panel).
Findings: Participants are unqualified for the study. Their responses are inconsistent, and they introduce false or misleading data.
Outcome: The checkout process launches with significant usability problems that went undetected, leading to customer frustration and a drop in conversion rates. The design team and stakeholders are unable to make sound decisions and must spend additional design time on rework.
Concrete Metrics: Decrease in Revenue, Wasted Resources (design time, dev time, launch delays), Lost Researcher Time, and overall project cost.
Baseline: $1,000,000 revenue per quarter; the design team and stakeholders lose 2-3 weeks of design time to rework.
Benefit Calculation:
- Decreased Revenue: $1,000,000 * 0.05 (5% decrease) = -$50,000 (negative impact)
- Wasted Design Time: $20,000 (Estimated cost of rework and retesting)
- Wasted Researcher Time: $5,000 (Assuming 1 week of wasted time re-analyzing bad data, and needing to explain results to stakeholders)
- Overall Project Cost: $19,100 (Project Cost)
Net Value: (-$50,000 (Revenue Decrease) - $20,000 (Wasted Resources) - $5,000 (Wasted Researcher Time) - $19,100 (Project Cost)) / $19,100 = -4.93 or -493%.
This means the research destroyed significant value, wasting time and money for the organization.
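And here's the Scenario 2 calculation in the same sketch form; note that the "benefit" term is negative.

```python
# Scenario 2 figures: the "benefit" is negative because bad participants
# cost revenue and force rework.
def net_value(benefit, cost):
    return (benefit - cost) / cost

lost_revenue = -1_000_000 * 0.05    # -$50,000
wasted_design_time = -20_000
wasted_researcher_time = -5_000
project_cost = 19_100

impact = lost_revenue + wasted_design_time + wasted_researcher_time  # -$75,000
print(f"Scenario 2 Net Value: {net_value(impact, project_cost):.0%}")  # about -493%
```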
Show Rates and the Qualitative Advantage
Show rates are crucial for qualitative research. According to industry averages, no-show rates for qualitative studies can hover around 8-10%. In this scenario, if a researcher is conducting qualitative research and has an 8% no-show rate, they will need to recruit at least 13 participants to have 12 show up.
At Accelerant Research, we've consistently achieved a 97% show rate over the past four years. This is significantly above the industry standard, ensuring that projects stay on schedule and minimize wasted resources.
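Here's a minimal sketch of that over-recruiting arithmetic. We round up with math.ceil, which is the conservative choice, so at a 92% show rate it suggests 14 recruits rather than the rounded 13 above.

```python
import math

# Over-recruiting arithmetic: how many recruits to target so the desired
# number of participants actually shows up (rounding up is conservative).
def recruits_needed(target: int, show_rate: float) -> int:
    return math.ceil(target / show_rate)

for show_rate in (0.92, 0.97):
    n = recruits_needed(12, show_rate)
    print(f"show rate {show_rate:.0%}: recruit {n} to expect ~{n * show_rate:.1f} shows")
# show rate 92%: recruit 14 to expect ~12.9 shows
# show rate 97%: recruit 13 to expect ~12.6 shows
```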
Let's factor show rates into a qualitative example.
Scenario 3: High-Quality Participants (Good Qualitative Research) with a high show rate
Project: In-depth interviews for new product development.
Participants: 12 qualified participants recruited through a rigorous process (e.g., Accelerant Research).
Findings: Participants provide crucial insights into unmet needs and preferences, guiding product development.
Show Rate: 97%
Cost Per Project: $19,100
Outcome: Development of a product that resonates with the target audience, leading to a successful product launch and early market adoption.
Concrete Metric: Increased Revenue.
Benefit Calculation: Let's say the product launch leads to $500,000 in revenue in the first year.
Net Value: ($500,000 (Revenue Increase) - $19,100 (Project Cost)) / $19,100 = 25.18 or 2518%.
This is a very high return on investment.
The Accelerant Research Approach
At Accelerant Research, we're committed to providing high-quality data by focusing on the quality of our participants. We've been ahead of the curve in recognizing the importance of a reliable panel and sample, investing in our own proprietary panel, the Agora USA panel, over 17 years ago.
Here's how we ensure participant quality:
- Real-World Verification (Especially for Quantitative Studies): We have implemented longitudinal tracking studies where participants need to have legitimate purchases in order to participate, ensuring accurate data.
- Video Verification (Crucial for Qualitative and Valuable for Quantitative): Our recruiters and researchers create task-oriented scenarios specific to each study. Participants respond to these tasks in short, 5-minute verification videos. This process not only verifies their qualifications but also assesses their articulation and thoroughness, ensuring a high-quality standard across both qualitative and quantitative projects.
By investing in robust participant qualification methods, we mitigate the risks of bad data and maximize the Net Value of your research.
Conclusion
In this environment, the ability to demonstrate the value of research is paramount. By focusing on participant quality and implementing a clear Net Value framework, we can ensure that research remains a valuable asset for organizations, in both qualitative and quantitative methodologies. The examples above clearly show that by investing in participant quality, you can see a significant rise in the Net Value of your research.