I Got Your Typing Tool Right Here… A Cautionary Tale About Segmentation Algorithms and Qualitative Research
It should be stated right off the bat that the team at Accelerant Research has enormous appreciation for quantitative segmentation research. We LOVE conducting these sophisticated quantitative studies for our full-service clients, studies that result in elegant, highly targetable, easily digestible segments of the target customer population. Large-scale segmentation studies have a long internal shelf life and provide a ton of strategic value to organizations. However, in our role as white-glove qualitative research recruiters, we’ve noticed a disturbing trend in the insights industry: blindly relying on segmentation algorithms to identify segment members for focus groups and other qualitative research studies.
For those with less experience in such studies, a segmentation algorithm is a shortened, summary version of the larger-scale segmentation results that can be inserted into future screening questionnaires to identify segment membership among the survey population. The segmentation algorithm is a fantastic, on-the-fly means of identifying segment members, and it can be used in plug-and-play fashion for future quantitative surveys. Using these algorithms for qualitative research, however, is not so simple.
Often, we are handed a segmentation typing tool to use during our recruiting process to identify members of a given segment. But if we rely solely on this quantitative assessment, clients are frequently disappointed when individual recruits don’t behave in their interviews the way members of their segment are expected to behave. Segmentation output is elegant and strategically impactful, but the individual data points that comprise your segments are messy. The final segmentation analysis is based on hundreds or even thousands of cases, which is what makes it so powerful. When you deconstruct the segmentation and go back to individual survey participants, however, the results are far less clear: sometimes a small shift in a survey response (e.g., selecting 6 instead of 8 on a 10-point scale) can bump a research participant from one segment to another. When recruiting participants for qualitative research, we’re right back to that messy, individual-level assessment of each participant’s fit with a given segment. As such, relying strictly on a segmentation algorithm or typing tool to define segment membership for qualitative research can be a recipe for disaster.
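To make that fragility concrete, here is a toy Python sketch of how a typing tool might assign segments under the hood, assuming a simple nearest-centroid approach (real tools vary); the segment names, centroids, and answers are invented purely for illustration:

```python
# A toy illustration (not any client's actual tool) of individual-level
# typing fragility: a nearest-centroid classifier where lowering one
# 10-point answer from 8 to 6 flips the assigned segment.
import math

# Hypothetical segment centroids on three 10-point screener questions
centroids = {
    "Value Seekers":   (3.0, 7.5, 4.5),
    "Brand Loyalists": (7.0, 6.5, 8.5),
}

def assign_segment(answers):
    """Assign the segment whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda seg: math.dist(answers, centroids[seg]))

print(assign_segment((5, 7, 8)))  # -> Brand Loyalists
print(assign_segment((5, 7, 6)))  # -> Value Seekers, after one answer moves 8 to 6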
If we’re not careful, this can lead to an awkward disconnect in the back room of a focus group facility: participants are correctly classified by the algorithm, but in their interviews they say and do things that make them sound like members of a different segment.
A simple and highly effective way to bring your segments to life is to share the segment profiles with your qualitative recruiters, in addition to the typing tool algorithm. These profiles allow us to focus on recruiting participants who behave the way members of the segment should, rather than blindly recruiting into a segment without the benefit of that context. When we use the segmentation algorithm as a starting point for identifying segment membership, and the segment profile information for refinement, we create a powerful one-two punch that ensures the research participants who sit down for qualitative interviews are exactly the right audience. This kind of recruiting rigor requires partnership between Accelerant’s recruiting team and the client to make sure these details are communicated properly, but when that partnership is in place, it makes for a fantastic client experience.
The consultative partnership described above is just one example of the approach Accelerant takes on every qualitative recruiting project (i.e., we sweat the details). We invite you to request a cost estimate from us as a first step and experience the difference for yourself. Simply give us a call (704-206-8500) or send us an email (firstname.lastname@example.org).
When an agency is hired to develop an ad campaign, its first objective is to develop a creative brief that will serve as the foundation for the entire campaign, whether specific ads are executed in video, TV, radio, print, or other media. But the question is: on what basis is the creative brief designed?
Before leadership at most large organizations decides to allocate large amounts of budget to, say, designing new products and services, market research is conducted among members of the target population to guide and inform decisions about whether those products are ready for market rollout, need to be revised, or need to be scrapped. In this way, research serves as an insurance policy against large budget expenditures that will not pan out in driving revenue for the organization.
Ad campaigns are also large expenditures, and as such they should be market tested before their communications are made public. Certainly, research studies like storyboard copy testing are carried out, but those test stimuli are already based on what the ad agency has delivered in the creative brief. Unless the brief itself is also market tested, the agency and its client will develop and monitor executions that may well be based on incorrect messaging strategies, rendering any execution sub-optimal.
Accelerant Research has designed a quantitative study that directly informs the development of a creative brief by integrating “tried and true” survey construction and multivariate analytic techniques as follows:
Informed Ballot and Multiple Regression
This technique is borrowed from political opinion polling, where the first question is “if the elections were held today, for whom would you vote?” This is followed by a set of intervening questions based on key political issues on which the candidate may be pro or con, e.g., “If you knew that Candidate X was tough on crime, would that make you more or less likely to vote for him/her?” Finally, the opening question about for whom the respondent would vote is asked again. With these data in hand, important measurements can be made.
First, a pre-post assessment can be made comparing the percentage likelihood of voting for Candidate X. This analysis shows whether the array of intervening survey questions can effectively create more positive consideration toward the candidate overall. Second, the intervening questions can be used as independent variables in a multiple regression analysis, with pre-post change in consideration as the dependent variable. By examining the relative beta weights of each intervening question, those with the strongest association with positive change in candidate consideration can be identified and selected to serve as the foundation of the candidate’s political campaign.
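As a rough illustration of this analysis, here is a minimal Python sketch, assuming responses have already been coded numerically; the file name and column names are hypothetical:

```python
# A minimal sketch of the informed-ballot regression described above.
# Assumes a CSV with pre/post vote likelihood and issue questions;
# "poll_responses.csv", "pre_vote", "post_vote", and the "issue_"
# prefix are all hypothetical names.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("poll_responses.csv")

# Pre-post shift in stated likelihood of voting for Candidate X
df["shift"] = df["post_vote"] - df["pre_vote"]

# The intervening issue questions serve as the independent variables
issue_cols = [c for c in df.columns if c.startswith("issue_")]

# Standardize predictors and outcome so the fitted coefficients are
# beta weights, comparable across questions
cols = issue_cols + ["shift"]
z = (df[cols] - df[cols].mean()) / df[cols].std()

X = sm.add_constant(z[issue_cols])
model = sm.OLS(z["shift"], X).fit()

# Issues with the largest beta weights are the strongest candidates
# for the campaign's core messaging
print(model.params.drop("const").sort_values(ascending=False))
```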
Adapting Consideration Driver Research to a Creative Brief for Advertising
Applying the techniques outlined above to inform advertising campaigns is straightforward. The “for whom would you vote” question is modified to something like “how likely are you to consider Brand X when you want to purchase Product/Service Y?” This single question serves as the pre- and post-measure of the change in positive consideration. The intervening survey items consist of a set of functional and emotional attributes of the brand and the products or services under study. Again, multiple regression analysis can be performed to isolate which functional and which emotional attributes drive the most positive change in consideration. Regression can also reveal the optimal mix of specific emotional and functional attributes that should inform the foundational creative brief and associated ad campaign, i.e., what to say in an ad.
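A hedged sketch of the same analysis adapted to brand consideration might look like the following; again, the file name and the func_/emot_ column prefixes are assumptions for illustration only:

```python
# A sketch of the consideration-driver adaptation: regress the
# pre-post shift in purchase consideration on functional and
# emotional brand attributes. All names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("consideration_survey.csv")

# Pre-post shift in purchase consideration for Brand X
df["shift"] = df["post_consid"] - df["pre_consid"]

# Functional and emotional attribute batteries (prefixes are invented)
attr_cols = [c for c in df.columns if c.startswith(("func_", "emot_"))]

fit = sm.OLS(df["shift"], sm.add_constant(df[attr_cols])).fit()
drivers = fit.params.drop("const").sort_values(ascending=False)

# Split the strongest drivers by attribute type to sketch the mix of
# functional and emotional messages for the creative brief
print("Top functional:", [a for a in drivers.index if a.startswith("func_")][:3])
print("Top emotional: ", [a for a in drivers.index if a.startswith("emot_")][:3])
```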
[Figure: standard visualization of a Consideration Driver study, based on mock data]
Consideration Driver research is uniquely designed to inform creative briefs. Organizations and advertising agencies alike can embrace this methodology to ensure that brand messaging has the benefit of being tested to inform the overarching strategies that become the foundation of subsequent ad executions. Feel free to contact Accelerant Research (email@example.com) for a more in-depth discussion of the ins and outs of this sort of work.
For those brands that leverage retail channels, in-store displays can be among the highest-impression messages a brand delivers and a cornerstone of marketing success. Goals for these displays vary quite a bit: generating awareness, building brand equity, gaining entry into a consumer’s consideration set, or educating about products, to name just a handful. Ultimately, though, the end play is often conversion to purchase.
Given their potential impact on both the brand and the bottom line, inviting consumers to provide feedback on displays during the design phase makes sense. Displays are, after all, created specifically for consumers, to catch their eye and help make their shopping easier. Why not let your customers offer their two cents? Early-stage feedback from target consumers, gathered before final forms are locked in, can yield a wealth of high-impact insights: improving the in-store appeal of your displays, refining the brand story they convey, optimizing their shopability, making navigation of their featured products more intuitive, and yes, improving their ability to convert browsers to buyers. Below is a rundown of qualitative research approaches that Accelerant Research has found especially impactful for retail prototype testing.
As with any qualitative work, setting the stage is key to a productive discussion, and that means engaging your participants before you ever invite them into the room. When it comes to display prototype research, a good first step is a self-guided shopping trip assigned before the core research event takes place. During this self-scheduled “homework,” recruited participants are tasked with shopping your specific category and your specific products at one of the retailers that carries your brand and features your displays.
Such exercises allow for a natural shopping style without imposed time constraints and provide a wealth of information for your insights and marketing teams – photographs of displays, comments on packaging, videos of product selection, and collection of exhibits such as brochures and samples. Most importantly, they set the stage for a productive discussion: how well are your current displays working, what have your competitors got going on, and, from the perspective of your target customers, what are problems not yet solved and opportunities not yet realized in the aisle?
Following the in-store shopping, your customers participate in moderated qualitative discussions where they share their thoughts on your current displays and then provide feedback on your new design prototypes with all of that recent experiential context in mind. These follow-up discussions can be designed in a few different ways, depending on timeline, budget, available stimuli, and your team’s specific insights needs.
If you’re considering conducting consumer listening on display prototypes your team is building (and we think you should), we invite you to reach out to us for more information. Give us a call (704-206-8500) or send us an email (firstname.lastname@example.org). We’d be happy to talk through your specific insights needs and make a recommendation tailored to fit. With our support and guidance in participant recruiting, technology/logistics management, and even moderating/full-service support, Accelerant Research can provide you with impactful insights from your customers that will help you tailor your in-store presence to best meet their needs.
Imagine you are designing a study for a client who wants “readable” base sizes for certain key demographic groups represented in a survey, e.g., race and ethnicity. To accommodate, you set up the sample configuration so that Caucasians, African-Americans, Hispanics, and Asians each have a base size of 100 completes, for a total sample size of n=400.
So far, so good. But then your client wants you to test the significance of each group’s differences against the total sample. Everything would be fine if these groups were equal in size in the population. Of course, they are not, which means you can’t simply roll up the 400 respondents into one group and make straightforward comparisons against the separate groups. To solve this, you decide to weight the data using the population proportion of each group according to the latest available census data.
In essence, weighting data is like pulling taffy. For some groups, you only need to pull the taffy a little, because their proportion in the sample is close to their proportion in the population. For other groups, you will need to stretch the taffy further, because they are under-represented in the sample relative to the population.
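As a simple illustration, here is a minimal Python sketch of computing these weights for the hypothetical n=400 study above; the population shares below are illustrative stand-ins, not actual census figures:

```python
# A minimal sketch of computing post-stratification weights: each
# group's weight is its population share divided by its sample share.
# The population shares here are invented for illustration.
sample_counts = {"Caucasian": 100, "African-American": 100,
                 "Hispanic": 100, "Asian": 100}
population_shares = {"Caucasian": 0.60, "African-American": 0.13,
                     "Hispanic": 0.19, "Asian": 0.08}  # hypothetical

n_total = sum(sample_counts.values())
weights = {g: population_shares[g] / (sample_counts[g] / n_total)
           for g in sample_counts}

# e.g., a group that is 25% of the sample but 13% of the population
# gets a weight of 0.13 / 0.25 = 0.52 (taffy squeezed in), while a
# 60% population group gets 0.60 / 0.25 = 2.40 (taffy stretched out)
for g, w in weights.items():
    print(f"{g}: weight = {w:.2f}")
```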
However, all kinds of trouble can occur at this stage of your otherwise well-designed study. You can apply weights that range far too large or far too small. You can assign the population proportion of one of the subgroups incorrectly. And you can apply the weights correctly but forget to confirm that your crosstabs show “Weighted Data.” When using weights, be warned that trouble is lurking around the corner if you are not careful and do not check your work before publishing results to your client.
To begin, examine each individual weight being applied to each respondent’s data. If a weight is greater than 2.0, you may be trying to pull the taffy too far, and it may snap. If a weight is close to 0.0, you are essentially eliminating that respondent’s data, since anything multiplied by zero is zero. If you can stay within the range of 0.5 to 1.5, you are in good shape, and the taffy will be just right.
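In that spirit, a small sanity check run before any crosstabs go out can flag weights that fall outside the comfortable range; the values below reuse the illustrative weights from the earlier sketch:

```python
# A small sanity check reflecting the guidance above: flag weights
# outside the comfortable 0.5-1.5 range before publishing results.
# These weights reuse the illustrative values computed earlier.
weights = {"Caucasian": 2.40, "African-American": 0.52,
           "Hispanic": 0.76, "Asian": 0.32}

def check_weights(weights, low=0.5, high=1.5, snap=2.0):
    """Warn about weights that stretch the taffy too far."""
    for group, w in weights.items():
        if w > snap:
            print(f"WARNING: {group} weight {w:.2f} exceeds {snap}: the taffy may snap")
        elif not (low <= w <= high):
            print(f"Caution: {group} weight {w:.2f} is outside [{low}, {high}]")

check_weights(weights)
```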
Whoever is handling your data processing, whether it’s a crack technician who has been running Quantum to produce crosstabs for years or you yourself, double-check the work. Believe us, these errors are made because they are easily overlooked.
The worst error of all is publishing unweighted data in your report. Again, it’s easy to do, but extremely costly to overcome: your client will be hard pressed to process your invoice and will probably never call you for another study. Check and double-check your work. Better yet, have someone else check it, since most researchers can tell a story about having looked at something for so long that they could not see the errors right under their noses.
Weighting data is surely the Achilles’ heel of market research. So, when you find yourself in a study in which applying weights is necessary, please be careful, stretch first, and don’t pull a muscle.