For brands that leverage retail channels, in-store displays can be among the highest-impression messages and a cornerstone of marketing success. Goals for these displays vary widely: generating awareness, building brand equity, gaining entry into a consumer’s consideration set, or educating shoppers about products, to name just a handful. Ultimately, though, the end goal is usually conversion to purchase.
Given their potential impact on both the brand and the bottom line, it makes sense to invite consumers to provide feedback on displays during the design phase. Displays are, after all, created specifically for consumers, to catch their eye and make their shopping easier. Why not let your customers offer their two cents? Early-stage feedback from target consumers, gathered before final forms are locked in, can yield a wealth of high-impact insights: improving the in-store appeal of your displays, refining the brand story they convey, optimizing their shop-ability, making navigation of their featured products more intuitive, and, yes, improving their ability to convert browsers into buyers.

Below is a rundown of qualitative research approaches that Accelerant Research has found to be especially impactful for retail prototype testing. As with any qualitative work, setting the stage is key to a productive discussion, and that means engaging your participants before you invite them into one. For display prototype research, a good first step is a self-guided shopping trip assigned before the core research event takes place. During this self-scheduled “homework,” recruited participants are tasked with shopping your specific category and your specific products at one of the retailers that carries your brand and features your displays. Such exercises allow for a natural shopping style without imposed time constraints and provide a wealth of information for your insights and marketing teams: photographs of displays, comments on packaging, videos of product selection, and collected exhibits such as brochures and samples. Most importantly, they set the stage for a productive discussion: How well are your current displays working? What do your competitors have going on? And, from the perspective of your target customers, what problems remain unsolved and what opportunities remain unrealized in the aisle?
Following the in-store shopping, your customers participate in moderated qualitative discussions where they share their thoughts on your current displays and then provide feedback on your new design prototypes with all of that recent experiential context in mind. These follow-up discussions can be designed in a few different ways depending on timeline, budget, available stimuli, and your team’s specific insights needs. Some of our preferred approaches for display prototype discussions are listed below.
If you’re considering conducting consumer listening on display prototypes your team is building (and we think you should), we invite you to reach out to us for more information. Give us a call (704-206-8500) or send us an email (info@accelerantresearch.com). We’d be happy to talk through your specific insights needs and make a recommendation tailored to fit. With our support and guidance in participant recruiting, technology/logistics management, and even moderating/full-service support, Accelerant Research can provide you with impactful insights from your customers that will help you tailor your in-store presence to best meet their needs.

It should be stated right off the bat that the team at Accelerant Research has enormous appreciation for quantitative segmentation research. We LOVE conducting these sophisticated quantitative studies for our full-service clients; they result in elegant, highly targetable, easily digestible segments of the target customer population. Large-scale segmentation studies have a long internal shelf life and provide a great deal of strategic value to organizations. However, in our role as white-glove qualitative research recruiters, we’ve noticed a disturbing trend in the insights industry: blindly relying on segmentation algorithms to identify segment members for focus groups or other qualitative research studies.
For those with less experience in such studies: a segmentation algorithm is a shortened, summary version of the larger-scale segmentation results that can be inserted into future screening questionnaires to identify segment membership among the survey population. The segmentation algorithm is a fantastic, on-the-fly means of identifying segment members, which can be used in plug-and-play fashion for future quantitative surveys. Using these algorithms for qualitative research, however, is not so simple. Often we are handed a segmentation typing tool to use during recruiting to identify members of a given segment, but if we rely solely on this quantitative assessment, clients are often disappointed when individual recruits don’t behave in their interviews the way members of their segment are expected to behave.

Segmentation output is elegant and strategically impactful, but the individual data points that make up your segments are messy. The final segmentation analysis is based on hundreds or even thousands of cases, which is what makes it so powerful. When you deconstruct the segmentation and go back to individual survey respondents, however, the picture is far less clear: sometimes a small shift in a survey response (e.g., selecting 6 instead of 8 on a 10-point scale) can jettison a participant from one segment to another. When recruiting participants for qualitative research, we are right back to that messy, individual-level assessment of each participant’s fit with a given segment. Relying strictly on a segmentation algorithm or typing tool to definitively assign segment membership for qualitative research can therefore be a recipe for disaster.
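To make that fragility concrete, here is a minimal sketch of how a typing tool might assign segment membership from a handful of screener answers. The segment names, scale items, and centroid values are all invented for illustration; real typing tools typically use scoring weights or discriminant functions derived from the full segmentation study. The point is the boundary behavior: a modest shift on a single 10-point item can flip a respondent into a different segment.

```python
# Hypothetical typing tool: assign a respondent to the segment whose
# (invented) centroid is closest to their answers. All names and
# numbers below are illustrative, not from any real segmentation.

SEGMENT_CENTROIDS = {
    "Value Seekers":  {"price_sensitivity": 9, "brand_loyalty": 3, "novelty": 4},
    "Brand Devotees": {"price_sensitivity": 3, "brand_loyalty": 9, "novelty": 5},
    "Trend Chasers":  {"price_sensitivity": 5, "brand_loyalty": 4, "novelty": 9},
}

def assign_segment(answers: dict) -> str:
    """Return the segment whose centroid is nearest (squared Euclidean)."""
    def distance(centroid: dict) -> float:
        return sum((answers[item] - centroid[item]) ** 2 for item in centroid)
    return min(SEGMENT_CENTROIDS, key=lambda s: distance(SEGMENT_CENTROIDS[s]))

respondent = {"price_sensitivity": 7, "brand_loyalty": 5, "novelty": 6}
print(assign_segment(respondent))  # lands in "Value Seekers"

# A small shift on one scale item flips the assignment:
respondent["price_sensitivity"] = 5   # 7 -> 5 on a 10-point scale
print(assign_segment(respondent))  # now lands in "Trend Chasers"
```

In a recruiting context, a respondent sitting this close to a segment boundary is exactly the recruit who may not sound like their segment in the interview room, which is why the profile-based refinement described below matters.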
If we’re not careful, this can lead to an awkward disconnect in the backroom of a focus group facility, where participants are correctly segmented according to the algorithm but say and do things in their interviews that make them sound like members of a different segment. A simple and highly effective tool for bringing your segments to life is sharing the segment profiles with your qualitative recruiters, in addition to the typing tool algorithm. These profiles allow us to focus on recruiting participants who behave the way the segment should, rather than recruiting blindly into a segment without the benefit of such context. When we use the segmentation algorithm as the starting point for identifying segment membership, and the segment profile information for refinement, we create a powerful one-two punch that ensures the research participants who sit down for qualitative interviews are exactly the right audience. This recruiting rigor requires partnership between Accelerant’s recruiting team and the client to make sure the details are communicated properly, but when that partnership is in place, it makes for a fantastic client experience.

The consultative partnership described above is just one example of the service approach Accelerant takes on every qualitative recruiting project (i.e., we sweat the details). We invite you to request a cost estimate from us as a first step and experience the difference for yourself. Simply give us a call (704-206-8500) or send us an email (info@accelerantresearch.com).

We’ve all been there: you’ve got a topic that, as a researcher, you find nuanced and fascinating, or one that your client teams wax eloquent about for hours. Everyone is looking forward to getting lots of juicy detail about this topic in your focus groups, but consumers find it, well, not necessarily riveting.
Or the subject is so complex that participants just don’t know where to start breaking it down so they can effectively frame their answers. While a quality recruiter will screen to make sure your participants are articulate and have something meaningful to say on the issue at hand, initial answers can still skim along the surface or consist of overly simplified, high-level summaries. The situation is not uncommon in qualitative research, and even the best moderators have been there.
One area of research where this seems to crop up is journey mapping. Journeys can be complicated affairs with lots of steps, making it easier for consumers to gloss over the agonizing details (buying a car). They can be journeys taken out of necessity rather than personal interest (resolving a customer service issue). And some journeys consumers just don’t find particularly thrilling (insurance shopping, anyone?). Time to plan a good projective exercise: one designed to organize the process, encourage focus on the below-surface motivators behind each step taken, and recast the routine in a more action-oriented framework that helps consumers be as engaged with the topic as you are.

A role-playing exercise that tasks customers with stepping back and narrating their journeys as observers can be a good solution. Start by asking customers to think about the steps they took during the journey and write each on a Post-it note, trying to leave nothing out. Coach them to make sure the narrative is complete and flows continuously. Then have them arrange their notes in sequence on a sheet of blank paper with their name written across the top. Once participants have completed this task, ask them to imagine themselves as the director of a blockbuster film, a film whose plot is the journey they just mapped out, and that they are now recording commentary for the DVD release. Encourage them to work through the steps as scenes: what the goal of each ‘scene’ was, the victory or defeat in each, and the purpose of each scene in the overall film. Then ask a few volunteers to give a sample narration of their films before continuing the discussion.
The technique can be particularly useful in qualitative research that includes journey mapping discussions where answers are routine or overly simplified; the exercise reframes events in terms of action and encourages participants to dig a little deeper for their motivations in each “scene,” even when, on a rational level, they report they did something just because that’s the way they’ve always done it. If you’ve got a group giving surface-level summary answers in a lower-involvement category, this is a chance to drill into the detail. If you’ve got a low-energy group on your hands, it’s a nice opportunity to stack the deck in your favor by calling on one of your livelier participants to start.

Once a couple of narrations have been given, there are several in-room follow-ups you can use. You can ask whether anyone has scenes in their own films that haven’t yet been discussed, scenes they wish they had included, or scene motivators shared by others with which they also felt a personal connection. Participants can ‘grade’ their journeys as if they were film critics (how many Rotten Tomatoes? thumbs up or thumbs down?) and volunteer where their movies fell short, or why they were Oscar-worthy. The projective exercise also lends itself to a group discussion of what information, tools, products, or other resources would have been helpful to the heroes of their films. In practical terms, the exercise introduces a natural break for the moderator to visit the back room and check in with the extended research team.
The moderator can step away while participants are mapping out their film sequences, or toward the end of the session after tasking participants with creating an “ideal” process as a group: deciding collectively which steps are worth taking, writing those on fresh Post-its, and reaching a team consensus on their sequence on the whiteboard. For complicated or less recent journeys, ones that might require some heavy back-thinking, the mapping portion of the exercise can be completed before the groups and submitted in written form, then brought into the group for discussion. If there’s a lag between when participants arrive at the facility and when the group begins, handing out the initial mapping portion as a waiting-room exercise after check-in documentation is complete saves in-room time, since part of the heavy lifting is already done.

We invite you to reach out to us for more information about conducting qualitative research. Give us a call (704-206-8500) or send us an email (info@accelerantresearch.com). With our support and guidance in participant recruiting, technology/logistics management, and even moderating/full-service support, Accelerant Research can provide you with similarly successful and impactful insights.

Imagine you are designing a study for a client who wants “readable” base sizes of certain key demographic groups represented in a survey, e.g., race and ethnicity. To accommodate, you set up the sample configuration so that Caucasian, African-American, Hispanic, and Asian respondents each have a base size of 100 completes, for a total sample size of n=400.
So far, so good. But then your client wants you to test the significance of differences between each group and the total sample. Everything would be fine if these groups were equal in size in the population. Of course, they are not, which means you can’t simply roll the 400 respondents up into one group and make straightforward comparisons to the separate groups. To solve this, you decide to weight the data using the population proportion of each group according to the latest available census data.

In essence, weighting data is like pulling taffy. For some groups, you only need to pull the taffy a little, because their proportion in the sample is close to their proportion in the population. For other groups, you need to stretch the taffy further, because they are under-represented in the sample relative to the population. However, all kinds of trouble can occur at this stage of your otherwise well-designed study. You can apply weights that are far too large or far too small. You can assign the population proportion of one of the subgroups incorrectly. And you can apply the weights correctly but forget to confirm that your crosstabs show “Weighted Data.” When using weights, be warned that trouble lurks around the corner if you are not careful and do not check your work before publishing results to your client.

To begin, examine each individual weight being applied to each respondent’s data. If a weight is greater than 2.0, you may be pulling the taffy too far, and it may snap. If a weight is close to 0.0, you are essentially eliminating that respondent’s data, since anything multiplied by zero is zero. If you can stay within the range of 0.5 to 1.5, you are in good shape, and the taffy will be just right.
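In code, the taffy-pulling step looks something like the following sketch: each group’s weight is its population proportion divided by its sample proportion. The equal-sized sample mirrors the n=400 design described above, but the population shares are illustrative stand-ins, not actual census figures.

```python
# Illustrative weighting sketch. Population shares below are assumed
# round numbers for the example, NOT real census data.

SAMPLE_COMPLETES = {"Caucasian": 100, "African-American": 100,
                    "Hispanic": 100, "Asian": 100}          # n = 400
POPULATION_SHARE = {"Caucasian": 0.60, "African-American": 0.13,
                    "Hispanic": 0.19, "Asian": 0.08}        # sums to 1.00

def compute_weights(sample: dict, population: dict) -> dict:
    """Per group: weight = population proportion / sample proportion."""
    total = sum(sample.values())
    return {g: population[g] / (sample[g] / total) for g in sample}

weights = compute_weights(SAMPLE_COMPLETES, POPULATION_SHARE)
for group, w in weights.items():
    flag = ""
    if w > 2.0:
        flag = "  <- stretching the taffy too far"
    elif w < 0.5:
        flag = "  <- nearly zeroing this group out"
    print(f"{group:18s} weight = {w:.2f}{flag}")
```

With each group at 25% of the sample, the weights come out to 2.40, 0.52, 0.76, and 0.32 respectively, so this design already violates the rules of thumb above on both ends: the largest group’s taffy is over-stretched past 2.0, and the smallest group is being pushed toward zero.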
Whoever is handling your data processing, whether it’s a crack technician who has been running Quantum to produce crosstabs for years and years, or you yourself, double-check your work. Believe us, these errors get made because they are easily overlooked. The worst error is posting unweighted data to your report: again, easy to do, but extremely costly to overcome. Your client will be hard-pressed to process your invoice and will probably never call you for another study. Check and double-check your work. Better yet, have someone else check it; most researchers can tell a story about having looked at something for so long that they could no longer see the errors right under their noses. Weighting data may well be the Achilles’ heel of market research. So when you find yourself in a study where applying weights is necessary, please be careful, stretch first, and don’t pull a muscle.
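One way to make that double-checking routine rather than heroic is to run an automated audit of the respondent-level weights before any tabs go out. The function below is a hypothetical sketch; the thresholds simply encode the rules of thumb discussed above, and the final check catches the case where the weighted base no longer matches the unweighted n.

```python
# Hypothetical pre-publication audit of respondent-level weights.
# Thresholds follow the rules of thumb in the text: flag weights
# above 2.0 or near 0.0, and confirm the weighted base equals n.

def audit_weights(weights: dict, tolerance: float = 1e-6) -> list:
    """Return a list of warnings; an empty list means the weights pass."""
    warnings = []
    for respondent_id, w in weights.items():
        if w > 2.0:
            warnings.append(f"{respondent_id}: weight {w:.2f} > 2.0 (taffy may snap)")
        elif w < 0.05:
            warnings.append(f"{respondent_id}: weight {w:.2f} ~ 0 (respondent erased)")
    # The weighted base should still sum to the unweighted sample size.
    if abs(sum(weights.values()) - len(weights)) > tolerance * len(weights):
        warnings.append("weighted base does not match unweighted n")
    return warnings

sample = {"r1": 1.10, "r2": 0.90, "r3": 2.50, "r4": 0.02, "r5": 0.48}
for warning in audit_weights(sample):
    print(warning)   # flags r3 (over-stretched) and r4 (near zero)
```

A check like this won’t catch a mislabeled “Weighted Data” banner in the tabs, but it does catch the two numeric failure modes before a human ever has to eyeball the crosstabs.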
March 2024