Last month, we suggested that changes in business conditions make this a good time to revisit the value of the historical data collected in a customer satisfaction tracking study, and to consider whether revisions could improve the value of that data while reducing costs. As an example of how things can change, we noted that the advent of various technologies continues to reshape consumers’ opinions and expectations of brands, and that these new conditions may alter consumers’ purchasing behavior and underlying brand loyalties, causing them to “redefine” their criteria for customer satisfaction. As a result, satisfaction measures that are currently tracked, and that were identified as key drivers in a study conducted prior to these societal changes, may have decreased in importance, while others may have risen to the status of key driver but are not being tracked at all. Overall, the rank order of importance of key drivers may have shifted enough to render current tracking programs ineffective at doing what they are designed to do: monitor how well various customer programs and initiatives lift overall customer satisfaction.
Assuming it’s time to reassess and refresh key drivers, a number of excellent opportunities emerge for managing your research budget. First, by conducting a new key driver identification study, you can leverage its findings and include in your tracking survey only those items found to be key drivers, i.e., pare the data collected down to only what is needed.
This is where key driver studies pay for themselves: you avoid spending research budget on measurements of things that do not have a strong relationship with overall customer satisfaction (or another business-based dependent measure). Eliminate the “nice-to-know” survey items and keep your tracking questionnaire as brief as possible. A good rule of thumb for customer surveys is to keep them to 10 minutes or less. That is quite enough time to cover several topics’ worth of survey items that map back to the key drivers identified in the previous study, as well as to accommodate important open-ended questions in which customers provide their opinions and feelings in their own words.
Another opportunity is to review the methodology used in past tracking efforts and consider whether a less expensive data collection process can be deployed without a loss of research quality, e.g., switching from CATI to web or interactive voice response (IVR), provided that sample representativeness can be maintained. Methodology changes can sometimes cut research expenditure in half while delivering the same value.
Yet another opportunity during these tracker episodes is to re-bid the study with a new set of research suppliers. A research manager’s fiduciary responsibility, after all, is to get the most value out of the organization’s research spend. Nothing forces research firms, incumbent and prospective alike, to sharpen their pencils in pricing their tracking services quite like an RFP for a tracking study.
The third and final installment of “Have you thought about your tracker lately?” will be published next month. In it, the reader will see a step-by-step process for migrating from one method to another and/or from one supplier to another while minimizing the loss of historical data. Done with some care, such a migration enables an organization to retain at least some, if not most, of the historical data gathered in previous waves of tracking research.