The market research & insights industry is buzzing with excitement over newly emerging and on-the-horizon applications of artificial intelligence (AI), and rightfully so. Machine learning applied to open-end coding, data processing, survey programming and administration, participant sourcing, and even quantitative and qualitative interviewing is sure to have a lasting impact, helping us deliver faster, higher-quality insights at lower cost. However, arguably the biggest impact AI has had on the insights industry to date is one that all research practitioners and end users should be very afraid of, and one that we’re not talking about nearly enough… SURVEY BOTS.
In a nutshell, survey bots are programs that automate the completion of online surveys. In most cases, bots are a means for unscrupulous programmers to rack up cash, rewards, and sweepstakes entries for participating in paid research studies without actually taking the time to share honest opinions. On the surface, these ‘participants’ appear to have qualified for and legitimately completed a survey, but their survey data is complete garbage. These bots are becoming more prevalent and more sophisticated, and weeding them out should be an obsession of the insights industry. For a fun, fear-inducing exercise, Google the term ‘survey bot.’ What you would hope to see in your results is a collection of market research and academic thought leadership on how to avoid and prevent such parasites. What you actually see, however, is a laundry list of tutorials, services, and software downloads for executing your very own bots and gaming the system.
It’s certainly a scary issue, but not an insurmountable one. Clients and suppliers need to remain vigilant and work together to minimize the impact of survey bots. A handful of effective and relatively painless techniques to do so include:
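As one illustration of the kind of check involved, here is a minimal sketch of automated quality-control flags for a survey response. The field names (`duration_sec`, `grid_answers`, `trap_answer`) and thresholds are illustrative assumptions, not any particular platform’s standard.

```python
# Hypothetical quality-control flags for one survey response.
# Thresholds and field names are assumptions for illustration only.

MIN_DURATION_SEC = 120        # flag completes faster than 2 minutes
EXPECTED_TRAP_ANSWER = "3"    # e.g., a "Please select option 3" trap item

def flag_suspect(response: dict) -> list:
    """Return the list of quality-control flags raised by one response."""
    flags = []
    # Speeders: humans rarely finish a full survey this quickly.
    if response["duration_sec"] < MIN_DURATION_SEC:
        flags.append("speeder")
    # Straightlining: every item in a rating grid answered identically.
    grid = response["grid_answers"]
    if len(grid) > 1 and len(set(grid)) == 1:
        flags.append("straightliner")
    # Trap (attention-check) question answered incorrectly.
    if response["trap_answer"] != EXPECTED_TRAP_ANSWER:
        flags.append("failed_trap")
    return flags

suspect = flag_suspect({
    "duration_sec": 45,
    "grid_answers": [5, 5, 5, 5, 5],
    "trap_answer": "1",
})
```

In practice no single flag is proof of a bot; responses raising multiple flags are the strongest candidates for removal.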
The quality control steps described above are an example of the detail-oriented approach Accelerant Research takes on every project. We sweat the details. We invite you to request a cost estimate and experience the difference for yourself. Simply give us a call (704-206-8500) or send us an email (email@example.com).
You have chosen the cities where you, your clients, and your colleagues will travel for focus groups. Now you must contact the facilities in those cities and provide them with study specifications for recruitment purposes. What you may not realize is that this is a major turning point in the study. To say that recruitment can make or break a study is an understatement.
At Accelerant Research, we have some of the most highly skilled recruiters in the business. When we handle your recruit, participants show up and are vocal. These two factors, as well as the following, are key to successful qualitative studies:
We invite you to request a cost estimate from us as a first step. If we are granted the opportunity to work with you, we are confident that the quality of service you receive will be a marked improvement.
Good luck and safe travels!
Among the unique amenities that set us apart from other facilities in Charlotte and the Southeast are:
o Integrated Apple TV
o Multi-channel receiver for presenting Blu-ray, DVD, VHS, and CD stimulus materials
o In-room integrated HDMI/VGA ports for easy plug-in to the moderator’s laptop
o High-performance microphone for pristine audio quality
o HD wide-angle digital video camera
o Recording-studio-quality soundproofing
o High-speed wired and wireless Internet (15 Mbps down / 2 Mbps up)
o Wireless network supporting up to 300 Mbps
CLICK HERE FOR A FEW PICTURES OF THE FACILITY
For more information about our facility or to book your next in-person project, send us an email at firstname.lastname@example.org or call us at 704.206.8500.
Last month, we issued the second of three newsletter installments on identifying opportunities to drive more “bang for the research buck” on tracking studies, along with suggestions on how to execute that process. In this third and final installment, we outline the practices required to preserve historical trends in customer satisfaction data and to bridge the potential gap between results obtained by the previous research supplier and those obtained by the new research firm commissioned to execute a better or more updated tracking program.
The key objective in managing changes to an existing tracking study is to preserve the historical trends obtained in previous waves of research by mitigating the risks involved in the change. If a risk mitigation plan is not deployed, new results may not be comparable to previous ones, and the time, money, and resources spent building those historical trends are wasted.
With that end in mind, the mitigation plan must be designed to identify all sources of variance (characteristics of the data and collection methods that may prevent newer data from being comparable to past data) and, one by one, eliminate or control for each source of variance to the extent possible. In the end, if there are still significant differences between past data and new data, an algorithm must be built to equate historical data to new data and enable comparability to historical trends.
The sources of variance in tracking study migration include:
The first three above can be considered “error variance sources” that could be eliminated. The last two sources should be considered “market-related variance sources” which cannot be eliminated but can be accounted for and controlled.
Maintaining the same data collection method and using the exact same questionnaire are key to mitigating risk and preserving historical trends, because consistency in the method of data collection and in the questionnaire eliminates the first two sources of error variance. If the exact same questionnaire and the exact same method are used by both suppliers, then any significant differences between historical and new satisfaction scores can only be attributable to differences in interviewers (error variance source #3), market-related variance sources notwithstanding.
In order to control for market-related sources of variance, we recommend conducting waves of a given tracking study in parallel: that is, continue to allow the research supplier responsible for the historical trends to collect data while the new research supplier collects data in the exact same format and from the exact same population. If market-related sources of variance arise during this parallel data collection period, they will have an equal effect on the data collected by both suppliers. These variance sources are therefore held constant, and thus controlled for, when comparing the data collected by one supplier to the other.
In terms of the parallel testing outlined above, we recommend running a given tracking study in parallel for at least three months. While this increases study costs, it prevents the loss of historical trends, which is usually far more costly. To reduce these costs, the number of interviews administered by one supplier need not match the number administered by the other.
Therefore, if all other study-related sources of error variance are eliminated and market-related sources of variance are held constant, yet significant differences are still found in the parallel test, an in-depth analysis of the data is required. This analysis entails statistical testing of the reported satisfaction scores as well as the variance of each survey item’s data. While the industry standard for significance testing is the 95% confidence level, in this case we recommend reducing the confidence level to 90% or even lower.
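A minimal sketch of such a test on one attribute, at the recommended 90% confidence level, might look like the following. It uses a Welch-style two-sample test with a normal approximation, which is reasonable for the large samples typical of tracking studies; the scores below are invented for illustration.

```python
# Two-sample test of mean satisfaction scores from the parallel waves.
# Normal approximation assumed (adequate for large tracking samples);
# all data below are made up for illustration.
import math
from statistics import mean, variance

def two_sample_z(scores_a, scores_b):
    """Return (test statistic, two-sided p-value) for a mean difference."""
    na, nb = len(scores_a), len(scores_b)
    se = math.sqrt(variance(scores_a) / na + variance(scores_b) / nb)
    z = (mean(scores_a) - mean(scores_b)) / se
    # Two-sided p-value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

old_supplier = [8, 7, 9, 8, 7, 8, 9, 7, 8, 8]   # parallel wave, old supplier
new_supplier = [7, 6, 8, 7, 6, 7, 8, 6, 7, 7]   # parallel wave, new supplier
z, p = two_sample_z(old_supplier, new_supplier)
significant = p < 0.10   # 90% confidence level, per the recommendation above
```

Lowering the confidence level to 90% makes the test more sensitive, which is the point here: it is better to flag a marginal supplier effect and investigate it than to miss one.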
So, statistically test the differences between all attribute scores from the two studies completed in parallel. Any significant difference between attribute scores should be examined, and a complete exploratory data analysis (EDA) should be conducted on that study. It will also be necessary to test for differences in sample composition. This sample composition test should be completed prior to the samples being finalized, but note that interviewing techniques and completion rates may still impact respondent composition between the studies.
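The sample composition check can be sketched as a two-proportion test on a single demographic share; in practice every quota cell would be compared. The “under 35” share and the counts below are assumptions for illustration.

```python
# Two-proportion z-test comparing sample composition between suppliers.
# The demographic cell ("under 35") and counts are illustrative only.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Return the z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Supplier A: 180 of 400 completes under 35; Supplier B: 120 of 400.
z = two_proportion_z(180, 400, 120, 400)
differs = abs(z) > 1.645   # two-sided critical value at the 90% level
```

If the composition differs significantly, weighting or re-fielding the affected cells should be considered before attribute scores are compared.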
At the completion of the three-month parallel tracking study, a complete technical report should be provided containing any significant differences found between the two studies, with complete explanations (e.g., ANCOVAs) of those differences. Where an attribute score is significantly different between the two studies, the historical data can be statistically adjusted.
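In its simplest form, that statistical adjustment is a bridging offset: if the parallel test shows the new supplier runs consistently lower on an attribute, shift the historical trend by the observed mean difference so old and new waves plot on one scale. A real adjustment would come from the ANCOVA rather than a flat offset, and all numbers below are invented.

```python
# Simplified additive bridging adjustment for one attribute's trend.
# A production adjustment would be model-based (ANCOVA); this flat
# offset is a sketch, and the scores are invented for illustration.
from statistics import mean

def bridge_historical(historical, old_parallel, new_parallel):
    """Shift historical scores onto the new supplier's scale."""
    offset = mean(new_parallel) - mean(old_parallel)
    return [round(score + offset, 2) for score in historical]

historical_trend = [8.1, 8.0, 8.3, 8.2]   # prior waves, old supplier
old_parallel = [8.2, 8.0, 8.1, 8.3]       # parallel period, old supplier
new_parallel = [7.8, 7.6, 7.7, 7.9]       # parallel period, new supplier
adjusted = bridge_historical(historical_trend, old_parallel, new_parallel)
```

Once bridged, the adjusted historical series and the new supplier’s ongoing waves can be trended on a single chart without a visible break.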