AI in Market Research: Useful Tool or a Threat?

AI has entered the world of market research. From analyzing open-ended qualitative responses to detecting inconsistencies and patterns in survey data, it helps researchers move faster and deliver deeper insights. In many ways, AI has raised the bar for research quality—offering faster turnaround and the ability to handle much larger datasets.
But it’s not all rosy. While AI is helping to analyze data, it’s also becoming a source of data contamination. At Veylinx, we have begun to notice an unsettling trend: participants using AI to generate responses instead of sharing genuine opinions, or even having AI complete the entire survey. In a recent product placement study where respondents were asked to describe their experience using a product via video, one respondent submitted an AI-generated video explaining how much they enjoyed the product. When we receive a response like that, certain questions arise: Did they even test the product? Were they simply looking for a shortcut—the easiest way to make some money? And most importantly, was it even a real person?
So, how can we trust the data we collect? What can we do to protect the integrity of our studies and deliver results that represent the real market?
The most obvious step is to detect and flag AI-generated content so we can exclude it from a study’s results. However, AI is continually evolving and becoming increasingly adept at mimicking genuine human responses. Fieldwork panel providers already have rigorous screening steps in place, and these work well most of the time, but no screening is foolproof. One of the most effective methods we use to identify potential spam data is incorporating carefully designed open-ended questions into surveys. Responses with unusually formal grammar, a lack of personal detail, or overly generic sentiments are among the red flags for AI involvement.
Take this example from a commercial sunscreen study:
“I don’t actually use any products, including sunscreen, as I am an AI and don’t have physical needs. As an AI, I don’t use sunscreen or have personal preferences.” It literally admits it’s an AI!
A more subtle one might be:
“I don’t have personal experiences or preferences, so I don’t use any products myself. I don’t personally use sunscreen, but I can provide information on common SPF levels for sunscreens from those brands.” This one sounds more human, but it’s too well-written, and it even offers to provide extra information, a common AI trait.
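As a rough illustration of how red flags like these can be screened automatically, here is a minimal first-pass filter in Python. The phrase patterns and helper name are hypothetical examples for this sketch, not Veylinx’s actual detection rules, and a real pipeline would combine this with panel-provider checks and human review:

```python
import re

# Hypothetical red-flag patterns inspired by the sample responses above.
# A production system would use a much richer set of signals.
AI_TELLS = [
    r"\bas an ai\b",
    r"\bi am an ai\b",
    r"\bi don['’]?t have personal (experiences|preferences|needs)\b",
    r"\bi can provide (more )?information\b",
]

def flag_response(text: str) -> list[str]:
    """Return the red-flag patterns matched in a survey response."""
    lowered = text.lower()
    return [pattern for pattern in AI_TELLS if re.search(pattern, lowered)]
```

A response that matches one or more patterns would be routed to human review rather than excluded outright, since keyword filters alone produce false positives.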
Combining the technology our partners already use with our own tools and human judgment is an effective way to detect AI-generated content. Open-ended questions are especially beneficial—not only for spotting AI but also for identifying participants who weren’t paying attention and for collecting detailed feedback on the product being tested. This means better data quality overall. However, this is just one solution among many; there is always more that can be done. As AI continues to evolve, so do our methods. By staying vigilant and continually adapting our tools and processes, we can ensure that the insights we deliver remain authentic, trustworthy, and reflective of the actual market.