DIAL LOG

[SPECIAL FEATURE]
HOW FLAWED RECALL & MEMORY BIAS POLLUTE MARKET RESEARCH AND WHAT CAN BE DONE ABOUT IT


Dialsmith is publishing this series of articles in partnership with ESOMAR’s Research World Connect. The series is part of a broader program, developed and sponsored by Dialsmith, centered on exposing the challenges around recall and memory bias in market research. The program consists of a series of discussions with experts from the market research and academic communities and features a live panel session at IIeX North America and a webinar later this year.


PART II | HERE’S THE PROBLEM WITH RECALL-BASED RESEARCH

In Part I of our series, we kicked things off by introducing our advisory team of research experts, the people joining us on this journey to uncover the problems caused by memory and recall bias in market research, and by discussing what motivates each member of the team to take part in this effort. Here, in Part II, we dive right into the issue by asking each of our experts to explain, from their vantage point, the problems they’ve experienced with memory- and recall-based methods.

We begin with Dr. Elizabeth Loftus, one of the world’s most respected academic researchers and authorities on memory manipulation and false memories. For decades, Dr. Loftus has researched how memory is affected by external and internal factors, and she is seen as a pioneer in breaking down common perceptions around memory reliability and accuracy; her TED Talk on the subject has garnered almost 3 million views. As you might expect, we invited Dr. Loftus to participate because of her decades-long investigation of memory. But unlike market researchers, Dr. Loftus actively attempts to introduce false memories in order to identify where those memories come from and what triggers them. As such, Dr. Loftus concedes that recall and memory bias are not issues in her research; they are, in fact, the topic of study. “In our research, we already know what the right answer is, but what fascinates us is figuring out what causes the wrong answer.” But market research is a different animal. “The problem (in market research) is when you get out there in the real world and start asking questions that you don’t know the right answer to, you have no other option than to take someone’s word for it. In that scenario, memory can cause all sorts of problems.”

“For example, there’s evidence from cognitive psychologists showing that, when asked, people remember their school grades as better than they really were, or that they gave more to charity than they really did, and so on. It’s human nature for people to distort memories in a more positive direction, and that could certainly cause major problems if you’re doing a survey and you care about accuracy.”

Our second expert, Elizabeth Merrick, spends the majority of her time trying to get into the minds of customers. After dedicating years to uncovering customer insights at interactive multichannel retailer HSN, Merrick is now the head of customer insights analytics for Nest, Google’s smart thermostat company. Merrick has first-hand knowledge of the adverse ripple effects caused by recall- and memory-dependent research.

“When I worked for HSN, we knew which specific shows our customers were watching on TV because we knew which product was being featured at that time. So, if they called in, we could extrapolate that they were calling because they were watching our show, as that was the only prompt for them to buy the product we were advertising. Then, a couple of weeks later, we’d try to get diagnostics on our campaigns and there was, of course, the requisite recall, which resulted in a 30 percent mis-attribution rate. We had people who we knew did not see our ads tell us they had seen them. A lot of the time, it was because they had seen something similar and we just weren’t doing something memorable enough for them to distinguish, or they got caught up in pleasing the researcher. But there was also consistently that group of people who would tell us, without a doubt, that they had seen our ad. And we saw that over and over again with this consistent 30 percent (mis-attribution) rate.”

Merrick also brought up a recent study she conducted involving ad recall, during which, “It was painfully evident that our respondents were not able to properly attribute the ad to the right company. In follow-ups, participants would say, ‘I definitely saw it and it was definitely that brand’s ad.’ And so, I know it’s wrong on more than just the recall metrics of whether they had seen it or not, but even on the content of the ads themselves. That’s a real problem for us because then we’re making decisions on bad information. If you don’t know which responses you can trust, you end up having to throw a lot of that data out.”

The issues around recall bias in market research are nothing new to our third team member, Andrew Jeavons, who has been critical of market research’s recall dependency since his days as CEO of Survey Analytics in 2012. Unlike memories themselves, Jeavons’ view on recall hasn’t changed over time.

“The brain makes a lot of stuff up. What we see isn’t really what’s out there. There are all sorts of bits missing. And maybe we’re biased towards constructing things that didn’t happen or distorting them because that’s the way our minds work.

Christopher Columbus, whose map of the world was about as accurate as recall-based research. Yes, that Columbus.

“I think we have to find a different way of structuring when or how we tap into people’s memories. Because the way we’ve been doing it up until now—with traditional questionnaires—leads to distortions.”

So, what are we to make of these experiences with recall-based research? Here’s an analogy that seems to fit…

Conducting market research based solely on recalled memories is like navigating the globe using the Henricus Martellus World Map (y’know, the one Columbus used on his voyage to the Indies). There’s useful factual data there, but there are also plenty of errors and key details left to chance, and that’s not something we, in the market research world, should be satisfied with.

For our next post, we’ll continue this discussion, focusing on study results and what it looks like when recall-based research goes bad. Stay tuned.

PART III | THE RIPPLE EFFECT

PART IV | CAN WE FIX THIS?

PART V | “MEMORABLE” HIGHLIGHTS FROM OUR IIeX PANEL


Want to keep up with our investigation? Subscribe to our list and we’ll notify you when the next blog post is up. We also invite you to follow and add to this discussion on Twitter at @Dialsmith and with #ExposingRecallMRx.

 

Columbus photo courtesy of layers-of-learning.com.