Buyer Beware: Marketing Professor On The Hunt For Fake Reviews
So how do you know the review you're reading is real and not fictional?
Ann Kronrod, a professor in the Marketing, Entrepreneurship and Innovation (MEI) department of the Manning School of Business, is developing a tool to help you.
Kronrod, whose research interests include linguistics and text analysis in marketing, has explored how the language used in an online review can reveal whether the reviewer actually tried the product, and she has created an algorithm that can help detect fake reviews.
She is the lead author of the recently published article "Been There, Done That: How Episodic and Semantic Memory Affects the Language of Authentic and Fictitious Reviews" in the Journal of Consumer Research.
“People want to know whether what they are reading is true,” said Kronrod, who conducted the study with two assistant professors of marketing: Jeffrey K. Lee of American University in Washington and Ivan Gordeliy of EDHEC Business School in France.
Companies have a stake in this too, Kronrod says, in part because of the risk of litigation.
“If consumers buy a product and it differs from what the review says, they get mad at the company. That becomes the company's problem.”
Born in Russia and raised in Israel, Kronrod has a lifelong love of languages. (She is fluent in English, Russian and Hebrew, and also speaks French, Italian, Arabic and Spanish.) Applying linguistics to the business world is, she says, "the perfect combination of creativity, strategy and philosophy."
Kronrod recently sat down to discuss her research.
Q: How big is the problem of fake reviews?
A: There are alarming statistics about the percentage of the reviews we read that are not from consumers who actually tried the product. Amazon, for example, says it screens about 70% of the reviews posted on its platform, yet even "verified purchase" reviews can be faked.
Q: Why would someone post a fake review?
A: Most of these reviews are solicited or invited by companies. The company gives you a free sample of the product and asks you to write a review. But in many cases, the request is not "Leave us a review" but "Leave us a positive review." That immediately creates a bias. But that's not what my colleagues and I are looking for. We are looking for signs that a person has actually tried the product. Leaving a review for a product you haven't tried is, by definition, a lie.
Q: Your article is about episodic and semantic memory. What are those?
A: If you try a product, you will have certain memories of what happened when you tried it. So, for example, when you talk about your last vacation, you might mention the people you met or the things you saw, and those details come from your episodic memory, because it is the memory of episodes you experienced during the vacation.
But if you try to talk about a recent trip to the moon, you probably haven't actually taken one. You have no episodic memory of that experience. What you have is memory of other people's descriptions of the experience, called semantic memory, and that is where your language comes from. You will describe the scenery, the weightlessness, things you remember from other people's stories, or maybe from what you've seen in movies.
We hypothesized that, other things being equal, these two types of memory would show up differently in your language.
Q: What are those differences?
A: Concreteness. People are more concrete when they have a real memory of the experience. For example, they will say "apple" instead of "fruit." They also use more words unique to the domain and fewer generic words.
We also noticed that people use the same common words over and over again in fake reviews. So if you were describing a hotel you didn't stay at, you would keep repeating the word "hotel," because you don't have other words in your memory. There is less language diversity in fake reviews.
Fake reviews use fewer content words, meaning nouns, verbs, adjectives and adverbs. Instead, they reuse the same words or lean on generic intensifiers such as "a lot." They also repeat words from the instructions they were given for writing the review.
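The language features Kronrod describes, such as lexical diversity, repetition and content-word density, are easy to illustrate in code. Below is a minimal Python sketch, not the authors' actual method: it computes a type-token ratio as a proxy for language diversity, a rough content-word share based on a tiny hand-picked function-word list (an assumption for illustration), and the count of the most-repeated word.

```python
import re
from collections import Counter

# Tiny illustrative function-word list; a real system would use a full
# lexicon or a part-of-speech tagger (this list is an assumption).
FUNCTION_WORDS = {
    "a", "an", "the", "and", "or", "but", "of", "to", "in", "on", "at",
    "it", "is", "was", "were", "very", "really", "so", "that", "this", "i",
}

def review_features(text: str) -> dict:
    """Compute simple diversity and content-word features for one review."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"type_token_ratio": 0.0, "content_ratio": 0.0, "max_repeat": 0}
    counts = Counter(words)
    return {
        # Distinct words / total words: lower values mean the same words
        # recur over and over, as Kronrod observed in fake reviews.
        "type_token_ratio": len(counts) / len(words),
        # Share of words that are not function words: a crude stand-in for
        # the density of nouns, verbs, adjectives and adverbs.
        "content_ratio": sum(w not in FUNCTION_WORDS for w in words) / len(words),
        # How many times the single most-repeated word appears.
        "max_repeat": counts.most_common(1)[0][1],
    }

print(review_features("The hotel was great. The hotel staff at the hotel were very nice."))
print(review_features("Our corner room overlooked the harbor, and the concierge booked us a sunrise kayak tour."))
```

On these two made-up reviews, the first (repetitive, generic) scores lower on diversity and content share than the second (specific, varied), which is the pattern the interview describes.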
Q: What experiments did you conduct?
A: We ran an experiment with 800 people, half of whom tried a product we created: a phone app with neck-stretching exercises. The other half did not try the app. We told everyone it was a product we had made. We then asked everyone to write a review of the app, and we analyzed those reviews for language features. So half of the people wrote reviews after actually trying the app, and the other half wrote reviews without having tried it.
Then we wrote code and developed an algorithm that taught a computer to distinguish between authentic and fictitious reviews using these language features. We then tested our algorithm on two databases of published reviews, including a downloaded collection of Amazon reviews.
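A generic version of that classification step can be sketched with a standard library. The example below trains scikit-learn's LogisticRegression on toy feature vectors shaped like the ones from the sketch above; both the model choice and the numbers are assumptions for illustration, not the algorithm or data from the paper.

```python
# Generic stand-in for the classification step (an assumption; the
# published algorithm and its features may differ).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy feature vectors: [type_token_ratio, content_ratio, max_repeat].
X = [
    [0.85, 0.60, 2], [0.80, 0.55, 2], [0.90, 0.65, 1], [0.78, 0.58, 3],  # "tried it"
    [0.55, 0.40, 6], [0.50, 0.35, 7], [0.60, 0.45, 5], [0.52, 0.38, 6],  # "didn't"
]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = reviewer tried the product, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Holding out part of the labeled reviews for testing, as here, is the standard way to check that such a classifier generalizes rather than memorizing its training data.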
Q: What's next for this research?
A: Several things. First, we got a call from a major retailer asking whether we would work with them to identify the real and fake reviews on their site. They have millions of reviews, and we hope we can apply our algorithm and help the company.
Another direction this collaboration could take is examining reviews that sellers themselves leave on the company's website. It's not that they don't know the product; they know it even better than consumers do. So these are fake reviews left by experts rather than by people who simply haven't tried the product. We want to see how their language differs from that of an average person who has tried the product. We'll see where that takes us.
We are also trying to see whether we can teach consumers to recognize fake reviews. So far, it seems people can't do it on their own. We can't read for comprehension and analyze the linguistic features of a review at the same time. It's too much for our poor brains.
