[Image: Robotic hand about to eat a cupcake with a cherry]

The New Food Critic: AI-Generated Restaurant Reviews Fool the Best of Us

The popularity of generative AI platforms has made it easy to brew fake reviews. The problem is that customers can’t spot the difference between real ones and fake ones.

If online reviews are your guide to the hottest restaurant in town, here is some food for thought. A new study suggests that it is nearly impossible for people to distinguish between reviews written by actual customers and those generated by AI tools.

Balázs Kovács, a professor of organizational behavior at Yale School of Management and the study's author, has followed the evolution of online restaurant reviews since 2004, when Yelp entered the market.  

Back in the day, he says, most published reviews were written by humans with authentic dining experiences. Today, however, generative AI platforms like OpenAI’s ChatGPT can cook up long reviews from just a few human prompts. “And these tools are sophisticated and sound almost human-like,” he adds.

Kovács says that such platforms make it fairly easy for restaurants to generate fake reviews to bolster their ratings and for customers to write reviews without an actual dining experience. These reviews can then be posted on popular sites, which currently have few tools to vet them for authenticity.

However, the question that nagged at him was: Can customers who turn to online reviews be fooled into believing that humans wrote AI-generated reviews?

“For long texts, which are about 20 pages long, it is easy to spot what is AI-generated from what is human-written. For short text, it isn’t. But online restaurant reviews fall in the mid-length range, another reason that made the research interesting,” he says.

To look for answers, Kovács turned to Yelp. He shortlisted nearly 100 reviews of restaurants across the USA posted on the site in 2019. Since 2019 predated the popularity of generative AI platforms, Kovács reasoned, most of those reviews must have been written by actual customers who had real dining experiences.

Next, he fed the reviews to GPT-4 and got AI versions of the reviews about the same length as the real ones. The AI reviews also included human-like elements such as quirks, typos, and all-caps for emphasis. Kovács refers to the AI versions as ‘fake reviews.’ “I call them fake because the AI tool didn’t go to the restaurant and try a chocolate cake or a beer,” he adds.

He then recruited a panel of online participants, each of whom was randomly shown 20 reviews, some real and some fake, and asked to identify which were written by humans and which by AI. As an incentive, he promised participants a bonus if they correctly classified at least 16 of the 20 reviews. Of the 151 participants, only 6 earned the extra payment, indicating that most couldn’t spot the differences.
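As a rough sanity check (not part of the study itself), one can ask how often a participant would clear the 16-of-20 bonus threshold by pure guessing, assuming a 50% chance of a correct call on each review:

```python
from math import comb

# Probability of correctly classifying at least 16 of 20 reviews
# by guessing, assuming a 50% chance per review (binomial model).
n, threshold = 20, 16
p_bonus = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n
print(f"P(>=16 correct by guessing) = {p_bonus:.4f}")  # ~0.0059

# Expected number of bonus winners among 151 purely guessing participants
print(f"Expected winners by chance: {151 * p_bonus:.2f}")  # ~0.89
```

Under this assumed chance model, fewer than one of the 151 participants would be expected to win the bonus by luck alone, so the 6 actual winners suggest performance only modestly above guessing.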

However, the study noted that younger participants fared slightly better at identifying the fakes, perhaps owing to their greater familiarity with generative AI platforms.

Subsequently, Kovács conducted another study to check whether AI detector platforms like CopyLeaks can identify AI-generated restaurant reviews. These detection platforms are websites where users can paste any text and receive a verdict on whether it is human-written or AI-generated. However, Kovács found that none of these platforms could spot the differences.

He notes that GPT-4 can create text that often deceives detectors, especially when instructed to include intentional typos and colloquialisms.
 
“This is one more piece of evidence that AI is becoming good at generating human-like text. People can’t tell the difference between real and fake text,” Kovács says.

Traditionally, people relied on word-of-mouth publicity or dedicated food critics to make decisions. However, the popularity of online platforms such as Yelp completely changed how people scout for recommendations, Kovács says. A BrightLocal survey shows that 87% of consumers read online reviews for local businesses, and 79% trust them as much as personal recommendations.

The results also ring alarm bells for the platforms that host these reviews. Fake reviews can damage a platform’s reputation and legitimacy, and readers may soon lose trust in such sites, Kovács says.

However, he believes that the most significant implications of the new findings are for consumers who believe in the authenticity of these reviews and make their decisions based on them. Kovács notes that this pervasive problem of fake reviews isn’t only pertinent to the restaurant industry; it extends to most decisions for which people turn to the online world for recommendations, whether it’s consumers buying their next vacuum cleaner or citizens electing a new president. “If the problem of fake content isn’t taken care of, nobody is going to know what to believe anymore,” Kovács concludes. 
