Framing the Ghost in the Machine: How to Build Consumer Trust in AI

Despite the growing prevalence and capabilities of predictive algorithms, consumers don’t necessarily trust what a computer tells them. New research out of YCCI demonstrates how to overcome this challenge.

Predictive algorithms are becoming central to consumer decisions: they allow us to discover new music, books, and clothing; they deliver information and food; they assist in finding romantic partners and in making medical diagnoses.

Despite this trend, “research has documented so-called ‘algorithm aversion’ or the general preference for humans’ recommendations or predictions,” says Taly Reich, associate professor at Yale SOM. Why this is the case, and how such aversion might be overcome, is the subject of a new paper centered on trust. (The work is coauthored with Alex Kaju of HEC Montreal and Sam Maglio of the University of Toronto.) “We propose that consumers are reluctant to trust algorithms that err but that those same errors, when seen as opportunities from which the algorithm can learn, enhance trust in and reliance on algorithms.”

In one of several experiments, participants were asked whether a trained psychologist or an algorithm would be better at evaluating somebody’s personality. In one condition, no further information was provided. In another, identical performance data for both the psychologist and the algorithm explicitly demonstrated improvement over time: in the first three months, each was correct 60% of the time; in the first six months, 70%; and over the first year, 80%.

Absent other information, participants chose the psychologist over the algorithm 75% of the time; they were, as Reich puts it, “actively avoiding the algorithm.” But when shown how the algorithm was able to learn, they chose it 66% of the time, more often than the human. “Participants overcame any potential algorithm aversion and behaved in a manner more consistent with algorithm appreciation—indeed, algorithm investment—by choosing it at a higher rate.”

The researchers also explored whether the way predictive software is described affects choice. Participants were asked whether they wanted to rely on their own judgment of the quality of a piece of art, or to rely on a computer to judge it for them. The software was described either as an “algorithm” or as a “machine learning algorithm.” When it was called an “algorithm,” the majority of participants chose themselves; when it was called a “machine learning algorithm,” the majority chose the computer. Simply signaling, through its name, an algorithm’s ability to learn proved sufficient to overcome the lack of trust.

These findings are particularly valuable given the unforgiving stance most consumers take toward algorithmic prediction. Prior research has shown that a single mistake can signal to people “that the algorithm is irrevocably flawed or simply broken,” Reich writes. This work, however, shows how prior mistakes can be used constructively, “leveraged as a means by which to enhance algorithm appreciation rather than result in unilateral algorithm aversion.”

The findings also bear clear practical implications for managers, particularly given the increasing reliance across sectors on algorithms and machine learning. By either illuminating the specific ways in which algorithms can learn, or by implicitly leading consumers to the conviction that algorithms can learn, businesses can help foster trust in their predictive power—a power that often exceeds that of people. With the right approach, managers will be able to “help consumers help themselves in the form of placing well-earned trust in algorithmic forecasts,” she writes. “Consistently, both of our interventions proved capable of steering consumers toward the belief that algorithms can learn from mistakes and, when they did, they won trust, support, and choice.”