Yale School of Management

Center for Customer Insights

Advancing the frontiers of consumer understanding

When the Truth Hurts

Efforts to resolve conflicts of interest usually revolve around disclosure. Make the conflict known, the notion goes, and everyone will be better off. But what if disclosure doesn’t work so neatly in practice?

May 14, 2014

Professional conflicts of interest are pervasive: realtors benefit from quick sales, doctors from pharmaceutical promotion, and credit raters from financial ties to the firms that they rate. The simple act of disclosure has become a standard tool for resolving these conflicts. But a recent article in the Journal of Personality and Social Psychology reveals an unrecognized and perverse effect of disclosure that can make matters worse.

The authors—Daylian Cain of Yale, lead author Sunita Sah of Georgetown, and George Loewenstein of Carnegie Mellon—designed a series of experiments to test how disclosing a conflict of interest affects decision-making. They discovered that when an advisor reveals a personal conflict of interest to an advisee who is trying to make a choice, this transparency can shift the outcome in favor of the advisor. The authors call this the “burden of disclosure”: rather than serving as a helpful precaution for advisees, disclosure can become an implicit and persuasive request to satisfy the advisor’s interest once that interest has been revealed. (One analog is charity drives, which, through the pressure of the direct request, elicit donations from people who do not want to donate.)

The experiments, six in total, were designed around the roll of a die. Participants were randomly assigned the role of either advisor or chooser. Advisors received information about the prizes associated with two distinct lotteries, “die-roll A” and “die-roll B,” with die-roll A offering clearly superior prizes. When left to their own devices, 98 percent of participants preferred die-roll A. In the experiments, choosers received advice on which die-roll to select. Some advisors faced a conflict of interest: their payoffs increased when their advisees chose the otherwise inferior die-roll B. In about half of these cases, the advisors had to disclose this conflict in a written statement: “First, I should let you know that I get a die-roll myself if you choose die B. I get nothing if you choose die A, so it is in my interest that you choose die B.”

Instead of a warning, disclosure can become a burdensome request to comply with distrusted advice.

In the first experiment, despite the superiority of die-roll A, 53 percent of choosers selected die-roll B when their advisor recommended it. When the advisor also presented the written disclosure above, 81 percent of choosers complied with the advice to select die-roll B. Disclosure increased compliance. In a post-experiment survey, these choosers also reported being less pleased with their choice and, interestingly, less likely to trust their advisor, whose recommendation they regarded as self-interested. At the same time, advisees who received the disclosure felt significantly greater pressure to help their advisor and were uncomfortable rejecting the recommendation.

The authors recognized the small stakes of this first experiment, which made complying with social pressure relatively costless, as a limitation, so the second experiment raised the stakes. Surprisingly, the burden of disclosure did not shrink: choosers informed of their advisor’s conflict of interest still overwhelmingly (82 percent) chose the inferior die-roll when advised to do so.

The authors suggest that compliance is driven not merely by knowing that one can satisfy an advisor’s personal interest by complying—it is not as if advisees are truly keen to help their advisors—but by advisees’ reluctance to be seen as knowingly refusing to help. The other experiments tested a variety of conditions: What happens if the conflict is disclosed by a third party, leaving the advisor unsure of what the advisee knows? What if advisors are presented as experts? What if choosers are given a chance to privately revise their decisions? Each condition altered the persuasive effects of disclosure. The burden of disclosure was particularly strong when disclosure was made in person and when decisions were made in front of advisors. Third-party disclosure and the opportunity to privately revise decisions, meanwhile, reduced the negative effect of disclosure, though they did not eliminate it. The authors conclude that it is important to provide cover for advisees’ choices, away from the prying eyes of expectant advisors.

These results “provide grounds for pessimism about the likely impact of disclosure, both on the quality of advice given and its impact on advisees,” the authors write. But they are careful to note that they are not “opposed to transparency.” Rather, they hope to raise awareness that transparency is often an insufficient cure for conflicts of interest, that it sometimes exacerbates the problem, and that it can be introduced in ways that make it more or less effective. When done wrong, the authors conclude, “instead of a warning, disclosure can become a burdensome request to comply with distrusted advice.”