A broad array of practices is designed to rid professional advisers of bias. But underlying psychological factors make this a more difficult task than appears at first glance.
There are financial, insurance, career and academic advisers; there are relationship, nutrition, spiritual and parental advisers; there are advisers for fashion and feng shui and leadership. Nearly every decision we make can be advised, if we want it to be.
This has advantages: second opinions can be useful and reassuring. Two minds are often better than one. But this also has disadvantages: advisers, despite their best intentions, may not give the best advice; they might offer suggestions that they, personally, would not follow. Female obstetricians and gynecologists, for example, advise patients to undergo mammography screenings earlier and more often than they get themselves screened; financial advisers tend to be more cautious with client investments than with their own. (This leaves open the question of whether a more cautious or more carefree perspective is better.)
In general, “decision-makers are significantly more risk averse when choosing for others than when choosing for themselves,” write Jason Dana and Daylian Cain of the Yale School of Management in a forthcoming review article for Current Opinion in Psychology. Dana and Cain go on to explore why people with seemingly good intentions—those without typical conflicts of interest, which the researchers study elsewhere—advise differently than they choose for themselves. “Our focus,” they write, “is on nonpecuniary, psychological factors that lead advice to diverge from choice.”
They find a number of factors that account for this divergence. To start, people appear to have a limited capacity for symhedonia—the positive emotion connected with others’ good fortune. People are instead more sympathetic to others’ losses. Advisers may thus be prone to weighing losses more heavily than gains when choosing on behalf of others. Advisers also expect to be held accountable for the advice they give. While this accountability has self-evident benefits, people are generally blamed more for failure than they are credited for success. Fully aware of this imbalance, advisers are likely to consider the fallout of a decision more seriously than its potential benefits.
Dana and Cain indicate how these underlying psychological factors contribute to a larger problem: many policies intuitively aimed at improving the quality of advice can, unintentionally, end up making it worse. For instance, making advisers more accountable for outcomes could simply make them more risk-averse, leading them to shy away from better but riskier advice. Similarly, encouraging the development of close personal relationships can make advisers overly cautious and advisees more obsequious, with both parties making efforts to avoid straining the social bond. Studies have shown that the longer patients have visited with the same medical provider, the less likely they are to seek second opinions and the more costly their care is.
In some ways, the final prognosis is bleak: “We are skeptical that advisers can rid themselves of the cognitive and motivational biases that skew advice,” conclude Dana and Cain. But they offer “a potential curative”: advisers should project their own tastes when giving advice; they should advise others to act as they would act themselves. To nudge advisers in this direction, advisees might frame consultations with the question “What would you do?” rather than “What should I do?”
Regardless of the strategy, the findings indicate the need not simply to avoid financial conflicts of interest among advisers but also to limit the influence of less visible psychological factors that contribute to conflicted advice. “By definition, a majority of us are in the majority a majority of the time,” write Dana and Cain. “Absent strong evidence that one is in the minority, it is probably an improvement to assume others want what we do when giving advice.”