Behavioral scientists and marketers will be familiar with the traditional approach of reducing friction as much as possible along the consumer journey. Often the first step firms take when addressing consumer experience is to deconstruct the customer journey to identify barriers that create friction and eliminate them. In theory, this leads to a better outcome for both the business and the consumer.
However, despite their popularity, frictionless experiences may not be ideal in every case.
Renée Richardson Gosline, Senior Lecturer in the Management Science group at MIT’s Sloan School of Management and the Lead Research Scientist in the Human-AI Interaction Group at MIT’s Initiative on the Digital Economy, conducts research at the intersection of behavioral science, consumer experience, and artificial intelligence/machine learning (AI/ML).
She recently joined us at YCCI’s Learning from Leaders webinar to explore scenarios in which more friction can actually be beneficial.
As Gosline explains, “Friction is not the same thing as a pain point.” Take, for example, work on the “accuracy nudge,” a feature introduced to reduce misinformation on social media (Pennycook, McPhetres, Zhang, Lu, and Rand 2020). In this case, the accuracy nudge was a disclaimer that encouraged users to consider the importance of accuracy before reposting. The inclusion of this disclaimer led to a threefold increase in users’ truth discernment. A bit of “good friction” interrupted potential harm by reducing the spread of misinformation, which in turn created a more positive experience for the platform’s users.
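As a rough illustration, the nudge described above amounts to an interstitial gate placed between the user's intent and the repost action. The sketch below is purely hypothetical: the function names, data shapes, and `confirm` callback are illustrative stand-ins, not any platform's actual API.

```python
def accuracy_nudge(post, confirm):
    """Insert 'good friction': prompt the user to consider accuracy
    before a repost goes through, rather than posting instantly."""
    prompt = f'Before you share, how accurate is this headline? "{post["headline"]}"'
    if confirm(prompt):  # the user pauses, reflects, and chooses to proceed
        return {"action": "repost", "post": post}
    return {"action": "cancel", "post": post}

# Usage: the confirm callback stands in for the platform's UI dialog.
result = accuracy_nudge({"headline": "Example headline"}, confirm=lambda p: True)
```

The point of the pattern is the pause itself: the extra step interrupts reflexive sharing without blocking the user outright.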
Gosline highlights three scenarios where firms can use positive friction to enhance the customer experience. The first is to add friction when AI/ML could lead to bias or potential harm for human beings. She gives the hypothetical example of an AI/ML algorithm used to predict the likelihood of a person committing a crime, a task in which algorithmic bias can cause real harm. For responsible AI, people should be given the opportunity to opt in or out of surveillance, and even to change their minds. Doing so preserves consumer agency and interrupts algorithmic bias before it produces suboptimal outcomes.
The second scenario where firms can use friction for optimal outcomes is when effort is required to update human insights about customers. Firms should not run on autopilot, relying on algorithms when the human element is needed and changing market conditions must be taken into account, for example when managers need to stay engaged and attuned to developments in the market. Gosline suggests adding friction in the form of experiments at various touchpoints to maintain and update insights about customers rather than relying on outdated algorithms.
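One lightweight way to build that experimental friction in, as a sketch under assumed names (`assign_variant` and its parameters are hypothetical, not from the talk), is deterministic assignment of customers to a control or friction arm at each touchpoint, so the firm keeps generating fresh evidence about its customers:

```python
import random

def assign_variant(user_id, touchpoint, variants=("control", "friction"), seed="v1"):
    """Deterministically assign a user to an experiment arm at a given
    touchpoint. Seeding with a string keeps assignment stable across runs,
    so the same user always sees the same arm at the same touchpoint."""
    rng = random.Random(f"{seed}:{touchpoint}:{user_id}")
    return rng.choice(variants)

# Usage: the same (user, touchpoint) pair always maps to the same arm.
arm = assign_variant(42, "checkout")
```

Stable assignment matters here: if users flip between arms on every visit, the resulting data cannot cleanly separate the effect of the added friction from noise.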
The last scenario where good friction can be used is when deliberation serves customers well, or when firms want to show customers that they are attentive to their needs. Gosline gives the example of a doctor’s office that uses AI/ML to produce a patient diagnosis at the push of a button instead of taking the time and care to talk with the patient in person. This could leave a negative impression and degrade the consumer experience. In this case, the longer deliberation time of a medical professional may serve the patient better than an experience optimized to be quick and frictionless.
Ultimately, firms that embrace responsible AI and machine learning may be well-positioned over time to build trust and competitive advantage, as opposed to firms that focus exclusively on frictionless customer experience.
You can watch Renée’s full talk here.