The Yale Marketing Seminar Series presents recent research papers in marketing. The goal is to bring researchers from other universities to the Yale campus to stimulate exchange of ideas and deepen understanding of marketing trends. These seminars are geared towards faculty and PhD students interested in marketing.
Thursday Series 11:35 a.m.-12:50 p.m., Edward P. Evans Hall, 165 Whitney Avenue, New Haven, CT, Swersey Classroom 4430.
This seminar series is organized by Associate Professor of Marketing Taly Reich. Lunch is provided.
March 6 (2230 Nooyi Classroom)
Navdeep Sahni (Stanford University)
Paper: Search Advertising and Information Discovery: Are Consumers Averse to Sponsored Messages?
We analyze a large-scale field experiment in which a search engine randomized 3.3 million US users into two groups: (1) a group that sees a lower-than-usual level of search ads, and (2) a group that sees a higher level, the status quo. Our data reject the hypothesis that users are, overall, averse to search advertising targeted to them. At the margin, users prefer the search engine with the higher level of advertising. Usage of the search engine (in terms of number of searches and number of sessions) is higher among users who see higher levels of advertising, relative to users who see lower levels. This difference in usage persists even after the experiment ends. The increase in usage is larger for users on the competitive margin -- those who, in the past, typed a competing search engine's name in the search query and navigated away from our focal search engine.
On the supply side, newer websites are more likely to advertise, and a higher level of advertising increases traffic to newer websites. Consumers also respond more positively to advertising when local businesses in their state create new websites. Quantitatively, the search engine's revenue from ad clicks increases by between 4.3% and 14.6% when the level of advertising changes from low to high. Taken together, the patterns in our data are consistent with an equilibrium in which advertising compensates for important information gaps in organic listings: it conveys relevant new information that is hard for the search engine to gather and is therefore missed by the organic-listings algorithm. Viewing search ads, at the margin we study, makes consumers better off on average.
Nicholas Reinholtz (University of Colorado Boulder)
Paper: Can Consumers Learn Price Dispersion? Evidence for Dispersion Spillover Across Categories
Diana Tamir (Princeton)
Paper: Making predictions in the social world
The social mind is tailored to the problem of predicting other people. Imagine trying to navigate the social world without understanding that tired people tend to become frustrated, or that mean people tend to lash out. Our social interactions depend on the ability to anticipate others’ actions, and we rely on knowledge about their states (e.g., tired) and traits (e.g., mean) to do so. I will present a multi-layered framework of social cognition that helps to explain how people represent the richness and complexity of others’ minds, and how they use this representation to predict others’ actions. Using neuroimaging, behavioral, and linguistic analysis methods, I demonstrate how the social mind might leverage both the structure and dynamics of mental state representations to make predictions about the social world.
Sara Algoe (University of North Carolina)
Paper: The role of expressed gratitude in building relationships: Implications for the grateful person, the benefactor, and incidental witnesses to the expression
The expression of gratitude (“thanks”) is pervasive in everyday life, from notes to loved ones to perfunctory communications among co-workers; it is even witnessed as publicly expressed from one person to another. In this talk, building on theory about expressed emotion and the value of high-quality relationships, I present evidence from a program of research regarding why and how expressions of gratitude influence the behavior of those who hear them. The evidence focuses on why gratitude is a unique type of positive emotion expression; specifically, gratitude is an other-focused positive emotion, which comes through in its expressive behavior. In turn, expressing gratitude to one’s benefactor can draw that benefactor into the relationship. The majority of the talk focuses on new findings showing that witnessing gratitude expressed from one person to their benefactor influences third-party witnesses, too. Specifically, witnesses are more helpful and affiliative toward grateful people as well as toward the benefactors to whom gratitude is expressed. I identify both behavioral and social-perceptual mechanisms for these effects (e.g., perceived expresser responsiveness, perceived moral goodness of the benefactor). This work has implications for the role of gratitude in building high-quality relationships among members of a social network and informs a new theory about the group-level social functions of emotions.
Avi Goldfarb (University of Toronto)
Paper: Could Machine Learning be a General Purpose Technology? Evidence from Online Job Postings
There has been a great deal of speculation that machine learning might be a general-purpose technology. However, the commercial application of machine learning is relatively new and general-purpose technologies are typically identified with the benefit of many years of hindsight. For managers deciding on technology strategy, this classification will come too late. In this paper, we provide an approach to identifying a general-purpose technology before it has widely diffused, so that the classification can be used to inform decisions about technology adoption. Using data from online job postings, we compare machine learning to eight other emerging technologies in terms of breadth of industries with job postings, the importance and breadth of research roles, and the costs of innovation in organizational practices. Our results show that ML is more likely to be a general-purpose technology than other emerging technologies. This finding suggests that firms adopting ML should be patient, should expect to implement changes to organizational processes, and should recognize that their industries are likely to change as a result. In contrast, firms adopting other technologies should look for more direct and tangible benefits.
Gideon Nave (University of Pennsylvania)
Paper: We are what we watch: Movie contents predict the personality of their social media fans
The proliferation of online streaming services has increased the breadth of video content available, offering consumers a wide range of options to choose from. What determines consumers’ movie preferences? We address this question by investigating the associations between movie contents, quantified via user-generated plot keywords, and the Big Five personality traits of their Facebook fans. We find that plot keywords predict the personalities of movie fans above and beyond demographics and general movie characteristics such as quality and genre. Furthermore, fans’ personalities are associated with specific plot keywords and the movies’ psychological themes (e.g., movies with keywords related to negative emotions have neurotic fans, and movies with violent keywords have low-agreeableness fans). Our findings reveal robust links between personality and liking of specific movie contents, with implications for cinema advertising, branded entertainment and product placement in movies.
Birger Wernerfelt (MIT Sloan)
Paper: Internalization of Advertising Services: Testing a Theory of the Firm
In 1956, advertising agencies signed a consent decree aimed at ending a set of trade practices that for half a century had facilitated the bundling of their service offerings and thereby effectively precluded advertisers from owning and operating in-house agencies. Since then, large firms have internalized more and more of the services formerly performed by external agencies, perhaps as many as half. We use this phenomenon to test a theory of the firm, thereby simultaneously offering an explanation for it. The theory suggests that firms should internalize activities for which their competitive position implies (1) that frequent modifications are desirable and (2) that it is more important for human capital to be firm-specific as opposed to function-specific. It also predicts (3) that these two effects reinforce each other. We test these predictions in a cross-sectional data-set describing the extent to which firms in different environments internalize different advertising services. In addition to this test, we informally present some time-series data suggesting that both (1) and (2) have grown over time along with the level of internalization.
January 13 (Monday 11:35 AM – 12:50 PM, 4400 Lin Tech. Classroom)
Leslie K. John (Harvard Business School)
Paper: Privacy and Disclosure in the Digital Age
Why do people post salacious photos or incendiary comments on social media, when the damage to their relationships, reputation, and careers could be permanent? Why do we prefer to hire people who reveal unsavory information about themselves relative to those who simply choose not to disclose? Why are people more likely to disclose the fact that they cheated on their taxes on a website called “How BAD r U?” that clearly offers no privacy protection than on a sober-looking site with privacy safeguards? And why did Target face consumer ire for sending pregnancy-related coupons to a teenage customer it (correctly) inferred to be pregnant, while Amazon can make product recommendations conspicuously based on users’ behavior without seemingly provoking a privacy backlash? These questions are all manifestations of the privacy paradox: the apparent disconnect between people’s privacy preferences and behavior.
This talk will describe recent research investigating answers to these questions, with a particular focus on a paper exploring the effect of temporary sharing on impression formation.
Hannah Perfecto (Washington University in St. Louis)
Paper: Not all Sunshine and Rainbows: Skewed stimulus sampling has distorted most BDT findings
Behavioral decision theory (BDT) researchers frequently operationalize decision making as choice, yet consumers also often make decisions based on rejections. Similarly, researchers frequently focus on pleasant outcomes, yet consumers often find themselves with unappealing options. In an earlier paper (Perfecto, Galak, Simmons, & Nelson, 2017), I show that participants faced with rejection decisions or negative outcomes (but not both) take longer to decide and report the decision as harder to make. In the present work, I leverage this result to moderate a variety of well-known BDT findings, including the false consensus effect, anchoring, and the uncertainty effect, simply by changing the decision frame or options. By consistently relying on positive stimuli, researchers have likely overestimated the extent to which consumers employ heuristics and show biases when making judgments and decisions.
Tami Kim (University of Virginia)
Paper: Pettiness in Social Exchange
Technological innovations have introduced countless new forms of interaction between—and among—consumers and companies. For instance, the rise of new digital payment services such as Venmo and Square Cash has allowed people in communal-sharing relationships to both closely monitor payment history and pay back amounts owed—down to the last cent. While these new digital payment platforms have certainly made the exchange of money between individuals efficient and accurate, we suggest that there is a downside to the behaviors enabled by these platforms. Specifically, we identify and document a novel construct—pettiness, or intentional attentiveness to trivial details—and examine its (negative) implications in interpersonal relationships and social exchange. We show that pettiness manifests across different types of resources (both money and time) and across cultures with differing tolerance for ambiguity in relationships (the United States, Switzerland, Germany, and Austria), and is distinct from related constructs such as generosity, conscientiousness, and fastidiousness. Indeed, people dislike petty exchanges even when the (petty) amount given is more generous, suggesting that pettiness may in some instances serve as a stronger relationship signal than the actual benefits exchanged. Attentiveness to trivial details of resource exchanges harms communal-sharing relationships by making (even objectively generous) exchanges feel transactional. When exchanging resources, people should be wary of both how much they exchange and the manner in which they exchange it.
Ashley Whillans (Harvard Business School)
Paper: Time and Money Tradeoffs: How organizational factors shape the value we assign to time and happiness
Most working adults today report spending very little time with friends and family, potentially explaining rising rates of loneliness. The current research presents reward systems – how individuals are rewarded for their labor – as a key factor shaping decisions of how to allocate time and resources to friends and family versus to work colleagues. We argue that rewarding individuals for their performance on work-related tasks increases the perceived instrumentality of their work relationships, which further leads them to prioritize those relationships over personal relationships, such as friends and family. Across three laboratory studies (n = 1,079) and one large-scale archival dataset (n = 132,139), we show that exposure to performance incentives discourages individuals from spending leisure time with friends and family, instead encouraging them to spend more leisure time with work colleagues. We further document goal instrumentality as a mechanism for these results: performance incentives lead people to perceive their work relationships as highly instrumental in achieving their goals. These findings suggest that reward systems shape our perceptions of and interactions with critical social relationships.
Alain Lemaire (Columbia Business School Ph.D. Candidate)
Paper: The Role of Linguistic Match Between Users and Products
Joshua Lewis (University of Pennsylvania)
Paper: When Do People Pay to Improve Their Chances of Success?
Consumers often have to decide whether to incur a cost in order to increase their chances of success (or of avoiding failure). For example, consumers have to decide whether to pay for a vaccine that increases their chances of avoiding illness, or whether to pay for a class that increases their chances of passing an exam or earning a promotion. In our research, we investigate how people make these kinds of decisions. In so doing, we offer a theory called prospective outcome bias. According to this theory, when people decide how much they value an improvement in their chances of success (e.g., from 80% to 90%), they place too much emphasis on how much they value the outcome of that improvement (e.g., the 90% chance of success) relative to the initial chance of success they would have to begin with (e.g., the fact that they would succeed 80% of the time anyway). This produces predictable biases and mistakes in consumers’ valuations of improvements. First, we find that people are willing to pay more for a 10-percentage-point improvement in their chances of winning when their chances are already high (e.g., 80%) than when their chances are relatively low (e.g., 10%). Second, and contrary to theories that posit risk aversion and loss aversion, we find that consumers generally overpay for opportunities to improve their chances of success. Third, we find that consumers will pay more to slightly improve their chances of attaining a large reward (e.g., a 1-percentage-point improvement from 89% to 90% in their chances of winning $100) than to substantially improve their chances of attaining a small reward (e.g., a 10-percentage-point improvement from 80% to 90% in their chances of winning $10).
Together, these findings provide robust support for our theory of prospective outcome bias, while also suggesting that the way that consumers value goods that improve their chances of success is meaningfully different from how consumers value goods that are not framed as improvements.
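The third finding above can be checked arithmetically: the two improvements carry identical expected value, so any systematic difference in willingness to pay reveals the bias rather than rational valuation. A minimal sketch (the helper function is illustrative; the dollar amounts and probabilities follow the abstract's examples):

```python
def ev_gain(p_before, p_after, reward):
    """Expected-value gain from improving the chance of winning `reward`."""
    return (p_after - p_before) * reward

# A 1-percentage-point improvement on a $100 prize...
large_reward = ev_gain(0.89, 0.90, 100)
# ...and a 10-percentage-point improvement on a $10 prize
small_reward = ev_gain(0.80, 0.90, 10)

# Both improvements are worth exactly $1 in expectation, yet the
# studies find people pay more for the first.
```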
October 21 (Monday 11:35 AM – 12:50 PM, 4230 Attwood Classroom)
Tesary Lin (University of Chicago)
Paper: Valuing Intrinsic and Instrumental Preferences for Privacy
In this paper, I propose a framework for understanding why and to what extent people value their privacy. In particular, I distinguish between two motives for protecting privacy: the intrinsic motive, that is, a “taste” for privacy; and the instrumental motive, which reflects the expected economic loss from revealing one’s “type” specific to the transactional environment. Distinguishing between the two preference components not only improves the measurement of privacy preferences across contexts, but also plays a crucial role in developing inferences based on data voluntarily shared by consumers. Combining a two-stage experiment and a structural model, I measure the dollar value of revealed preference corresponding to each motive, and examine how these two motives codetermine the composition of consumers choosing to protect their personal data. The compositional differences between consumers who withhold and who share their data strongly influence the quality of firms’ inference on consumers and their subsequent managerial decisions. Counterfactual analysis investigates strategies firms can adopt to improve their inference: Ex ante, firms can allocate resources to collect personal data where their marginal value is the highest. Ex post, a consumer’s data-sharing decision per se contains information that reflects how consumers self-select into data sharing, and improves aggregate-level managerial decisions. Firms can leverage this information instead of imposing arbitrary assumptions on consumers not in their dataset.
October 15 (Tuesday 1:00 – 2:20 PM, 4230 Attwood Classroom)
Omid Rafieian (University of Washington)
Paper: Adaptive Ad Sequencing
Digital publishers often use real-time auctions to allocate their advertising inventory. These auctions are designed with the assumption that advertising exposures within a user’s browsing or app-usage session are independent. Rafieian (2019) empirically documents the interdependence in the sequence of ads in mobile in-app advertising, and shows that dynamic sequencing of ads can improve the match between users and ads. In this paper, we examine the revenue gains from adopting a revenue-optimal dynamic auction to sequence ads. We propose a unified framework with two components – (1) a theoretical framework to derive the revenue-optimal dynamic auction that captures both advertisers’ strategic bidding and users’ ad response and app usage, and (2) an empirical framework that involves the structural estimation of advertisers’ click valuations as well as personalized estimation of users’ behavior using machine learning techniques. We apply our framework to large-scale data from the leading in-app ad-network of an Asian country. We document significant revenue gains from using the revenue-optimal dynamic auction compared to the revenue-optimal static auction. These gains stem from the improvement in the match between users and ads in the dynamic auction. The revenue-optimal dynamic auction also improves all key market outcomes, such as the total surplus, average advertisers’ surplus, and market concentration.
October 4 (Friday 12:05 – 1:20 PM, 4230 Attwood Classroom)
Caio Waisman (Northwestern University)
Paper: Online Inference for Advertising Auctions
Advertisers that engage in real-time bidding (RTB) to display their ads commonly have two goals: learning their optimal bidding policy and estimating the expected effect of exposing users to their ads. Typical strategies to accomplish one of these goals tend to ignore the other, creating an apparent tension between the two. This paper exploits the economic structure of the bid optimization problem faced by advertisers to show that these two objectives can actually be perfectly aligned. By framing the advertiser’s problem as a multi-armed bandit (MAB) problem, we propose a modified Thompson Sampling (TS) algorithm that concurrently learns the optimal bidding policy and estimates the expected effect of displaying the ad while minimizing economic losses from potential sub-optimal bidding. Simulations show that the proposed method not only successfully accomplishes the advertiser’s goals, but also does so at a much lower cost than more conventional experimentation policies aimed at performing causal inference.
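The abstract does not spell out the modified algorithm; as background, standard Thompson Sampling for a Bernoulli multi-armed bandit, which the paper builds on, can be sketched as follows (the arm probabilities, round count, and function name are illustrative, not from the paper):

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Standard Bernoulli Thompson Sampling: keep a Beta posterior per
    arm, sample a plausible success rate from each posterior, and play
    the arm with the highest sample, so exploration fades naturally as
    the posteriors sharpen. Returns the pull count per arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # Beta(1, 1) uniform priors
    beta = [1] * n_arms
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample per arm from its current posterior
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Three hypothetical "bids" with unknown click-through rates
pulls = thompson_sampling([0.1, 0.5, 0.3])
```

The sampler concentrates its pulls on the best arm while still paying for some exploration; the paper's contribution, per the abstract, is adapting this exploration-exploitation trade-off to the economics of RTB auctions so that learning and causal estimation align.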
Zoe Lu (University of Wisconsin-Madison, PhD Student)
Paper: That money feels like mine: How a consumer-funded frame increases incentive effectiveness
Incentives are widely used in both the private and public sectors to motivate consumers toward certain behaviors. Fundamentally, most incentives can be seen as funded, at least partially, by money collected from the consumer (e.g., sales revenue, tax revenue, insurance premiums, university fees, etc.). However, incentive providers rarely, if ever, explicitly frame an incentive as being funded by a source the consumer paid into. This work examines when and why doing so may make an incentive more effective. Seven studies, covering diverse incentives and including two field experiments, demonstrate that linking an incentive to a source the consumer paid into makes a contingent incentive (i.e., an incentive receivable after completing a certain task) more effective and a non-contingent incentive (i.e., an incentive received before completing a certain task) less effective in stimulating the incentivized behavior. Implications related to sales promotions, government stimuli, healthcare interventions, and other domains are also discussed.
Kelley Gullo (Duke University PhD Candidate)
Paper: Is Choosing My Dog’s Treats Making Me Fat? The Effects of Choices Made for Others on Subsequent Choices for the Self
Consumers often make choices for others, but those choices do not occur in isolation. They are often intermixed with choices for themselves. The present research examines how goal-related choices made for others might influence subsequent goal-related choices made for oneself. Drawing on the sequential choice and interpersonal relations literatures, six studies examine the effect of choosing for others on choosing for the self across various types of relationships (children, pets, and friends) and goal domains (health and academic). The authors find that initial goal-related choices for others can affect subsequent goal-related choices for the self by influencing one’s own perceived goal progress. Specifically, when choosers have a close, non-competitive relationship with the other, making an initial virtuous (indulgent) choice for the other makes consumers more likely to make an indulgent (virtuous) subsequent choice for themselves (i.e., a balancing effect). However, this balancing effect is mitigated and, in some extreme cases, reversed when choosing for less close or more competitive others.
Alex Burnap (MIT Sloan Postdoctoral Fellow)
Paper: Design and Evaluation of Product Aesthetics: A Human-Machine Hybrid Approach
Aesthetics are critically important to market acceptance in many product categories. In the automotive industry in particular, an improved aesthetic design can boost sales by 30% or more. Firms invest heavily in designing and testing new product aesthetics. A single automotive “theme clinic” costs between $100,000 and $1,000,000, and hundreds are conducted annually. We use machine learning to augment human judgment when designing and testing new product aesthetics. The model combines a probabilistic variational autoencoder (VAE) with adversarial components from generative adversarial networks (GANs), along with modeling assumptions that address managerial requirements for firm adoption. We train our model with data from an automotive partner—7,000 images evaluated by targeted consumers and 180,000 high-quality unrated images. Our model predicts the appeal of new aesthetic designs well—a 38% improvement relative to a baseline and a substantial improvement over both conventional machine learning models and pretrained deep learning models. New automotive designs are generated in a controllable manner for the design team to consider, and we empirically verify that these designs are appealing to consumers. These results, combining human and machine inputs for practical managerial usage, suggest that machine learning offers significant opportunity to augment aesthetic design.
September 13 (Friday, 11:35 AM – 12:50 PM, Swersey Classroom 4430)
Jennifer Logg (Georgetown University)
Paper: Algorithm appreciation: People prefer algorithmic to human judgment
Even though computational algorithms often outperform human judgment, received wisdom suggests that people may be skeptical of relying on them (Dawes, 1979). Counter to this notion, results from six experiments show that lay people adhere more to advice when they think it comes from an algorithm than from a person. People showed this effect, which we call algorithm appreciation, when making numeric estimates about a visual stimulus (Experiment 1A) and forecasts about the popularity of songs and romantic attraction (Experiments 1B and 1C). Yet researchers predicted the opposite result (Experiment 1D). Algorithm appreciation persisted when advice appeared jointly or separately (Experiment 2). However, algorithm appreciation waned when people chose between an algorithm’s estimate and their own (versus an external advisor’s; Experiment 3) and when they had expertise in forecasting (Experiment 4). Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy. These results shed light on the important question of when people rely on algorithmic advice over advice from people and have implications for the use of “big data” and the algorithmic advice it generates.
Ayelet Fishbach (Yale School of Management Visiting Professor)
Paper: It’s About Time: Earlier Rewards Increase Intrinsic Motivation
Can immediate (vs. delayed) rewards increase intrinsic motivation? Prior research compared the presence versus absence of rewards. By contrast, this research compared immediate versus delayed rewards, predicting that more immediate rewards increase intrinsic motivation by creating a perceptual fusion between the activity and its goal (i.e., the reward). In support of the hypothesis, framing a reward from watching a news program as more immediate (vs. delayed) increased intrinsic motivation to watch the program (Study 1), and receiving a more immediate bonus (vs. a delayed bonus, Study 2; and vs. a delayed or no bonus, Study 3) increased intrinsic motivation in an experimental task. The effect of reward timing was mediated by the strength of the association between an activity and a reward, and was specific to intrinsic (vs. extrinsic) motivation—immediacy influenced the positive experience of an activity, but not perceived outcome importance (Study 4). In addition, the effect of the timing of rewards was independent of the effect of the magnitude of the rewards (Study 5).
David Godes (University of Maryland)
Paper: Extremity Bias in Online Reviews: A Field Experiment
In a range of studies across many platforms, submitted online ratings have been shown to follow a distribution with disproportionately heavy tails. These have been referred to as "u-shaped" or "j-shaped" distributions. Our focus in this paper is on understanding the underlying process that yields such a distribution. We develop a simple analytical model to capture the most common explanation: differences in utility associated with posting extreme vs. moderate reviews. We compare the predictions of this model with those of an alternative theory based on customers forgetting about writing a review over time. The forgetting rate, by assumption, is higher for moderate reviews. The two models yield stark theoretical differences in the predicted dynamics of extremity bias. To test our predictions, we conduct a large-scale field experiment with an online travel platform. In this experiment, we vary the time at which the platform sent out a review solicitation email. This manipulation allows us to observe the extremity dynamics over an extended period both before and after the firm's solicitation email. The results consistently support the forgetting-based explanation over the extant utility theory.
Stephan Seiler (Stanford)
Paper: The Impact of Soda Taxes: Pass-through, Tax Avoidance, and Nutritional Effects
We analyze the impact of a tax on sweetened beverages, often referred to as a “soda tax,” using a unique data-set of prices, quantities sold and nutritional information across several thousand taxed and untaxed beverages for a large set of stores in Philadelphia and its surrounding area. We find that the tax is passed through at a rate of 75-115%, leading to a 30-40% price increase. Demand in the taxed area decreases dramatically by 42% in response to the tax. There is no significant substitution to untaxed beverages (water and natural juices), but cross-shopping at stores outside of Philadelphia completely offsets the reduction in sales within the taxed area. As a consequence, we find no significant reduction in calorie and sugar intake.
Anthony Dukes (USC Marshall)
Paper: Interactive Advertising: The Case of Skippable Ads
The skippable ad format, commonly used by online content platforms, requires viewers to see a portion of an advertiser’s message before having the option to skip directly to the intended content and avoid viewing the entire ad. Under what conditions do viewers forgo this option, and what are its implications for advertisers and the platform? We develop a dynamic model of a viewer receiving incremental information from the advertiser. This model identifies conditions under which the viewer (i) skips the ad or (ii) engages with the advertiser. Our model incorporates the advertising market and assesses the implications of skippable ads for the platform’s profit and advertisers’ surplus. Relative to the traditional ad format, we find that there are unambiguously more advertisements and viewers on the platform with skippable ads. Under reasonable conditions, the skippable ad format is a strict Pareto improvement, raising the surplus of advertisers and the profit of the platform. The source of the additional surplus is that skippable ads allow viewers to use private information about the advertiser to make more efficient decisions about their ad viewing choices.
Günter J. Hitsch (Chicago Booth)
Paper: Generalizable and Robust TV Advertising Effects
We provide generalizable and robust results on the causal sales effect of TV advertising based on the distribution of advertising elasticities for a large number of products (brands) in many categories. Such generalizable results provide a prior distribution that can improve the advertising decisions made by firms and the analysis and recommendations of anti-trust and public policy makers. A single case study cannot provide generalizable results, and hence the marketing literature provides several meta-analyses based on published case studies of advertising effects. However, publication bias results if the research or review process systematically rejects estimates of small, statistically insignificant, or “unexpected” advertising elasticities. Consequently, if there is publication bias, the results of a meta-analysis will not reflect the true population distribution of advertising effects. To provide generalizable results, we base our analysis on a large number of products and clearly lay out the research protocol used to select the products. We characterize the distribution of all estimates, irrespective of sign, size, or statistical significance. To ensure generalizability, we document the robustness of the estimates. First, we examine the sensitivity of the results to the approach and assumptions made when constructing the data used in estimation from the raw sources. Second, as we aim to provide causal estimates, we document whether the estimated effects are sensitive to the identification strategies that we use to claim causality based on observational data. Our results reveal substantially smaller effects of own-advertising compared to the results documented in the extant literature, as well as a sizable percentage of statistically insignificant or negative estimates. If we only select products with statistically significant and positive estimates, the mean or median of the advertising effect distribution increases by a factor of about five.
The results are robust to various identifying assumptions, and are consistent with both publication bias and bias due to non-robust identification strategies to obtain causal estimates in the literature.
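The selection effect described in this abstract is easy to see in a small simulation. The sketch below is purely illustrative (the elasticity and noise parameters are invented, not taken from the paper): it draws noisy estimates around small true effects and compares the mean of all estimates to the mean of only those estimates that are positive and significant at the 5% level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of "true" advertising elasticities:
# small on average, many near zero (an assumption for
# illustration, not the paper's actual estimates).
true_elasticity = rng.normal(loc=0.01, scale=0.02, size=5000)

# Each study estimates its elasticity with sampling noise.
se = 0.02
estimate = true_elasticity + rng.normal(0.0, se, size=5000)

# "Publish" only estimates that are positive and significant
# at the 5% level (t = estimate / se, |t| > 1.96).
published = estimate[(estimate > 0) & (estimate / se > 1.96)]

inflation = published.mean() / estimate.mean()
print(f"mean of all estimates:       {estimate.mean():.4f}")
print(f"mean of published estimates: {published.mean():.4f}")
print(f"inflation factor:            {inflation:.1f}x")
```

With these invented parameters, conditioning on significance inflates the mean elasticity severalfold, which is the same qualitative pattern the abstract attributes to publication bias.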
Ethan Kross (University of Michigan)
Although we all have an inner monologue that we engage in from time to time, people often refer to themselves in strikingly different ways when they engage in this introspective process. Whereas people typically use 1st person singular pronouns to refer to themselves during introspection (e.g., Why am I feeling this way?), they at times use their own name and other non-1st-person pronouns as well (e.g., Why is Ethan feeling this way? or Why are you feeling this way?). In this talk I will review evidence from a multi-disciplinary line of research, which suggests that far from representing a simple quirk of speech or epiphenomenon, these linguistic shifts serve a powerful and potentially primitive self-control function, enhancing people’s ability to reason wisely and control their thoughts, feelings, and behaviors under stress.
Hal Hershfield (UCLA Anderson)
Paper: When does the present end and the future begin?
Through the process of prospection, people can mentally travel in time to summon in their mind’s eye events that have yet to occur. Such depictions of the future often differ from those of the present, as do choices made for these two time periods. Whether it concerns saving money, living healthily, or treating the environment in a sustainable way, many individuals have difficulty making trade-offs between the present and the future. These and other findings presuppose an identifiable division between the two: At some point in the progression of time, the present must yield to the future. Still, the field to date has offered little insight by way of defining the line that separates the present from the future. The basic scientific appeal and practical implications of prospection and intertemporal decision making raise two related questions: When do people believe that the present ends and the future begins, and do such perceptions affect decision-making? We examine these questions across five studies.
Specifically, we assess the distribution of such perceptions across people (Study 1), the reliability of those perceptions (Study 2), the extent to which those perceptions predict intertemporal choices (Studies 3a-b) beyond related constructs (Study 3b), and whether those perceptions can be manipulated to affect the decisions people make in both laboratory (Study 4) and field settings (Study 5). This research sheds light on a foundational but unexplored prerequisite for thinking and acting across time.
Bowen Ruan (University of Wisconsin-Madison)
Paper: The Pursuit of Mere Completion
We study a motivation for mere completion and distinguish it from a motivation to seek the desired end state of a goal. We show that such a motivation can lead to the one-away effect, whereby people counternormatively value something nearly complete (e.g., a loyalty card with all but one stamp) more than something complete (a loyalty card with all stamps). This is because when something is nearly complete, people anticipate the utility of mere completion and factor it into their valuation of the object, whereas when something is complete, they have already experienced this utility and no longer consider it when valuing the object. We discuss and test practical implications of the phenomenon.
Marissa Sharif (University of Pennsylvania)
Paper: Keeping Rewards Rewarding: Quota Piecemeal Incentives Lead to Greater Persistence
We find that people are less motivated over time to complete an incentivized task with both piecemeal incentive structures, in which smaller incentives are acquired gradually through repeated smaller efforts (e.g., $0.15 for each completed task), and lump sum incentive structures, in which larger incentives are acquired after a longer period of work (e.g., $15 after completing 100 tasks), because they enjoy pursuing the incentivized task less over time. We demonstrate that when people are repeatedly incentivized for completing a task through a piecemeal incentive structure, they satiate on the incentive, leading them to enjoy receiving the incentive less over time. Because enjoyment of receiving a reward can influence people’s enjoyment of the task, people enjoy completing the incentivized task less over time with piecemeal incentives, and persist less at the task as a result. With lump sum incentives, we find that people satiate on the prospect of receiving the incentive and thus experience less anticipatory enjoyment about receiving the incentive over time. We propose anticipatory enjoyment about a future incentive can also influence people’s enjoyment of the incentivized task; as a result, people also enjoy completing the task less over time with lump sum incentives, leading them to persist less over time. Further, we demonstrate how a quota-piecemeal incentive structure, in which participants have to reach a certain target before receiving piecemeal incentives (e.g., no incentive for the first 25 tasks, then $0.20 for every completed task afterwards), can reduce satiation to both the prospect of receiving the incentive and the incentive itself, leading to more enjoyment of the task over time and persistence compared to both piecemeal and lump sum incentive structures.
Anja Lambrecht (London Business School)
Paper: Spillover Effects and Freemium Strategy in Mobile App Market
“Freemium,” whereby a basic service level is provided free of charge but consumers are charged for more advanced features, has become a popular business model for firms selling digital goods. However, it is not clear whether the launch of a free version helps or hurts the sale of an existing paid version. The free version may allow consumers to sample the product before making a purchase decision and subsequently increase downloads of the paid version, but it may also cannibalize downloads of the paid version. We use a comprehensive data set on game apps from Apple’s App Store that tracks the launch of both the paid and free versions of individual apps at a daily level to identify whether a freemium strategy stimulates or hurts downloads of an existing paid version. We estimate the spillover effects between the free and paid versions of the same app, accounting for the endogeneity of the launch of the free version as well as app-level product heterogeneity. We find that the launch of a free version increases downloads of the paid version of the same app, and we present multiple robustness checks for these results. We then present suggestive evidence that the results are due to consumers sampling the free version and rule out alternative explanations.
Minkyung Kim (Yale School of Management)
Paper: A Structural Model of a Multi-tasking Salesforce with Private Information
We develop the first structural model of a multi-tasking salesforce. The model incorporates two novel features relative to the extant structural literature on salesforce compensation: (i) multi-tasking given a multi-dimensional incentive plan; and (ii) the salesperson's private information about customers. While the model is motivated by our empirical application, which uses data from a microfinance bank where loan officers are jointly responsible and incentivized for both loan acquisition and repayment, it is more generally adaptable to salesforce management in CRM settings focused on customer acquisition and retention. The compensation plan, which is multiplicative in acquisition and retention performance, induces contemporaneous complementarities that balance effort on both tasks and intertemporal effort dynamics that align the salesperson's incentives with the firm's, even in the presence of the salesperson's private information about customers. Our estimation strategy extends two-step estimation methods used for unidimensional compensation plans to the multi-tasking model with private information and intertemporal incentives. We use flexible machine learning (random forests) for the identification of private information and for the first-stage multi-tasking policy function estimation. Estimates reveal two latent segments of salespeople: a "hunter" segment that is more efficient at loan acquisition and a "farmer" segment that is more efficient at loan collection. Counterfactual analyses show (i) that joint responsibility for acquisition and collection leads to better outcomes for the firm than specialized responsibilities, even when salespeople are matched with their more efficient tasks; and (ii) that aggregating performance on multiple tasks using an additive function leads to substantial adverse specialization by "hunters," who specialize in acquisition at the expense of the firm, compared to the multiplicative form used by the firm.
Elizabeth Friedman (Yale School of Management)
Paper: Apples, Oranges and Erasers: The Effect of Considering Similar versus Dissimilar Alternatives on Purchase Decisions
When deciding whether to buy an item, consumers sometimes think about other ways they could spend their money. Past research has explored how increasing the salience of outside options (i.e., alternatives not immediately available in the choice set) influences purchase decisions, but whether the type of alternative considered systematically affects buying behavior remains an open question. Ten studies find that relative to considering alternatives that are similar to the target, considering dissimilar alternatives leads to a greater decrease in purchase intent for the target. When consumers consider a dissimilar alternative, a competing non-focal goal is activated, which decreases the perceived importance of the focal goal served by the target option. Consistent with this proposed mechanism, the relative importance of the focal goal versus the non-focal goal mediates the effect of alternative type on purchase intent, and the effect attenuates when the focal goal is shielded from activation of competing goals.
September 25 (Tuesday, 11:35 AM – 12:50 PM, Atwood Classroom 4230)
Kate Barasz (Harvard Business School)
Paper: I Know Why You Voted for Trump: (Over)inferring Motives Based on Choice
People often speculate about why others make the choices they do. This paper investigates how such inferences are formed as a function of what is chosen. Specifically, when observers encounter someone else’s choice (e.g., of political candidate), they use the chosen option’s attribute values (e.g., a candidate’s specific stance on a policy issue) to infer the importance of that attribute (e.g., the policy issue) to the decision-maker. Consequently, when a chosen option has an attribute whose value is extreme (e.g., an extreme policy stance), observers infer—sometimes incorrectly—that this attribute disproportionately motivated the decision-maker’s choice. Seven studies demonstrate how observers use an attribute’s value to infer its weight—the value-weight heuristic—and identify the role of perceived diagnosticity: more extreme attribute values give observers the subjective sense that they know more about a decision-maker’s preferences, and in turn, increase the attribute’s perceived importance. The paper explores how this heuristic can produce erroneous inferences and influence broader beliefs about decision-makers.
October 2 (Tuesday, 11:35 AM – 12:50 PM, Lin Tech., Classroom 4400)
Artem Timoshenko (MIT)
Paper: Identifying Customer Needs from User-generated Content
Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGC-based customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable a source of customer needs for product development as conventional methods, and likely more valuable, and (2) machine-learning methods improve the efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).
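As a rough illustration of the two-stage pipeline the abstract describes, the sketch below filters out non-informative sentences with a supervised classifier and then clusters the remainder so a reviewer samples one sentence per cluster. TF-IDF features, logistic regression, and k-means are simple stand-ins for the paper's convolutional network and dense sentence embeddings, and the toy sentences and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Tiny invented training set: 0 = non-informative, 1 = states a need.
labeled = [
    ("great product would buy again", 0),
    ("five stars", 0),
    ("love it", 0),
    ("the brush head is too large for my mouth", 1),
    ("bristles fray after two weeks of use", 1),
    ("hard to reach my back teeth with this handle", 1),
]
texts, labels = zip(*labeled)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Stage 1: keep only sentences classified as informative.
corpus = [
    "awesome", "the bristles wear out quickly",
    "handle is slippery when wet", "perfect",
    "cannot reach molars easily", "love love love",
]
keep = [s for s, y in zip(corpus, clf.predict(vec.transform(corpus))) if y == 1]

# Stage 2: cluster the informative sentences; a reviewer reads
# one sentence per cluster to avoid repetitive content.
k = min(2, len(keep))
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vec.transform(keep))
sample = {c: s for s, c in zip(keep, km.labels_)}  # one sentence per cluster
print(sorted(sample.values()))
```

The design point the abstract makes is that both stages reduce reviewer cost: the filter removes sentences with no need content, and the clustering step keeps the reviewed sample diverse.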
Rachel Gershon (Washington University in St. Louis)
Paper: The Reputational Benefits and Material Burdens of Prosocial Referral Incentives
Selfish incentives typically outperform prosocial incentives. However, this research identifies a context where prosocial incentives are more effective: customer referral programs. Companies frequently offer “selfish” incentives to customers who refer, incentivizing those customers directly for recruiting friends. However, companies can alternatively offer “prosocial” incentives that reward the referred friend instead. In multiple field and incentive-compatible experiments, this research finds that prosocial referrals, relative to selfish referrals, result in more new customers. This pattern occurs for two reasons. First, at the referral stage, customers expect to receive reputational benefits when making prosocial referrals within their social network, thereby boosting performance of prosocial referrals. Second, at the uptake stage, the burden of signing up is high, and therefore referral recipients prefer to receive an incentive themselves. Due to the combination of reputational benefits at the referral stage and action burdens at the uptake stage, prosocial referrals yield more new customers overall. The high frequency of selfish referral offers in the marketplace suggests these forces play out in ways that are unanticipated by marketers who design incentive schemes.
Melanie Brucks (Stanford)
Paper: The Creativity Paradox: Soliciting Creative Ideas Undermines Ideation
When developing product ideas and original marketing content, firms and marketers often organize ideation activities to harvest a rich set of new ideas. We explore a popular paradigm for guiding these activities—explicitly requesting creative ideas—in the context of consumer idea generation contests. We demonstrate that this common practice can paradoxically undermine ideation, decreasing the total number of novel ideas that contestants generate (i.e., ideas rated as surpassing the threshold of average novelty). A single paper meta-analysis across six incentive-compatible ideation contests on different products (toy, office supply, toiletry, and mobile app) involving close to 2,000 contestants estimated that soliciting creative ideas resulted in 1.49 fewer novel ideas per contestant, which amounted to a 20% decrease in productivity and a loss of 500 unique novel ideas in total. This productivity loss occurs because soliciting creative ideas prompts people to self-impose a high standard, which leads to a unique cognitive process that restrains (instead of expands) their thinking. This research also offers important solutions for marketers to ensure the productivity of ideation and fuel innovation.
October 15 (Monday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)
Andrew Rhodes (Toulouse School of Economics)
Paper: Repeat Consumer Search
Consumers often buy products repeatedly over time, but need to search for information on firms' prices and product features. We characterize the optimal search rule in such an environment, and show that it may be optimal to 'stagger' search over time. Specifically, consumers may search intensively upon entering the market, buy several different products early on, settle on one for a period of time, but then after a while restart their search process. We also solve for equilibrium prices, and relate both their level and dispersion to the market search cost. We consider various extensions, including the case where firms may offer discounts to past customers.
Kalinda Ukanwa (University of Maryland)
Paper: Discrimination in Service
The goal of this research is threefold: 1) to uncover the mechanism by which service discrimination can emerge from seemingly rational service policy; 2) to investigate how service discrimination interacts with competition and consumer word-of-mouth to affect profits; 3) to help firms avoid losing profits due to discrimination. By employing a mixed-methods approach, the authors find that variation in consumer quality (i.e., their profitability to the firm) and measurement error in detecting consumer quality moderate the magnitude of service discrimination. Large error in measuring consumer quality exacerbates service discrimination, while large intra-group variation in consumer quality attenuates discrimination. This research shows that discrimination can be profitable in the short-run but can backfire in the long-run.
Agent-based modeling demonstrates that non-discriminatory service providers, on average, earn higher long-term profits than discriminatory service-providers, in spite of a seeming short-term profit advantage for discrimination. The authors provide managerial recommendations on reducing service discrimination’s profit-damaging effects. This research emphasizes the long-term benefits of switching to a service policy that does not use group identity information. However, for firms that must persist in using group identity information, this research has additional recommendations, which include increasing investment in methods of measurement error reduction and increasing exposure to consumers of different populations. By doing so, a firm could reduce service discrimination while improving its long-term profits and societal well-being.
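A toy agent-based sketch can make the measurement-error mechanism concrete. Everything below is invented for illustration (the quality distribution, the shrinkage rule, and the group-mean beliefs are assumptions, not the authors' model): two groups have identical quality distributions, but a firm that shrinks a noisy individual signal toward its group-level belief serves the groups at increasingly different rates as measurement error grows.

```python
import random

random.seed(1)

def serve(quality, group_belief, noise_sd, w):
    """Serve if a shrinkage score of the noisy signal clears a threshold."""
    signal = quality + random.gauss(0, noise_sd)
    score = w * signal + (1 - w) * group_belief
    return score > 0.5

def service_rate(group_belief, noise_sd, n=20000):
    # Stylized rule: the noisier the individual signal, the less
    # weight it gets, so the group-level belief matters more
    # (the channel through which discrimination emerges).
    w = 1.0 / (1.0 + noise_sd ** 2)
    served = sum(
        serve(random.gauss(0.5, 0.2), group_belief, noise_sd, w)
        for _ in range(n)  # same true-quality distribution in every group
    )
    return served / n

gaps = {}
for noise in (0.1, 0.5, 1.0):
    rate_a = service_rate(group_belief=0.6, noise_sd=noise)  # favored belief
    rate_b = service_rate(group_belief=0.4, noise_sd=noise)  # disfavored belief
    gaps[noise] = rate_a - rate_b
    print(f"measurement noise {noise}: service-rate gap {gaps[noise]:.3f}")
```

Consistent with the finding that large measurement error exacerbates service discrimination, the gap between groups widens as noise grows, even though individual quality is identically distributed.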
October 25 (Thursday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)
Emma Levine (Chicago Booth)
Paper: Conflicts between honesty and benevolence
Although honesty is typically conceptualized as a virtue, it often conflicts with other equally important moral values, such as benevolence. In this talk, I explore how communicators and targets navigate conflicts between honesty and benevolence. In the first half of the talk, I demonstrate that communicators and targets make egocentric moral judgments of deception. Specifically, communicators focus more on the costs of deception to themselves (for example, the guilt they feel when they break a moral rule), whereas targets focus more on whether deception helps or harms them. As a result, communicators and targets make asymmetric judgments of prosocial lies of commission and omission. Communicators often believe that omitting information is more ethical than telling a prosocial lie, whereas targets often believe the opposite. Consequently, communicators miss opportunities to provide targets with comfort and care across a wide range of domains, including health and employment discussions. In the second half of the talk, I present a series of economic game experiments demonstrating that communicators’ focus on following a rule of honesty leads them to avoid information about the social consequences of their communication. Taken together, this research provides new insight into the causes and consequences of lie aversion, and the mechanisms that allow people to harm others while feeling moral.
October 29 (Monday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)
Shunyuan Zhang (Carnegie Mellon)
Paper: Can Lower-quality Images Lead to Greater Demand on AirBnB?
We investigate how AirBnB hosts make decisions on the quality of property images to post. Prior literature has shown that the images play the role of advertisements and that their quality has a strong impact on the present demand for the property: compared to lower-quality amateur images, high-quality professional images can increase present demand by 14.3% on matched samples (Zhang et al. 2018). However, the reality is that a large number (approximately two-thirds) of images on AirBnB are amateur (low-quality). One possible explanation is that high-quality images are costly for the hosts, as most of them are amateur photographers. However, this does not completely explain the result: in 2011, AirBnB started offering the highest-quality professional images for free to all hosts, sending its professional photographers to the property to shoot, process, and post the photos for the hosts. To AirBnB’s surprise, only 30% of hosts used the AirBnB professional photography program. We posit that the host’s decision on what quality of images to post depends not only on the advertising impact of images on present demand and on the cost of images, but also on the impact of images on future demand. Thus, some hosts would be hesitant to post high-quality images because these can create unrealistically high expectations among guests, especially if the actual property is not as good as what the images portray and if the hosts are unable to provide a level of service that matches those expectations. Guest satisfaction would then decrease, and dissatisfied guests would leave a bad review or not write any review at all; since the number and quality of reviews is one of the key drivers of new bookings, this would adversely affect future demand.
In this paper, we attempt to disentangle the aforementioned factors that influence the host’s decision on the type of photographs to post, and we explore policies that AirBnB can employ to improve hosts’ adoption of professional photos and thereby improve the profitability of both the hosts and AirBnB. To do so, we build a structural model of demand and supply, where the demand side entails modeling guests’ decisions on which property to stay at, and the supply side entails modeling hosts’ decisions on what quality of images to post and what level of service to provide in each period. We estimate our model on a unique one-year panel dataset consisting of a random sample of 958 AirBnB properties in Manhattan (New York City), in which we observe hosts’ monthly choices of the quality of images posted and the level of service they provided. Our key findings are: First, guests who pay more attention to images tend to care more about reviews, revealing an interesting trade-off for hosts. Second, hosts incur considerable costs for posting above-average-quality images. Third, hosts are heterogeneous in their ability to invest service effort. In counterfactual analyses, we simulate AirBnB properties assuming they all start in the entry state with low-level images. We then compare the impact on average property demand of the current policy (offering free high-level images to hosts) and of a proposed policy (offering free medium-level images to hosts). We show that the proposed policy, though dominated by the current policy in the short run (for the first four periods), outperforms the current policy in the long run (7.6% vs. 12.4%). The interpretation is that medium-level images, despite generating lower expected utility for consumers than high-level images, have a greater effect on property demand in the long run because, with a lower risk of creating a dissatisfaction gap, they help hosts obtain new reviews.
Moreover, individual hosts who might end up using amateur (low-level) images to avoid the dissatisfaction gap under the current policy instead use free medium-level images, and earn more revenue, under the proposed policy. In the second counterfactual, we explore an alternative policy in which AirBnB offers a menu of image-quality choices for free. The menu includes both high- and medium-level property images (image examples are provided) and allows hosts to self-select the program they want. Compared with the proposed policy in the first simulation, we find that this policy performs best in the long run, improving average property demand by 16.2%.
November 1 (Thursday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)
Kaitlin Woolley (Cornell)
Paper: The Dissimilarity Bias: The Effect of Dissimilarity on Goal Progress Perceptions and Motivation
Perceived goal progress directly influences consumer goal pursuit, yet determinants of progress perceptions are largely unexamined. Six studies identified a novel factor that biases consumers’ perceptions of goal progress: categorization. When pursuing a goal, consumers categorize completed and remaining actions that are dissimilar into separate units, anchoring their estimates of progress on the proportion of units completed versus remaining, rather than on the absolute amount of progress made. On the other hand, consumers categorize completed and remaining actions that are similar together into the same unit, anchoring their estimates of goal progress closer to the objective amount of progress made versus remaining. As a result, dissimilarity (vs. similarity) between completed and remaining actions leads consumers to infer greater progress when they are farther from their goal, and to infer less progress when they are closer to their goal. This research documents this bias for behaviors ranging from exercising to studying and demonstrates implications for motivation and goal attainment in incentive-compatible contexts.
November 2 (Friday, 11:35 AM – 12:50 PM, Jones Classroom 2220)
Paper: Peeking into the On-Demand Economy
Today, an increasing number of digital platforms have emerged to match customers, almost in real time, with a potentially global pool of freelancers, leading to the rise of the on-demand economy. In addition to creating new dynamics of labor allocation, the on-demand economy has also led to new models of computation—it has enabled human-in-the-loop computing—and new forms of knowledge creation: people all over the world are contributing to scientific studies in dozens of fields, either by making scientific observations as amateur scientists or by participating in online experiments as subjects. Despite its already significant impact, the on-demand economy is still treated as a black-box approach to soliciting labor from a crowd of on-demand workers. Little is known about these workers and their aggregate behavior. In this talk, using crowdsourcing Internet marketplaces as an example, I present my attempts and findings on opening up this black box with a combination of experimental and computational approaches, focusing on understanding who the on-demand workers are, how to model their unique working behavior, and how to improve their work experience.
November 5 (Monday, 11:35 AM – 12:50 PM, Gould Classroom 4220)
Abigail Jacobs (UC Berkeley)
Paper: Adoption and diffusion in populations of networks
In-depth studies of sociotechnical systems primarily focus on single instances: network surveys are expensive, and platforms vary in important ways. With single examples, we cannot in general know how much of observed network structure is explained by historical accidents, random noise, or meaningful social processes. Leveraging settings where we have, or can cleverly construct, multiple instances of a network, the comparative approach makes previously untested theories testable. In this talk I present two projects where we can leverage this comparative perspective: first, I show how adoption and online behavior in Facebook’s first two years varied with the social contexts of college students; second, I show how demand for prescription opioids increases through social connections. I show how this approach can be used to infer patterns of adoption, diffusion, and the emergence of norms in sociotechnical systems, and how the structure of sociotechnical systems can encode and reinforce these processes.
November 29 (Thursday, 11:35 AM – 12:50 PM, Jones Classroom 2220)
Rima Touré-Tillery (Kellogg)
Paper: You Won’t Remember This: The Effect of Expected Forgetting on Self-Control
Expected forgetting refers to people’s beliefs that they will be unable to bring to mind in the future something they are seeing, doing, or experiencing in the present. Because autobiographical memories are the foundation of the self-concept, we propose an action or choice people expect to forget will seem less consequential for their self-concept (i.e., less self-diagnostic) than one for which they have no such expectation. In turn, we expect these differential perceptions of self-diagnosticity to influence self-control—that is, the ability to subdue impulses in the face of temptation, in order to achieve longer-term goals. In five studies, we find that when expected forgetting is high (vs. control), people are less likely to exercise self-control when making food choices (Study 1) and prosocial decisions (Study 2), because they perceive their actions as less self-diagnostic (Studies 3–5).
Eddie Ning (Berkeley)
Paper: How to Make an Offer? A Stochastic Model of the Sales Process
Many firms rely on salespersons to communicate with prospective customers. Such person-to-person interaction allows for two-way discovery of product fit and flexibility on price, which are particularly important for business-to-business transactions. In this paper, I model the sales process as a game in which a buyer and a seller discover their match sequentially while bargaining for price. The match between the product’s attributes and the buyer’s needs is revealed gradually over time. The seller can make price offers without commitment, and the buyer decides whether to accept or wait. Players incur flow costs and can leave at any moment. The discovery process creates a hold-up problem for the buyer that causes him to leave too early and results in inefficient no-trades. This can be alleviated by the use of a list price that puts an upper bound on the seller’s offers. A lower list price encourages the buyer to stay while reducing the seller’s bargaining power. But in equilibrium the players always reach agreement at a discounted price. The model thus provides a novel rationale for the pattern of “list price - discount” observed in sales. I examine whether the seller should commit to a fixed price or allow bargaining. When the seller’s flow cost is high relative to the buyer’s, both players are willing to participate in discovery if and only if bargaining is allowed. In such a case, bargaining leads to a Pareto improvement, which explains the prevalent use of bargaining in sales. If the buyer has private information on his outside option, the model predicts that, counter-intuitively, the buyer with a higher net value for the product pays a lower price. The paper expands the bargaining literature by adding a discovery process that introduces a hold-up problem as well as making the product value stochastic.
Paper: The Impostor Syndrome from Luxury Consumption
The present research proposes that luxury consumption can be a double-edged sword: while luxury consumption yields status benefits, it can also make consumers feel inauthentic, producing the paradox of luxury consumption. Feelings of inauthenticity from luxury consumption emerge due to the perceived dominance of extrinsic motivation over intrinsic motivation for consuming luxury. This phenomenon is more pronounced among consumers with low levels of psychological entitlement. It is moderated by conspicuousness of the product and consumption, and by the perceived malleability of cultural capital.
Sydney Scott (Washington Univ. at St. Louis)
Paper: Consumers Prefer “Natural” More for Preventatives than for Curatives
We demonstrate that natural product alternatives are more strongly preferred when used to prevent a problem than when used to cure a problem. This organizing principle explains variation in the preference for natural across distinct product categories (e.g., food vs. medicine), within product categories (e.g., between different types of medicines), and for the same product depending on how it is used (to prevent or to cure ailments). Contrary to prior work which finds that natural products are judged as superior on all dimensions, the prevent-cure difference in the natural preference occurs because natural products are viewed as safer but less potent than synthetic alternatives. Importantly, consumers care relatively more about safety than potency when preventing as compared to when curing, which leads to a stronger natural preference when preventing. Consistent with this explanation, when natural alternatives are described as more risky and more potent, reversing the standard inferences about naturalness, then natural alternatives become more preferred for curing than for preventing. This research sheds light on when the marketing of “natural” is most appealing to consumers.
February 13 (Tuesday, 11:30 AM – 12:45 PM, Gould Classroom 4220)
Nikhil Naik (Harvard)
Paper: Understanding Predictors of Physical Urban Change Using 1.6 Million Street View Images
For more than a century, economists and sociologists have advanced theories connecting the dynamics of cities’ physical appearances to their location, demographics, and built infrastructure. However, research has been limited by a lack of high-throughput methods to quantify urban appearance and change. In this paper, I introduce a machine learning (ML) algorithm that evaluates urban appearance and change from time-series Google Street View images. The ML algorithm—trained with aggregate perception of thousands of online users from a crowdsourced game—computes Streetchange, a metric for the change in appearance of buildings and streets between two images of the same location captured several years apart. The algorithm is robust to changes in weather, vegetation, and traffic across images. Validation studies show strong agreement between Streetchange and urban change estimates obtained from human experts and administrative data. Using Streetchange, I calculate physical change between 2007–2014 for 1.65 million street blocks from five major American cities. My collaborators and I use aggregate census tract-level Streetchanges between 2007–2014 and census data from 2000 to determine the socioeconomic predictors of physical urban growth. We find that a dense population of highly educated individuals—rather than housing costs, income, or ethnic makeup—is the best predictor of future neighborhood growth, an observation that is compatible with economic theories of human capital agglomeration. In addition, neighborhoods with better initial appearances and physical proximity to the central business district experience more substantial upgrading. Together, our results underscore the importance of human capital and education in defining urban outcomes and illustrate the value of using visual data and machine learning in combination with econometric methods to answer economic questions.
February 15 (Thursday, 10:30 AM – 11:45 AM, Gould Classroom 4220)
Paper: Deep Learning and Deep Trust: What Machine Learning Can Teach Us About Trust and Persuasion in the Digital Age
Whether purchasing a product, voting for the president of the United States, or embarking on an entrepreneurial venture, establishing trust with the individuals and firms that we engage with is a precondition for providing them with our time and our resources. Accordingly, understanding how firms establish trust among consumers and how politicians establish trust among their constituents can teach us about one of the foundational aspects of persuasion and influence. Deep learning methods, through an ability to link abstract concepts, such as trust, to features of images and texts, provide us with an unprecedented opportunity to understand how political figures and firms establish trust among the public. In this talk, I will present a paper entitled “Political image analysis with deep neural networks” which employs deep learning methods and a database of 296,460 Facebook photos to gain insights about how politicians in the U.S. Congress establish trust among their constituents through the symbols, objects and people that they present in the images that they post on social media.
Amin Sayedi (Univ. of Washington)
Paper: Learning Click-through Rates in Search Advertising, joint work with J. Choi (Columbia)
Prior literature on search advertising primarily assumes that search engines know advertisers' click-through rates, the probability that a consumer clicks on an advertiser's ad. This information, however, is not available when a new advertiser starts search advertising for the first time. In particular, a new advertiser's click-through rate can be learned only if the advertiser's ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since search engines use advertisers' expected click-through rates when calculating payments and allocations, the lack of information about a new advertiser can affect new and existing advertisers' bidding strategies. In this paper, we use a game theory model to analyze advertisers' strategies, their payoffs, and the search engine's revenue when a new advertiser joins the market. Our results indicate that a new advertiser should always bid higher (sometimes above its valuation) when it starts search advertising. However, the strategy of an existing advertiser, i.e., an incumbent, depends on its valuation and click-through rate. A strong incumbent increases its bid to prevent the search engine from learning the new advertiser's click-through rate, whereas a weak incumbent decreases its bid to facilitate the learning process. Interestingly, we find that, under certain conditions, the search engine benefits from not knowing the new advertiser's click-through rate because its ignorance could induce the advertisers to bid more aggressively. Nonetheless, the search engine's revenue sometimes decreases because of this lack of information, particularly, when the incumbent is sufficiently strong. We show that the search engine can mitigate this loss, and improve its total profit, by offering free advertising credit to new advertisers.
February 23 (Classroom 4430)
Ovul Sezer (UNC)
Paper: Misguided Self-presentation: When and why humblebragging and backhanded compliments backfire
The ability to present oneself effectively to others is one of the most essential skills in social life. Consumers constantly make decisions to prompt favorable impressions in others, as countless critical rewards depend on others’ impressions. In this research, I identify unexamined yet ubiquitous self-presentation strategies—humblebragging and backhanded compliments—that people use in an effort to manage the delicate balancing act of self-presentation. Using datasets from social media and diary studies, I document the ubiquity of these strategies in real life across several domains. In laboratory and field experiments, I simultaneously examine the underlying motives for these self-presentation strategies and others’ perceptions of these strategies—allowing for an analysis of their efficacy—as assessed by the opinions targets hold of the would-be self-presenter. I provide evidence from both lab and field to show that humblebragging and backhanded compliments backfire, because they are seen as insincere.
March 6 (Tuesday, 11:35 AM – 12:50 PM, Gould Classroom 4220)
Allison Chaney (Princeton Post-doc)
Paper: The Social Side of Recommendation Systems: How Groups Shape Our Decisions
Recommendation systems occupy an expanding role in everyday decision making, from choice of movies and household goods to consequential medical and legal decisions. This talk will explore a sequence of work related to recommending decisions for people to take. First we will examine the results of a large-scale study of television viewing habits, focusing on how individuals adapt their preferences when consuming content with others. Next, we will leverage our insights about the social behavior of individuals to incorporate social network information into a model for providing personalized recommendations. Finally, we will consider the impacts of recommendation algorithms like these on human choices and the homogeneity of group behavior.
April 13 (Classroom 4400, Lin Tech.)
Harikesh Nair (Stanford)
Paper: Modern Data Driven eCommerce: JD.com in China
I provide an overview and discussion of data-driven eCommerce businesses, with a focus on China. I will also discuss applications of data science at JD.com, China's second-largest eCommerce firm.
Brad Shapiro (Univ. of Chicago)
Paper: Promoting Wellness or Waste? Evidence from Antidepressant Advertising
Direct-to-Consumer Advertising (DTCA) of prescription drugs is controversial and has ambiguous potential welfare effects. In this paper, I estimate costs and benefits of DTCA in the market for antidepressant drugs. In particular, using individual health insurance claims and human resources data, I estimate the effects of DTCA on outcomes relevant to societal costs: new prescriptions, prices and adherence. Additionally, I estimate the effect of DTCA on labor supply, the economic outcome most associated with depression. First, category expansive effects of DTCA found in past literature are replicated, with DTCA particularly causing new prescriptions of antidepressants. Additionally, I find evidence of no advertising effect on the prices or co-pays of the drugs prescribed or on the generic penetration rate. Next, lagged advertising is associated with higher first refill rates, indicating that advertising-marginal patients are not more likely to end treatment prematurely due to initial adverse effects. Despite first refill rates being higher for those that are more likely advertising-marginal, concurrent advertising drives slightly lower refill rates overall, particularly among individuals who stand to gain least from treatment. Finally, advertising significantly decreases missed days of work, with the effect concentrated on workers who tend to have more absences. Back-of-the-envelope calculations suggest that the wage benefits of the advertising-marginal work days are more than an order of magnitude larger than the total cost of the advertising-marginal prescriptions.
September 8 (11:15 AM – 12:30 PM)
Shai Davidai (New School)
Paper: The second pugilist’s plight: Why people believe they are above average, but are not especially happy about it
People’s tendency to rate themselves as above average is often taken as evidence of unrealistic optimism and undue self-regard. Yet, everyday experience is punctuated by feelings of inadequacy and insecurity. How can these two experiences be reconciled? We suggest that people hold two complementary beliefs: that they are above average and, at the same time, that the average is not a relevant or meaningful standard of comparison. In seven studies (N = 1,566), we find that people look to others who are above average as a benchmark for social comparison (Studies 1A and 1B), that “the average person” is not even considered a relevant low standard of comparison, and that despite perceiving themselves as better than average, people don’t see themselves as better than their self-chosen standards of comparison (Study 2). We show that this is due to increased cognitive accessibility of above-average standards of comparison (Studies 3A and 3B), and that people hold themselves to such high standards even when recalling instances of personal inadequacy (Study 4A) or when they expect their own performance to be lower than average (Study 4B). We discuss the implications for self-enhancement research and the importance of examining who people compare themselves to, in addition to how people believe they compare to others.
September 12 (Tuesday, 11:45 AM – 1:00 PM, Gould Classroom 4220)
Liang Guo (CUHK)
Paper: Testing the Role of Contextual Deliberation in the Compromise Effect
One of the most well-known examples of preference construction is the compromise effect. This puzzling anomaly can be rationalized by contextual deliberation (i.e., information-acquisition activities that can partially resolve utility uncertainty before choice). In this research we investigate the theoretical robustness and empirical validity of this explanation. We conduct five experiments with more than one thousand participants, and show that the compromise effect can be positively mediated by response time, cannot be mitigated by context information, but can be moderated by manipulating the level of deliberation (i.e., time pressure, preference articulation, task order). These findings are consistent with the predictions of the theory of contextual deliberation.
Eleanor Putnam-Farr (SOM Post-doc)
Paper: Balancing Promise with Performance: Field Testing Optimal Strategies for Maximizing Ongoing Participation
In order to be successful, marketers need to both attract and retain customers, but firms often focus disproportionately on maximizing acquisition (e.g., enrollment) because the outcome is relatively easy to measure and understand. However, techniques that maximize enrollment may have differing effects on performance and ongoing participation. Marketers who want to maximize ongoing participation need to balance the value that comes from maximizing initial interest (often by generating high expectations) against the potential cost that comes from making overly high claims that result in dissatisfaction. This is made even more challenging in domains where customer participation determines the ultimate “product” performance, and thus maximizing initial interest may actually increase motivation and subsequent performance. Using high headline numbers (i.e., explicit and prominent quantification of maximum potential benefits) may attract participants and set higher targets that result in higher performance, but it might also lead to lower satisfaction and program abandonment. In a large-scale field experiment (N = 8,918) and multiple follow-up experiments, we explore this tension and show that 1) consumers do use headline numbers as anchors for program expectations, but 2) may separate program expectations from personal performance targets. This results in 3) lower ongoing participation from consumers who were shown a high headline number rather than piece-rate reward language during recruitment, but 4) these results can be moderated by focusing people on realistic performance or more achievable results.
September 19 (Tuesday, 11:30 AM – 12:30 PM, Attwood Classroom 4230)
Jungju Yu (SOM Doctoral Candidate)
Paper: A Model of Brand Architecture Choice: A House of Brands vs. A Branded House
Some firms use different brands for distinct product categories, while others use the same brand. In this paper, we study for which product markets a firm should use the same brand. To answer this question, we propose a market-relatedness framework to conceptualize the relationship between markets along supply-side (similar production technology) and demand-side (similar target customers) dimensions. We apply this framework to a model of reputation with features of both adverse selection and moral hazard to analyze the interaction between information spillover across markets and investment incentives.
We show that umbrella branding is optimal if product markets are closely related on either supply-side or demand-side. However, surprisingly, we find that independent branding is optimal if product markets are closely related on both supply-side and demand-side. We also provide implications for customer relationship management and innovation for quality in each product market.
September 22 (Gould Classroom 4220)
Ellen Evers (UC Berkeley)
Paper: Elicitation Based Preference Reversals in Willingness to Pay and Choice
A key assumption in the empirical application of rational decision theory is that of procedure invariance; preferences are independent of how they are elicited. This means that measuring preferences among a bundle of goods should yield the same ordinal rankings, regardless of whether preferences are measured using a valuation strategy (e.g., willingness-to-pay) or choices. In 13 studies we demonstrate violations of procedure invariance, such that consumers more strongly prefer affective over functional goods in choices as compared to willingness-to-pay. These preference reversals result from a combination of two processes. The first is that decision-makers are more likely to rely on affect when making a choice. The second is that they place a relatively greater weight on both the long-term value of a product and its necessity when indicating their willingness to pay, while placing a relatively greater weight on instant gratification and wants when making choices. Contrary to the necessary empirical assumption that preferences are consistent across measurements, we find that participants treat different measurement techniques as entirely different situations.
September 28 (Thursday, Gould Classroom 4220)
Uri Gneezy (Columbia University)
Paper: Using Violations of the Fungibility Principle to Increase Incentive Effectiveness
A small change to an incentive structure can have a dramatic impact—positive or negative—on outcomes. We suggest a way to increase the perceived value of the incentives without increasing the budget used. This increase relates to the observation that separate mental accounts violate the economic principle of fungibility by which money in one mental account offers a perfect substitute for money in another account. We propose that targeting incentives to a specific, highly desirable mental account could increase the effectiveness of the intervention.
We test this targeting hypothesis in a behavioral intervention aimed at increasing walking among taxi drivers in Singapore. Participants received a reward for each of three months in which they met a physical activity goal. In one treatment, the reward was cash. In the targeted treatment, we used the same amount of money, but associated the reward with a specific aversive cost for these drivers—paying for the daily rent of the taxi. We find that cash motivated people to be more active, but, as predicted, associating the reward with the aversive expense was more effective. Importantly, this difference remained after the incentives were stopped, resulting in a stronger habit for the participants in the targeted incentives treatment.
October 12 (Thursday, Isaacson Classroom 4210)
Ryan Dew (Columbia)
Paper: Bayesian Nonparametric Customer Base Analysis with Model-based Visualizations
Marketing managers are responsible for understanding and predicting customer purchasing activity, a task that is complicated by a lack of knowledge of all of the calendar time events that influence purchase timing. Yet, isolating calendar time variability from the natural ebb and flow of purchasing is important, both for accurately assessing the influence of calendar time shocks to the spending process, and for uncovering the customer-level patterns of purchasing that robustly predict future spending. A comprehensive understanding of purchasing dynamics therefore requires a model that flexibly integrates both known and unknown calendar time determinants of purchasing with individual-level predictors such as interpurchase time, customer lifetime, and number of past purchases. In this paper, we develop a Bayesian nonparametric framework based on Gaussian process priors, which integrates these two sets of predictors by modeling both through latent functions that jointly determine purchase propensity. The estimates of these latent functions yield a visual representation of purchasing dynamics, which we call the model-based dashboard, that provides a nuanced decomposition of spending patterns. We show the utility of this framework through an application to purchasing in free-to-play mobile video games. Moreover, we show that in forecasting future spending, our model outperforms existing benchmarks.
October 16 (Monday, 12:30 – 1:45 PM, Nooyi Classroom 2230)
Liu Liu (NYU)
Paper: Visual Listening In: Extracting Brand Image Portrayed on Social Media
Marketing academics and practitioners recognize the importance of monitoring consumer online conversations about brands. The focus so far has been on user generated content in the form of text. However, images are on their way to surpassing text as the medium of choice for social conversations. In these images, consumers often tag brands and depict their experience with the brands. We propose a “visual listening in” approach to measuring how brands are portrayed on social media (Instagram) by mining visual content posted by users, and show what insights brand managers can gather from social media by using this approach. Our approach consists of two stages. We first use two supervised machine learning methods, support vector machine classifiers and deep convolutional neural networks, to measure brand attributes (glamorous, rugged, healthy, fun) from images. We then apply the classifiers to brand-related images posted on social media to measure what consumers are visually communicating about brands. We study 56 brands in the apparel and beverages categories, and compare their portrayal in consumer-created images with images on the firm’s official Instagram account, as well as with consumer brand perceptions measured in a national brand survey. Although the three measures exhibit convergent validity, we find key differences between how consumers and firms portray the brands on visual social media, and how the average consumer perceives the brands.
Franklin Shaddy (U Chicago)
Paper: Seller Beware: How Bundling Affects Valuation
How does bundling affect valuation? This research proposes the asymmetry hypothesis in the valuation of bundles: Consumers demand more compensation for the loss of items from bundles, compared to the loss of the same items in isolation, yet offer lower willingness-to-pay for items added to bundles, compared to the same items purchased separately. This asymmetry persists because bundling causes consumers to perceive multiple items as a single, inseparable “gestalt” unit. Thus, consumers resist altering the “whole” of the bundle by removing or adding items. Six studies demonstrate this asymmetry across judgments of monetary value (Studies 1 and 2) and (dis)satisfaction (Study 3). Moreover, bundle composition—the ability of different items to create the impression of a “whole”—moderates the effect of bundling on valuation (Study 4), and the need to replace missing items (i.e., restoring the “whole”) mediates the effect of bundling on compensation demanded for losses (Study 5). Finally, we explore a boundary condition: The effect is attenuated for items that complete a set (Study 6).
Tong Guo (U Michigan)
Paper: The Effect of Information Disclosure on Industry Payments to Physicians
U.S. pharmaceutical companies paid $2.6 billion to physicians in the form of gifts to promote their medicines in 2015. Offering financial incentives to prescribers creates concerns about potential conflicts of interest. To curb inappropriate financial relationships between healthcare providers and firms, several states instituted disclosure laws wherein firms were required to publicly declare the payments that they made to physicians. In 2013, this law was rolled out to all 50 states as part of the Affordable Care Act. A consequence of the public disclosure is that all players in the market - patients, physicians, rival firms, and payers (insurance companies and the government) - can observe which physicians are being targeted by which firms as well as the amount of marketing expenditure directed towards each physician. We investigate the causal impact of this increased transparency on subsequent payments between firms and physicians. Combining machine learning with a quasi-experimental difference-in-differences research design, we find control “clones” for every physician-product pair in the treated states using the Causal Forest algorithm (Wager and Athey 2017). The algorithm is computationally efficient and robust to model mis-specifications, while preserving consistency and asymptotic normality. Using a 29-month national panel covering $100 million in payments between 16 anti-diabetic brands and 50,000 physicians, we find that monthly payments declined by 2% on average due to disclosure. However, there is considerable heterogeneity in the treatment effects, with 14% of the drug-physician pairs showing a significant increase in their monthly payment. Moreover, the decline in payment is smaller among drugs with larger marketing expenditure and prescription volumes, and among physicians who were paid more heavily pre-disclosure and prescribed more heavily.
Thus, while information disclosure did lead to reduction in payments on average (as intended by policy makers), the effect is limited for big drugs and popular physicians. We further explore potential mechanisms that are consistent with the data pattern. This paper takes the first step towards shedding light on the role of public disclosure policy in solving conflict-of-interest issues in the pharmaceutical industry, especially in reducing payments made by pharmaceutical firms to physicians.
Xiao Liu (Stern)
Paper: Large Scale Cross-Category Analysis of Consumer Review Content on Sales Conversion Leveraging Deep Learning
Consumers often rely on product reviews to make purchase decisions, but how consumers use review content in their decision making has remained a black box. In the past, extracting information from product reviews has been a labor intensive process that has restricted studies on this topic to single product categories or those limited to summary statistics such as volume, valence, and ratings. This paper uses deep learning natural language processing techniques to overcome the limitations of manual information extraction and shed light into the black box of how consumers use review content. With the help of a comprehensive dataset that tracks individual-level review reading, search, as well as purchase behaviors on an e-commerce portal, we extract six quality and price content dimensions from over 500,000 reviews, covering nearly 600 product categories. The scale, scope and precision of such a study would have been impractical using human coders or classical machine learning models. We achieve two objectives. First, we describe consumers’ review content reading behaviors. We find that although consumers do not read review content all the time, they do rely on review content for products that are expensive or with uncertain quality. Second, we quantify the causal impact of content information of the read reviews on sales. We use a regression discontinuity in time design and leverage the variation in the review content seen by consumers due to newly added reviews. To extract content information, we develop two deep learning models: a full deep learning model that predicts conversion directly and a partial deep learning model that identifies content dimensions. Across both models, we find that aesthetics and price content in the reviews significantly affect conversion across almost all product categories. Review content information has a higher impact on sales when the average rating is higher and the variance of ratings is lower. 
Consumers depend more on review content when the market is more competitive, immature or when brand information is not easily accessible. A counterfactual simulation suggests that re-ordering reviews based on content can have the same effect as a 1.6% price cut for boosting conversion.
November 10 (12:00 – 1:15 PM)
Monic Sun (Boston University)
Paper: A Model of Smart Technologies
We study the optimal pricing and design of smart technologies that are based on artificial intelligence (AI) and can learn consumers’ preferences over time. In particular, we allow the technology to help consumers in two ways. First, based on initial learning, it can help predict the consumers’ next consumption occasion and make recommendations accordingly. Second, it can help consumers save the operational cost of using the service under a repeated consumption occasion. We characterize our results in a two-period model with dynamic pricing. When the firm is only moderately smart, it adopts a conservative pricing strategy, and the main effect of smart technologies is to help consumers save operational cost, which could benefit both the consumer and the firm. As the firm becomes better at predicting the next purchase occasion, it starts to raise the second-period price upon learning consumer preferences, and consumers as a result are more reluctant to use the service in the first period and give the firm an opportunity to learn. Anticipating the consumers’ reactions, the firm finds it optimal to lower the first-period price. Under certain conditions, the reduction of this price can dominate the increase in the firm’s second-period price and lead to a lower total profit across the two periods.
From a product design perspective, our preliminary analysis suggests that it is not always profitable to increase the smartness of a firm’s technology even when doing so does not involve direct costs. It is also important to note that the “price” in our model can be interpreted as either a direct price that consumers have to pay to the firm or a form of advertising exposure, so that a higher “price” means that the consumer would need to tolerate a larger amount of ads which would bring the firm more revenue from third-party advertisers. Correspondingly, our model has implications not only for the pricing and design of smart technologies and their interaction with consumers, but also for platforms on which advertisers aim to target the users of smart technologies through collaborating with the owners of such technologies.
Stijn van Osselaer (Cornell)
Paper: The Power of Personal
Since the time of the industrial revolution, technology has improved the well-being of both producers, whose incomes could rise through greater productivity, and consumers, e.g., through greater availability and lower prices of consumer goods. However, this has come at the cost of alienation between consumers and producers (and between consumers and production in general). I will discuss some early results from a budding research program investigating the effects of reducing this alienation (e.g., by identifying producers to consumers and vice versa). I will argue that more recent developments in technology can lead to further alienation and objectification of consumers, but may also be used to bring producers and consumers closer together by making business more personal.
December 1 (12:00 – 1:15 PM)
Garrett Johnson (Northwestern)
Paper: The Online Display Ad Effectiveness Funnel & Carryover: Lessons from 432 Field Experiments
We analyze 432 online display ad field experiments on the Google Display Network. The experiments feature 431 advertisers from varied industries and include, on average, 4 million users each. Causal estimates from 2.2 billion observations help overcome the medium's measurement challenges to inform how and how much these ads work. We find that the campaigns increase site visits (p<10^-212) and conversions (p<10^-39) with median lifts of 17% and 8% respectively. We examine whether the in-campaign lift carries forward after the campaign or instead only causes users to take an action earlier than they otherwise would have. We find that most campaigns have a modest, positive carryover four weeks after the campaign ended with a further 6% lift in visitors and 16% lift in visits on average, relative to the in-campaign lift. We then relate the baseline attrition as consumers move down the purchase process—the marketing funnel—to the incremental effect of ads on the consumer purchase process—the 'ad effectiveness funnel.' We find that incremental site visitors are less likely to convert than baseline visitors: a 10% lift in site visitors translates into a 5-7% lift in converters.
Jing Li (Harvard University – Job Market Candidate)
Paper: Compatibility and Investment in the U.S. Electric Vehicle Market
Competing standards often proliferate in the early years of product markets, potentially leading to socially inefficient investment. This paper studies the effect of compatibility in the U.S. electric vehicle market, which has grown ten-fold in its first five years but has three incompatible standards for charging stations. I develop and estimate a structural model of consumer vehicle choice and car manufacturer investment that demonstrates the ambiguous impact of mandating compatibility standards on market outcomes and welfare. Firms under incompatible standards may make investments in charging stations that primarily steal business from rivals and do not generate social benefits sufficient to justify their costs. But compatibility may lead to underinvestment since the benefits from one firm's investments spill over to other firms. I estimate my model using U.S. data from 2011 to 2015 on vehicle registrations and charging station investment and identify demand elasticities with variation in federal and state subsidy policies. Counterfactual simulations show that mandating compatibility in charging standards would decrease duplicative investment in charging stations by car manufacturers and increase the size of the electric vehicle market.
February 13 (Monday, Gould Classroom 4220)
Katalin Springel (UC Berkeley – Job Market Candidate)
Paper: Network Externality and Subsidy Structure in Two-Sided Markets: Evidence from Electric Vehicle Incentives
In an effort to combat global warming and reduce emissions, governments across the world are implementing increasingly diverse incentives to expand the proportion of electric vehicles on the roads. Many of these policies provide financial support to lower the high upfront costs consumers face and build up the infrastructure of charging stations. There is little theoretical or empirical guidance on which governmental efforts work best to advance electric vehicle sales. I model the electric vehicle sector as a two-sided market with network externalities to show that subsidies are non-neutral and to determine which side of the market is more efficient to subsidize depending on key vehicle demand and charging station supply primitives. I use new, large-scale vehicle registry data from Norway to empirically estimate the impact that different subsidies have on electric vehicle adoption when network externalities are present. I present descriptive evidence to show that electric vehicle purchases are positively related to both consumer price and charging station subsidies. I then estimate a structural model of consumer vehicle choice and charging station entry, which incorporates flexible substitution patterns and allows me to analyze out-of-sample predictions of electric vehicle sales. In particular, the counterfactuals compare the impact of direct purchasing price subsidies to the impact of charging station subsidies. I find that between 2010 and 2015 every 100 million Norwegian kroner spent on station subsidies alone resulted in 835 additional electric vehicle purchases compared to a counterfactual in which there are no subsidies on either side of the market. The same amount spent on price subsidies led to only an additional 387 electric vehicles being sold compared to a simulated scenario where there were no EV incentives. However, the relation inverts with increased spending, as the impact of station subsidies on electric vehicle purchases tapers off faster.
February 15 (Wednesday, Attwood Classroom 4230)
Paul Ellickson (University of Rochester)
Paper: Private Labels and Retailer Profitability: Bilateral Bargaining in the Grocery Channel
We examine the role of store branded "private label" products in determining the bargaining leverage between retailers and manufacturers in the brew-at-home coffee category. Exploiting a novel setting in which the dominant, single-serve technology was protected by a patent preventing private label entry, we develop a structural model of demand and supply-side bargaining and seek to quantify the impact of private labels on bargaining outcomes. We find that, while bargaining parameters are relatively symmetric across retailers and manufacturers, the addition of private labels alters bargaining leverage by improving the disagreement payoffs of the retailers (relative to the manufacturers), thereby shifting bargaining outcomes in the retailer's favor.
Kristin Diehl (University of Southern California)
Paper: Savoring an Upcoming Experience Affects Ongoing and Remembered Consumption Enjoyment
Five studies, using diverse methodologies, distinct consumption experiences, and different manipulations, demonstrate the novel finding that savoring an upcoming consumption experience heightens enjoyment of the experience both as it unfolds in real time (ongoing enjoyment) and how it is remembered (remembered enjoyment). Our theory predicts that the process of savoring an upcoming experience creates affective memory traces that are reactivated and integrated into the actual and remembered consumption experience. Consistent with this theorizing, factors that interfere with consumers’ motivation, ability, or opportunity to form or retrieve affective memory traces of savoring an upcoming experience limit the effect of savoring on ongoing and remembered consumption enjoyment. Affective expectations, moods, imagery, and mindsets do not explain the observed findings.
Soheil Ghili (Northwestern University)
Paper: Network Formation and Bargaining in Vertical Markets: The Case of Narrow Networks in Health Insurance
“Network adequacy regulations” expand patients’ access to hospitals by mandating a lower bound on the number of hospitals that health insurers must include in their networks. Such regulations, however, compromise insurers’ bargaining position with hospitals, which may increase hospital reimbursement rates, and may consequently be passed through to consumers in the form of higher premiums. In this paper, I quantify this effect by developing a model that endogenously captures (i) how insurers form hospital networks, (ii) how they bargain with hospitals over rates by threatening to drop them from the network or to replace them with an out-of-network hospital, and (iii) how they set premiums in an imperfectly competitive market. I estimate this model using detailed data from a Massachusetts health insurance market, and I simulate the effects of a range of regulations. I find that “tighter” regulations, which force insurers to include more than 85% of the hospital systems in the market, raise the average reimbursement rates paid by some insurers by at least 28%. More moderate regulations can expand the hospital networks without causing large hikes in reimbursement rates.
Rebecca Hamilton (Georgetown University)
Paper: Learning that You Can’t Always Get What You Want: The Effect of Childhood Socioeconomic Status on Decision Making Resilience
Much of the literature on consumer decision making has focused on choice, implicitly assuming that consumers will be able to obtain what they choose. However, the options consumers choose are not always available to them, either due to limited availability of the options or to consumers’ limited resources. In this research, we examine the impact of childhood socioeconomic status on consumers’ responses to choice restriction. Building on prior work showing that perceived agency and effective coping strategies may differ by socioeconomic status, we hypothesize that consumers socialized in low socioeconomic status environments will be more likely to exhibit two adaptive strategies in response to two different forms of choice restriction. In three studies in which participants encountered unavailability of their chosen alternative, we find that participants of various ages with low (vs. high) childhood socioeconomic status display greater persistence in waiting for their initial choices yet greater willingness to shift when the alternative they have chosen is clearly unattainable. We discuss the theoretical implications of these results and how they contribute to a deeper understanding of the long-term effects of socioeconomic status on consumer behavior.
Kanishka Misra (University of California, San Diego)
Paper: Dynamic Online Pricing with Incomplete Information Using Multi-Armed Bandit Experiments
Consider the pricing decision for a manager at a large online retailer, such as Amazon.com, that sells millions of products. The pricing manager must decide on real-time prices for each of these products. Due to the large number of products, the manager must set retail prices without complete demand information. A manager can run price experiments to learn about demand and maximize long-run profits. Two aspects make online retail pricing different from traditional brick-and-mortar settings. First, due to the number of products, the manager must be able to automate pricing. Second, an online retailer can make frequent price changes. Pricing differs from other areas of online marketing where experimentation is common, such as online advertising or website design, as firms do not randomize prices to different customers at the same time.
In this paper we propose a dynamic price experimentation policy where the firm has incomplete demand information. For this general setting, we derive a pricing algorithm that balances earning profit immediately and learning for future profits. The proposed approach marries statistical machine learning and economic theory. In particular, we combine multi-armed bandit (MAB) algorithms with partial identification of consumer demand into a unique pricing policy. Our automated policy solves this problem using a scalable, distribution-free algorithm. We show that our method converges to the optimal price faster than standard machine learning MAB solutions to the problem. In a series of Monte Carlo simulations, we show that the proposed approach performs favorably compared to methods in computer science and revenue management.
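To make the earn-versus-learn tradeoff concrete, here is a minimal sketch of price experimentation as a multi-armed bandit, using standard Thompson sampling over a hypothetical discrete price grid. This is not the authors' algorithm (their policy uses partial identification of demand and is distribution-free); the price grid and demand values below are invented for illustration.

```python
import random

def thompson_pricing(prices, true_demand, periods, seed=0):
    """Post one price per period; learn purchase probabilities while earning.

    Each candidate price is a bandit arm whose purchase probability has a
    Beta(1, 1) prior. Thompson sampling posts the price that looks best
    under a random posterior draw, so uncertain prices still get tried.
    """
    rng = random.Random(seed)
    alpha = [1.0] * len(prices)  # posterior "successes" (sales) per arm
    beta = [1.0] * len(prices)   # posterior "failures" (no sale) per arm
    revenue = 0.0
    for _ in range(periods):
        # Draw a plausible purchase probability for each price and post
        # the price with the highest sampled expected revenue.
        sampled = [rng.betavariate(alpha[i], beta[i]) * prices[i]
                   for i in range(len(prices))]
        i = max(range(len(prices)), key=sampled.__getitem__)
        sold = rng.random() < true_demand[i]
        revenue += prices[i] if sold else 0.0
        if sold:
            alpha[i] += 1
        else:
            beta[i] += 1
    # Report the price the posterior currently believes is revenue-maximizing.
    est = [prices[i] * alpha[i] / (alpha[i] + beta[i]) for i in range(len(prices))]
    return revenue, max(range(len(prices)), key=est.__getitem__)

prices = [5.0, 10.0, 15.0]        # hypothetical price grid
true_demand = [0.9, 0.5, 0.2]     # purchase probabilities, unknown to the seller
rev, best = thompson_pricing(prices, true_demand, periods=5000)
```

Because prices cannot be randomized across customers simultaneously (as the abstract notes), the policy experiments over time: each period's posted price is both a revenue decision and a demand observation.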
Selman Erol (MIT, Post Doc)
Paper: Network Hazard and Bailouts
I introduce a model of contagion with endogenous network formation in which a government intervenes to stop contagion. The anticipation of government bailouts introduces a novel channel for moral hazard via its effect on network architecture. In the absence of bailouts, the network formed consists of sparsely connected small clusters that serve to minimize contagion. When bailouts are anticipated, firms are less concerned with the contagion risk their counterparties face. As a result, they are less disciplined during network formation and form networks that are more interconnected, exhibiting a core-periphery structure wherein many firms are connected to a smaller number of central firms. Interconnectedness within the periphery increases spillovers. Core firms serve as a buffer when solvent and an amplifier when insolvent. Thus, in my model, ex-post time-consistent intervention by the government increases systemic risk and volatility through its effect on network formation, with ambiguous welfare effects.
Derek Rucker (Northwestern University)
Paper: Power and Prosocial Behavior
Prior work has found a low-power state to produce a communal orientation that increases consumers’ propensity to engage in prosocial behaviors. The present research demonstrates that the presence of opportunity costs eliminates and, in fact, suppresses the communal orientation that typically accompanies low-power states. In contrast, for participants in a high-power state, where an agentic orientation is more prevalent, opportunity costs appear to have little effect. As a consequence, replicating prior research, when opportunity costs were not salient, states of low power produce more prosocial behavior than states of high power. However, significantly qualifying past findings, when opportunity costs are salient, low-power states produce less prosocial behavior than high-power states. Evidence consistent with this hypothesis is found across various prosocial behaviors (e.g., donations to charitable organizations and gifts for others). Taken together, the present work changes our understanding of the relationship between consumers’ state of power and prosocial behavior.
August 24 (Wednesday, Nooyi Classroom 2230)
Jeff Galak (Carnegie Mellon)
Paper: (The Beginnings of) Understanding Sentimental Value
Sentimental value, or the value derived from associations with significant others or events in one's life, is prevalent, important, and, yet, under-researched. I present a broad overview of a new research program designed to define this construct, begin to understand its antecedents, and demonstrate some important consequences for individuals and for firms.
Jan Van Mieghem (Northwestern)
Paper: Collaboration and Multitasking in Processing Networks: Humans versus Machines
One of the fundamental questions in operations is to determine the maximal throughput or productivity of a process. Does it matter whether humans or machines execute the various steps in the process? If so, how do we incorporate this difference in our planning and performance evaluation? We propose some answers by discussing two examples: a theoretical analysis and an empirical study.
Rosanna Smith (Yale Doctoral Student)
Paper: The Curse of the Original: When Product Enhancements Undermine Authenticity and Value
Companies often introduce enhancements to their products in order to both stay relevant and increase product appeal. However, companies with iconic status (e.g., Converse, Levi’s, Coca-Cola) are often confronted with a unique challenge: When they introduce product enhancements, even those that unequivocally improve functionality or quality, they may encounter fierce consumer backlash. In the present studies, we identify cases in which product enhancements can backfire and decrease consumer interest. We draw on the concept of authenticity and propose that product enhancements in these cases may be seen as a violation of the original vision for the product and, therefore, may be perceived as less authentic. Across four studies, we document this “curse of the original” as well as its associated psychological mechanisms and boundary conditions.
Paper: Closer to the Creator: Temporal Contagion Explains the Preference for Earlier Serial Numbers
Consumers demonstrate a robust preference for items with earlier serial numbers (e.g., No. 3/100) over otherwise identical items with later serial numbers (e.g., No. 97/100) in a limited edition set. This preference arises from the perception that items with earlier serial numbers are temporally closer to the origin (e.g., the designer or artist who produced it). In turn, beliefs in contagion (the notion that objects may acquire a special essence from their past) lead consumers to view these items as possessing more of a valued essence. Using an archival data set and five lab experiments, the authors find the preference for items with earlier serial numbers holds across multiple consumer domains including recorded music, art, and apparel. Further, this preference appears to be independent from inferences about the quality of the item, salience of the number, or beliefs about market value. Finally, when serial numbers no longer reflect beliefs about proximity to the origin, the preference for items with earlier serial numbers is attenuated. The authors conclude by demonstrating boundary conditions of this preference in the context of common marketing practices.
October 7 (Noon – 1:30 PM, Attwood Classroom 4230)
Jia Liu (Columbia University)
Paper: A Semantic Approach for Estimating Consumer Content Preferences from Online Search Queries
We develop an innovative topic model, Hierarchically Dual Latent Dirichlet Allocation (HDLDA), which not only identifies topics in search queries and webpages, but also how the topics in search queries relate to the topics in the corresponding top search results. Using the output from HDLDA, a consumer’s content preferences may be estimated on the fly based on their search queries. We validate our proposed approach across different product categories using an experiment in which we observe participants’ content preferences, the queries they formulate, and their browsing behavior. Our results suggest that our approach can help firms extract and understand the preferences revealed by consumers through their search queries, which in turn may be used to optimize the production and promotion of online content.
Joe Alba (University of Florida)
Paper: Belief in Free Will: Implications for Practice and Policy
The conviction one holds about free will serves as a foundation for the views one holds about the consumption activities of other consumers, the nature of social support systems, and the constraints that should or should not be placed on industry. Across multiple paradigms and contexts, we assess people’s beliefs about the control consumers have over consumption activities in the face of various constraints on agency. We find that beliefs regarding personal discretion are robust and resilient, consistent with our finding that free will is viewed as noncorporeal. Nonetheless, we also find that these beliefs are not monolithic but vary as a function of identifiable differences across individuals and the perceived cause of behavior, particularly with regard to physical causation. Taken together, the results support the general wisdom of libertarian paternalism as a framework for public policy but also point to current and emerging situations in which policy makers might be granted greater latitude.
Tony Ke (MIT)
Paper: Optimal Learning before Choice
A Bayesian decision maker is choosing among multiple alternatives with uncertain payoffs and an outside option with known payoff. Before deciding which one to adopt, the decision maker can sequentially purchase multiple informative signals on each of the available alternatives. To maximize the expected payoff, the decision maker solves the problem of optimal dynamic allocation of learning efforts as well as optimal stopping of the learning process. We show that the optimal learning strategy is of a consider-then-decide type. The decision maker considers an alternative for learning or adoption if and only if the expected payoff of the alternative is above a threshold. Given several alternatives in the decision maker's consideration set, we find that it is sometimes optimal for him to learn information from an alternative that does not have the highest expected payoff, holding all other characteristics of the alternatives the same. If the decision maker subsequently receives enough positive informative signals, he will switch to learning about the better alternative; otherwise he will rule out this alternative from consideration and adopt the currently most preferred alternative. We find that this strategy works because it minimizes the decision maker's learning efforts. It becomes the optimal strategy when the outside option is weak and the decision maker's beliefs about the different alternatives are in an intermediate range.
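The stopping logic can be illustrated with a toy threshold rule for a single alternative against the outside option. This sketch is not the paper's derivation (the paper characterizes the optimal policy across multiple alternatives); the normal prior, signal noise, and the `adopt_margin` threshold below are hypothetical choices made for illustration.

```python
import random

def consider_then_decide(true_payoff, outside_option, prior_mean=0.0,
                         prior_var=1.0, noise_var=1.0,
                         adopt_margin=0.5, max_signals=100, seed=1):
    """Sequentially buy noisy signals until beliefs are decisive.

    The decision maker holds a normal prior over the alternative's payoff
    and updates it with each noisy signal via the standard normal-normal
    conjugate update. Learning stops once the posterior mean clears an
    adopt threshold or drops below a rejection threshold.
    """
    rng = random.Random(seed)
    mean, var = prior_mean, prior_var
    for n in range(max_signals):
        if mean >= outside_option + adopt_margin:
            return "adopt", n            # decisively better: stop and adopt
        if mean <= outside_option - adopt_margin:
            return "outside option", n   # decisively worse: stop learning
        signal = true_payoff + rng.gauss(0.0, noise_var ** 0.5)
        # Normal-normal conjugate posterior update.
        precision = 1.0 / var + 1.0 / noise_var
        mean = (mean / var + signal / noise_var) / precision
        var = 1.0 / precision
    return ("adopt" if mean > outside_option else "outside option"), max_signals

decision, n_signals = consider_then_decide(true_payoff=2.0, outside_option=0.0)
```

The thresholds make the economy of learning visible: signals are only bought while beliefs sit in the intermediate range where neither adopting nor walking away is clearly best, which is why a weak outside option and intermediate beliefs are exactly where learning effort is spent.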
Christopher Hsee (University of Chicago)
November 18 (Noon – 1:30 PM)
Scott Schriver (Columbia University)
Paper: Optimizing Content and Pricing Strategies for Digital Video Games
The video game industry has experienced a wave of disruption as consumers rapidly shift to acquiring and consuming content through digital channels. Incumbent game publishers have struggled to adapt their content and pricing strategies to shifting consumption patterns and increased competition from low cost independent suppliers. Recently, game publishers have pursued new business models that feature downloadable content (DLC) services offered in conjunction with or as a replacement for traditional physical media. While service-based models can potentially extract additional surplus from the market by allowing for more customized content bundles and pricing than with physically distributed media, exploiting these opportunities poses a challenge to firms who must attempt to optimize their offerings over a formidably complex decision space.
In this paper, we develop a structural framework to facilitate the recovery of consumer preferences for game content and the optimization of firm content/price strategies. Our approach is to leverage rich covariation in observed content consumption and DLC service subscriptions to infer consumer content valuations and price sensitivities. We devise a joint model of video game activity and demand for downloadable content, where consumers sequentially make (discrete) DLC subscription choices followed by (continuous) choices of how much to play. Our model accounts for forward-looking consumer expectations about declining content prices and attendant concerns for dynamic selection bias in our demand estimates. We document evidence of heterogeneous preferences for content and significant effects of DLC availability on game usage. Our counterfactual experiments suggest that compressing the DLC release cycle and moving to a recurring fee structure are both viable ways to increase revenues.
Drazen Prelec (MIT)
Paper: Aggregating information: The meta-prediction approach
The twin problems of eliciting and aggregating information arise at many levels, including the social (wisdom-of-the-crowd) and the neural (ensemble voting). The first (elicitation) problem involves crafting incentives that ensure that incoming signals are honestly reported; the second (aggregation) problem involves selecting the best value from a distribution of reported values. Current aggregation methods take as input individuals' beliefs about the right answer, expressed with votes or, more precisely, with probabilities. However, one can show that any belief-based algorithm can be defeated by even simple problems. An alternative approach, which is guaranteed to work in theory, assumes that individuals with different opinions share an implicit possible-worlds model whose parameters are unknown, but which can be estimated from their predictions of the opinion distribution. Using this additional information, Bayes' rule selects individuals who would be least surprised by known evidence. Their answer defines the best estimate for the group. I will review some old and some new evidence bearing on this approach, and discuss extensions to market research.
Sunita Sah (Cornell University)
Paper: Conflict of Interest Disclosure and Appropriate Restraint: The Power of Professional Norms
Conflicts of interest present an incentive for professionals to give biased advice. Disclosing, or informing consumers about, the conflict is a popular solution for managing such conflicts. Prior research, however, has found that advisors who disclose their conflicts give more biased advice. Across three experiments, using monetary incentives to create real conflicts of interest, I show that disclosure can cause advisors to significantly decrease or increase bias in advice based on the context in which the advice is provided. Drawing from norm focus theory and the logic of appropriateness literature, this investigation examines how professional norms cause advisors to either succumb to bias (by believing that disclosure absolves them of their responsibility—caveat emptor) or restrain from bias (by reminding advisors of their responsibility towards advisees). Professional norms significantly alter how disclosure affects advisors—increasing bias in advice in business settings but decreasing bias in medical settings. These findings not only disconfirm previous assumptions regarding conflict of interest disclosures but also highlight the importance of context when understanding the potential and pitfalls of disclosure.
Przemek Jeziorski (UC Berkeley)
Paper: Adverse Selection and Moral Hazard in a Dynamic Model of Auto Insurance
We measure risk-related private information and investigate its importance in a setting where individuals are able to modify risk ex-ante through costly effort. Our analysis is based on a model of endogenous risk production and contract choice. It exploits data from multiple years of contract choices and claims by customers of a major Portuguese auto insurance company. We additionally use our framework to investigate the relative effectiveness of dynamic versus static contract features in incentivizing effort and inducing sorting on private risks, as well as to assess the welfare costs of mandatory liability insurance.