Yale Marketing Seminar

The Yale Marketing Seminar Series presents recent research papers in marketing. The goal is to bring researchers from other universities to the Yale campus to stimulate the exchange of ideas and deepen understanding of marketing trends. These seminars are geared toward faculty and PhD students interested in marketing.

Friday Series, 11:35 a.m.–12:50 p.m., Edward P. Evans Hall, 165 Whitney Avenue, New Haven, CT, Atwood Classroom 4230.
This seminar series is organized by Assistant Professor of Marketing Soheil Ghili. Lunch is provided. 

Fall 2018

September 7

Minkyung Kim (Yale School of Management)
Paper: A Structural Model of a Multi-tasking Salesforce with Private Information
We develop the first structural model of a multi-tasking salesforce. The model incorporates two novel features relative to the extant structural literature on salesforce compensation: (i) multi-tasking given a multi-dimensional incentive plan; and (ii) the salesperson's private information about customers. While the model is motivated by our empirical application, which uses data from a microfinance bank where loan officers are jointly responsible and incentivized for both loan acquisition and repayment, it is more generally adaptable to salesforce management in CRM settings focused on customer acquisition and retention. The compensation plan, which is multiplicative in acquisition and retention performance, induces contemporaneous complementarities that balance effort on both tasks, and intertemporal effort dynamics that align the salesperson's incentives with the firm's even in the presence of the salesperson's private information about customers. Our estimation strategy extends two-step estimation methods used for unidimensional compensation plans to the multi-tasking model with private information and intertemporal incentives. We combine flexible machine learning (random forest) for the identification of private information with the first-stage multi-tasking policy function estimation. Estimates reveal two latent segments of salespeople: a "hunter" segment that is more efficient at loan acquisition and a "farmer" segment that is more efficient at loan collection. Counterfactual analyses show (i) that joint responsibility for acquisition and collection leads to better outcomes for the firm than specialized responsibilities, even when salespeople are matched with their more efficient tasks; and (ii) that aggregating performance on multiple tasks using an additive function leads to substantial adverse specialization by "hunters," who specialize in acquisition at the expense of the firm, compared to the multiplicative form used by the firm.

September 14

Elizabeth Friedman (Yale School of Management)
Paper: Apples, Oranges and Erasers: The Effect of Considering Similar versus Dissimilar Alternatives on Purchase Decisions
When deciding whether to buy an item, consumers sometimes think about other ways they could spend their money. Past research has explored how increasing the salience of outside options (i.e., alternatives not immediately available in the choice set) influences purchase decisions, but whether the type of alternative considered systematically affects buying behavior remains an open question. Ten studies find that relative to considering alternatives that are similar to the target, considering dissimilar alternatives leads to a greater decrease in purchase intent for the target. When consumers consider a dissimilar alternative, a competing non-focal goal is activated, which decreases the perceived importance of the focal goal served by the target option. Consistent with this proposed mechanism, the relative importance of the focal goal versus the non-focal goal mediates the effect of alternative type on purchase intent, and the effect attenuates when the focal goal is shielded from activation of competing goals.

September 25 (Tuesday, 11:35 AM – 12:50 PM, Atwood Classroom 4230)

Kate Barasz (Harvard Business School)
Paper: I Know Why You Voted for Trump: (Over)inferring Motives Based on Choice
People often speculate about why others make the choices they do. This paper investigates how such inferences are formed as a function of what is chosen. Specifically, when observers encounter someone else’s choice (e.g., of political candidate), they use the chosen option’s attribute values (e.g., a candidate’s specific stance on a policy issue) to infer the importance of that attribute (e.g., the policy issue) to the decision-maker. Consequently, when a chosen option has an attribute whose value is extreme (e.g., an extreme policy stance), observers infer—sometimes incorrectly—that this attribute disproportionately motivated the decision-maker’s choice. Seven studies demonstrate how observers use an attribute’s value to infer its weight—the value-weight heuristic—and identify the role of perceived diagnosticity: more extreme attribute values give observers the subjective sense that they know more about a decision-maker’s preferences, and in turn, increase the attribute’s perceived importance. The paper explores how this heuristic can produce erroneous inferences and influence broader beliefs about decision-makers.

October 2 (Tuesday, 11:35 AM – 12:50 PM, Lin Tech., Classroom 4400)

Artem Timoshenko (MIT)
Paper: Identifying Customer Needs from User-generated Content
Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGC-based customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable a source of customer needs for product development as conventional methods, and likely more valuable, and (2) machine-learning methods improve the efficiency of identifying customer needs from UGC (unique customer needs per unit of professional-services cost).

October 4

Rachel Gershon (Washington University in St. Louis)
Paper: The Reputational Benefits and Material Burdens of Prosocial Referral Incentives
Selfish incentives typically outperform prosocial incentives. However, this research identifies a context where prosocial incentives are more effective: customer referral programs. Companies frequently offer “selfish” incentives to customers who refer, incentivizing those customers directly for recruiting friends. However, companies can alternatively offer “prosocial” incentives that reward the referred friend instead. In multiple field and incentive-compatible experiments, this research finds that prosocial referrals, relative to selfish referrals, result in more new customers. This pattern occurs for two reasons. First, at the referral stage, customers expect to receive reputational benefits when making prosocial referrals within their social network, thereby boosting performance of prosocial referrals. Second, at the uptake stage, the burden of signing up is high, and therefore referral recipients prefer to receive an incentive themselves. Due to the combination of reputational benefits at the referral stage and action burdens at the uptake stage, prosocial referrals yield more new customers overall. The high frequency of selfish referral offers in the marketplace suggests these forces play out in ways that are unanticipated by marketers who design incentive schemes.

October 8

Melanie Brucks (Stanford)
Paper: The Creativity Paradox: Soliciting Creative Ideas Undermines Ideation
When developing product ideas and original marketing content, firms and marketers often organize ideation activities to harvest a rich set of new ideas. We explore a popular paradigm for guiding these activities—explicitly requesting creative ideas—in the context of consumer idea generation contests. We demonstrate that this common practice can paradoxically undermine ideation, decreasing the total number of novel ideas that contestants generate (i.e., ideas rated as surpassing the threshold of average novelty). A single-paper meta-analysis across six incentive-compatible ideation contests on different products (toy, office supply, toiletry, and mobile app) involving close to 2,000 contestants estimated that soliciting creative ideas resulted in 1.49 fewer novel ideas per contestant, which amounted to a 20% decrease in productivity and a loss of 500 unique novel ideas in total. This productivity loss occurs because soliciting creative ideas prompts people to self-impose a high standard, which leads to a unique cognitive process that restrains (instead of expands) their thinking. This research also offers important solutions for marketers to ensure the productivity of ideation and fuel innovation.

October 15 (Monday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)

Andrew Rhodes (Toulouse School of Economics)
Paper: Repeat Consumer Search
Consumers often buy products repeatedly over time, but need to search for information on firms' prices and product features. We characterize the optimal search rule in such an environment, and show that it may be optimal to "stagger" search over time. Specifically, consumers may search intensively upon entering the market, buy several different products early on, settle on one for a period of time, but then after a while restart their search process. We also solve for equilibrium prices, and relate both their level and dispersion to the market search cost. We consider various extensions, including the case where firms may offer discounts to past customers.

October 19

Kalinda Ukanwa (University of Maryland)
Paper: Discrimination in Service
The goal of this research is threefold: 1) to uncover the mechanism by which service discrimination can emerge from seemingly rational service policy; 2) to investigate how service discrimination interacts with competition and consumer word-of-mouth to affect profits; 3) to help firms avoid losing profits due to discrimination. By employing a mixed-methods approach, the authors find that variation in consumer quality (i.e., their profitability to the firm) and measurement error in detecting consumer quality moderate the magnitude of service discrimination. Large error in measuring consumer quality exacerbates service discrimination, while large intra-group variation in consumer quality attenuates discrimination. This research shows that discrimination can be profitable in the short-run but can backfire in the long-run.

Agent-based modeling demonstrates that non-discriminatory service providers, on average, earn higher long-term profits than discriminatory service providers, in spite of a seeming short-term profit advantage for discrimination. The authors provide managerial recommendations on reducing service discrimination’s profit-damaging effects. This research emphasizes the long-term benefits of switching to a service policy that does not use group identity information. However, for firms that must persist in using group identity information, this research has additional recommendations, which include increasing investment in methods of measurement error reduction and increasing exposure to consumers of different populations. By doing so, a firm could reduce service discrimination while improving its long-term profits and societal well-being.

October 25 (Thursday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)

Emma Levine (Chicago Booth)
Paper: Conflicts between honesty and benevolence
Although honesty is typically conceptualized as a virtue, it often conflicts with other equally important moral values, such as benevolence. In this talk, I explore how communicators and targets navigate conflicts between honesty and benevolence. In the first half of the talk, I demonstrate that communicators and targets make egocentric moral judgments of deception. Specifically, communicators focus more on the costs of deception to themselves (for example, the guilt they feel when they break a moral rule), whereas targets focus more on whether deception helps or harms them. As a result, communicators and targets make asymmetric judgments of prosocial lies of commission and omission. Communicators often believe that omitting information is more ethical than telling a prosocial lie, whereas targets often believe the opposite. Consequently, communicators miss opportunities to provide targets with comfort and care across a wide range of domains, including health and employment discussions. In the second half of the talk, I present a series of economic game experiments demonstrating that communicators’ focus on following a rule of honesty leads them to avoid information about the social consequences of their communication. Taken together, this research provides new insight into the causes and consequences of lie aversion, and the mechanisms that allow people to harm others while feeling moral.

October 29 (Monday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)

Shunyuan Zhang (Carnegie Mellon)
Paper: Can Lower-quality Images Lead to Greater Demand on AirBnB?
We investigate how AirBnB hosts decide what quality of property images to post. Prior literature has shown that the images play the role of advertisements and that their quality has a strong impact on a property's present demand: compared to lower-quality amateur images, high-quality professional images can increase present demand by 14.3% on matched samples (Zhang et al. 2018). In reality, however, a large number (approximately two-thirds) of the images on AirBnB are amateur (low-quality). One possible explanation is that high-quality images are costly for the hosts, most of whom are amateur photographers. However, this does not completely explain the result: in 2011, AirBnB began offering the highest-quality professional images to all hosts for free, sending its professional photographers to the property to shoot, process, and post the photos for the hosts. To AirBnB's surprise, only 30% of hosts used the AirBnB professional photography program. We posit that the host's decision on what quality of images to post depends not only on the advertising impact of images on present demand and on the cost of images, but also on the impact of images on future demand. Some hosts would thus be hesitant to post high-quality images because such images can create unrealistically high expectations for guests, especially if the actual property is not as good as the images portray and the hosts are unable to provide a level of service that matches those expectations. Guest satisfaction would then decrease; dissatisfied guests would leave a bad review or no review at all, and since the number and quality of reviews is one of the key drivers of new bookings, this would adversely affect future demand.

In this paper, we attempt to disentangle the aforementioned factors that influence the host's decision on the type of photographs to post, and we explore policies that AirBnB can employ to increase hosts' adoption of professional photos and thereby improve the profitability of both the hosts and AirBnB. To do so, we build a structural model of demand and supply, where the demand side models guests' decisions about which property to stay in, and the supply side models hosts' decisions about what quality of images to post and what level of service to provide in each period. We estimate our model on a unique one-year panel dataset consisting of a random sample of 958 AirBnB properties in Manhattan (New York City), in which we observe hosts' monthly choices of the quality of images posted and the service they provided. Our key findings are as follows. First, guests who pay more attention to images tend to care more about reviews, revealing an interesting trade-off for the hosts. Second, hosts incur considerable costs for posting above-average-quality images. Third, hosts are heterogeneous in their ability to invest in service effort. In counterfactual analyses, we simulate AirBnB properties assuming they all start in the entry state with low-level images. We then compare the impact on average property demand of the current policy (offering free high-level images to hosts) and of a proposed policy (offering free medium-level images to hosts). We show that the proposed policy, though dominated by the current policy in the short run (for the first four periods), outperforms the current policy in the long run (7.6% vs. 12.4%). The interpretation is that medium-level images, despite offering consumers a smaller expected utility than high-level images, have a greater effect on property demand in the long run: with a lower risk of creating a dissatisfaction gap, they help hosts obtain new reviews.
Moreover, individual hosts who might end up using amateur (low-level) images under the current policy to avoid the dissatisfaction gap instead use the free medium-level images and earn more revenue under the proposed policy. In a second counterfactual, we explore an alternative policy in which AirBnB offers a menu of image-quality choices for free. The menu includes both high- and medium-level property images (image examples are provided) and allows hosts to self-select the program they want. Compared with the proposed policy in the first simulation, we find that this policy performs best in the long run, improving average property demand by 16.2%.

November 1 (Thursday, 11:35 AM – 12:50 PM, Isaacson Classroom 4210)

Kaitlin Woolley (Cornell)
Paper: The Dissimilarity Bias: The Effect of Dissimilarity on Goal Progress Perceptions and Motivation
Perceived goal progress directly influences consumer goal pursuit, yet determinants of progress perceptions are largely unexamined. Six studies identified a novel factor that biases consumers’ perceptions of goal progress: categorization. When pursuing a goal, consumers categorize completed and remaining actions that are dissimilar into separate units, anchoring their estimates of progress on the proportion of units completed versus remaining, rather than on the absolute amount of progress made. On the other hand, consumers categorize completed and remaining actions that are similar together into the same unit, anchoring their estimates of goal progress closer to the objective amount of progress made versus remaining. As a result, dissimilarity (vs. similarity) between completed and remaining actions leads consumers to infer greater progress when they are farther from their goal, and to infer less progress when they are closer to their goal. This research documents this bias for behaviors ranging from exercising to studying and demonstrates implications for motivation and goal attainment in incentive-compatible contexts.

November 2 (Friday, 11:35 AM – 12:50 PM, Jones Classroom 2220)

Ming Yin
Paper: TBA
Abstract Forthcoming

November 5 (Monday, 11:35 AM – 12:50 PM, Gould Classroom 4220)

Abigail Jacobs
Paper: TBA
Abstract Forthcoming

Spring 2018

February 2

Anat Keinan (Harvard)
Paper: The Impostor Syndrome from Luxury Consumption
The present research proposes that luxury consumption can be a double-edged sword: while luxury consumption yields status benefits, it can also make consumers feel inauthentic, producing the paradox of luxury consumption. Feelings of inauthenticity from luxury consumption emerge due to the perceived dominance of extrinsic motivation over intrinsic motivation for consuming luxury. This phenomenon is more pronounced among consumers with low levels of psychological entitlement. It is moderated by conspicuousness of the product and consumption, and by the perceived malleability of cultural capital.

February 9

Sydney Scott (Washington University in St. Louis)
Paper: Consumers Prefer “Natural” More for Preventatives than for Curatives
We demonstrate that natural product alternatives are more strongly preferred when used to prevent a problem than when used to cure a problem. This organizing principle explains variation in the preference for natural across distinct product categories (e.g., food vs. medicine), within product categories (e.g., between different types of medicines), and for the same product depending on how it is used (to prevent or to cure ailments). Contrary to prior work which finds that natural products are judged as superior on all dimensions, the prevent-cure difference in the natural preference occurs because natural products are viewed as safer but less potent than synthetic alternatives. Importantly, consumers care relatively more about safety than potency when preventing as compared to when curing, which leads to a stronger natural preference when preventing. Consistent with this explanation, when natural alternatives are described as more risky and more potent, reversing the standard inferences about naturalness, then natural alternatives become more preferred for curing than for preventing. This research sheds light on when the marketing of “natural” is most appealing to consumers.

February 13 (Tuesday, 11:30 AM – 12:45 PM, Gould Classroom 4220)

Nikhil Naik (Harvard)
Paper: Understanding Predictors of Physical Urban Change Using 1.6 Million Street View Images
For more than a century, economists and sociologists have advanced theories connecting the dynamics of cities’ physical appearances to their location, demographics, and built infrastructure. However, research has been limited by a lack of high-throughput methods to quantify urban appearance and change. In this paper, I introduce a machine learning (ML) algorithm that evaluates urban appearance and change from time-series Google Street View images. The ML algorithm—trained with the aggregate perception of thousands of online users from a crowdsourced game—computes Streetchange, a metric for the change in appearance of buildings and streets between two images of the same location captured several years apart. The algorithm is robust to changes in weather, vegetation, and traffic across images. Validation studies show strong agreement between Streetchange and urban change estimates obtained from human experts and administrative data. Using Streetchange, I calculate physical change between 2007 and 2014 for 1.65 million street blocks from five major American cities. My collaborators and I use aggregate census tract-level Streetchanges between 2007 and 2014 and census data from 2000 to determine the socioeconomic predictors of physical urban growth. We find that a dense population of highly educated individuals—rather than housing costs, income, or ethnic makeup—is the best predictor of future neighborhood growth, an observation that is compatible with economic theories of human capital agglomeration. In addition, neighborhoods with better initial appearances and physical proximity to the central business district experience more substantial upgrading. Together, our results underscore the importance of human capital and education in defining urban outcomes and illustrate the value of using visual data and machine learning in combination with econometric methods to answer economic questions.

February 15 (Thursday, 10:30 AM – 11:45 AM, Gould Classroom 4220)

Jason Anastasopoulos (Princeton)
Paper: Deep Learning and Deep Trust: What Machine Learning Can Teach Us about Trust and Persuasion in the Digital Age
Whether purchasing a product, voting for the president of the United States, or embarking on an entrepreneurial venture, establishing trust with the individuals and firms that we engage with is a precondition for providing them with our time and our resources.  Accordingly, understanding how firms establish trust among consumers and how politicians establish trust among their constituents can teach us about one of the foundational aspects of persuasion and influence. Deep learning methods, through an ability to link abstract concepts, such as trust, to features of images and texts, provide us with an unprecedented opportunity to understand how political figures and firms establish trust among the public. In this talk, I will present a paper entitled “Political image analysis with deep neural networks” which employs deep learning methods and a database of 296,460 Facebook photos to gain insights about how politicians in the U.S. Congress establish trust among their constituents through the symbols, objects and people that they present in the images that they post on social media.

February 16

Amin Sayedi (Univ. of Washington)
Paper: Learning Click-through Rates in Search Advertising, joint work with J. Choi (Columbia)
Prior literature on search advertising primarily assumes that search engines know advertisers' click-through rates, the probability that a consumer clicks on an advertiser's ad. This information, however, is not available when a new advertiser starts search advertising for the first time. In particular, a new advertiser's click-through rate can be learned only if the advertiser's ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since search engines use advertisers' expected click-through rates when calculating payments and allocations, the lack of information about a new advertiser can affect new and existing advertisers' bidding strategies. In this paper, we use a game theory model to analyze advertisers' strategies, their payoffs, and the search engine's revenue when a new advertiser joins the market. Our results indicate that a new advertiser should always bid higher (sometimes above its valuation) when it starts search advertising. However, the strategy of an existing advertiser, i.e., an incumbent, depends on its valuation and click-through rate. A strong incumbent increases its bid to prevent the search engine from learning the new advertiser's click-through rate, whereas a weak incumbent decreases its bid to facilitate the learning process. Interestingly, we find that, under certain conditions, the search engine benefits from not knowing the new advertiser's click-through rate because its ignorance could induce the advertisers to bid more aggressively. Nonetheless, the search engine's revenue sometimes decreases because of this lack of information, particularly, when the incumbent is sufficiently strong. We show that the search engine can mitigate this loss, and improve its total profit, by offering free advertising credit to new advertisers.

February 23 (Classroom 4430)

Ovul Sezer (UNC)
Paper: Misguided Self-presentation: When and why humblebragging and backhanded compliments backfire
The ability to present oneself effectively to others is one of the most essential skills in social life. Consumers constantly make decisions to prompt favorable impressions in others, as countless critical rewards depend on others’ impressions. In this research, I identify unexamined yet ubiquitous self-presentation strategies—humblebragging and backhanded compliments—that people use in an effort to manage the delicate balancing act of self-presentation. Using datasets from social media and diary studies, I document the ubiquity of these strategies in real life across several domains. In laboratory and field experiments, I simultaneously examine the underlying motives for these self-presentation strategies and others’ perceptions of these strategies—allowing for an analysis of their efficacy—as assessed by the opinions targets hold of the would-be self-presenter. I provide evidence from both lab and field to show that humblebragging and backhanded compliments backfire, because they are seen as insincere.

March 6 (Tuesday, 11:35 AM – 12:50 PM, Gould Classroom 4220)

Allison Chaney (Princeton Post-doc)
Paper: The Social Side of Recommendation Systems: How Groups Shape Our Decisions
Recommendation systems occupy an expanding role in everyday decision making, from choice of movies and household goods to consequential medical and legal decisions.  This talk will explore a sequence of work related to recommending decisions for people to take.  First we will examine the results of a large-scale study of television viewing habits, focusing on how individuals adapt their preferences when consuming content with others. Next, we will leverage our insights about the social behavior of individuals to incorporate social network information into a model for providing personalized recommendations.  Finally, we will consider the impacts of recommendation algorithms like these on human choices and the homogeneity of group behavior.  

April 13 (Classroom 4400, Lin Tech.)

Harikesh Nair (Stanford)
Paper: Modern Data-Driven eCommerce: JD.com in China
I provide an overview and discussion of data-driven eCommerce businesses, with a focus on China. I will also discuss applications of data science at JD.com, China's second-largest eCommerce firm.

April 27

Brad Shapiro (Univ. of Chicago)
Paper: Promoting Wellness or Waste? Evidence from Antidepressant Advertising
Direct-to-Consumer Advertising (DTCA) of prescription drugs is controversial and has ambiguous potential welfare effects. In this paper, I estimate the costs and benefits of DTCA in the market for antidepressant drugs. In particular, using individual health insurance claims and human resources data, I estimate the effects of DTCA on outcomes relevant to societal costs: new prescriptions, prices, and adherence. Additionally, I estimate the effect of DTCA on labor supply, the economic outcome most associated with depression. First, the category-expansive effects of DTCA found in past literature are replicated, with DTCA particularly causing new prescriptions of antidepressants. Additionally, I find no advertising effect on the prices or co-pays of the drugs prescribed or on the generic penetration rate. Next, lagged advertising is associated with higher first-refill rates, indicating that advertising-marginal patients are not more likely to end treatment prematurely due to initial adverse effects. Despite first-refill rates being higher for those who are more likely advertising-marginal, concurrent advertising drives slightly lower refill rates overall, particularly among individuals who stand to gain least from treatment. Finally, advertising significantly decreases missed days of work, with the effect concentrated among workers who tend to have more absences. Back-of-the-envelope calculations suggest that the wage benefits of the advertising-marginal work days are more than an order of magnitude larger than the total cost of the advertising-marginal prescriptions.

Fall 2017

September 8 (11:15 AM – 12:30 PM)

Shai Davidai (New School)
Paper: The second pugilist’s plight: Why people believe they are above average, but are not especially happy about it
People’s tendency to rate themselves as above average is often taken as evidence of unrealistic optimism and undue self-regard. Yet everyday experience is punctuated by feelings of inadequacy and insecurity. How can these two experiences be reconciled? We suggest that people hold two complementary beliefs: that they are above average and, at the same time, that the average is not a relevant or meaningful standard of comparison. In seven studies (N = 1,566), we find that people look to others who are above average as a benchmark for social comparison (Studies 1A and 1B), that “the average person” is not even considered a relevant low standard of comparison, and that despite perceiving themselves as better than average, people don’t see themselves as better than their self-chosen standards of comparison (Study 2). We show that this is due to the increased cognitive accessibility of above-average standards of comparison (Studies 3A and 3B), and that people hold themselves to such high standards even when recalling instances of personal inadequacy (Study 4A) or when they expect their own performance to be lower than average (Study 4B). We discuss the implications for self-enhancement research and the importance of examining who people compare themselves to, in addition to how people believe they compare to others.

September 12 (Tuesday, 11:45 AM – 1:00 PM, Gould Classroom 4220)

Liang Guo (CUHK)
Paper: Testing the Role of Contextual Deliberation in the Compromise Effect
               One of the most well-known examples of preference construction is the compromise effect. This puzzling anomaly can be rationalized by contextual deliberation (i.e., information-acquisition activities that can partially resolve utility uncertainty before choice). In this research we investigate the theoretical robustness and empirical validity of this explanation. We conduct five experiments with more than one thousand participants, and show that the compromise effect can be positively mediated by response time, cannot be mitigated by context information, but can be moderated by manipulating the level of deliberation (i.e., time pressure, preference articulation, task order). These findings are consistent with the predictions of the theory of contextual deliberation.

September 15

Eleanor Putnam-Farr (SOM Post-doc)
Paper: Balancing Promise with Performance: Field Testing Optimal Strategies for Maximizing Ongoing Participation
               In order to be successful, marketers need to both attract and retain customers, but firms often focus disproportionately on maximizing acquisition (e.g. enrollment) because the outcome is relatively easy to measure and understand. However, techniques that maximize enrollment may have differing effects on performance and ongoing participation. Marketers who want to maximize ongoing participation need to balance the value that comes from maximizing initial interest (often by generating high expectations) against the potential cost that comes from making overly high claims that result in dissatisfaction. This is made even more challenging in domains where customer participation determines the ultimate “product” performance and thus maximizing initial interest may actually increase motivation and subsequent performance. Using high headline numbers (i.e. explicit and prominent quantification of maximum potential benefits) may attract participants and set higher targets that result in higher performance, but it might also lead to lower satisfaction and program abandonment. In a large-scale field experiment (N=8,918), and multiple follow-up experiments, we explore this tension and show that 1) consumers do use headline numbers as anchors for program expectations, but 2) may separate program expectations from personal performance targets. This results in 3) lower ongoing participation from consumers who were shown a high headline number rather than piece rate reward language during recruitment, but 4) these results can be moderated by focusing people on realistic performance or more achievable results.

September 19 (Tuesday, 11:30 AM – 12:30 PM, Atwood Classroom 4230)

Jungju Yu (SOM Doctoral Candidate)
Paper: A Model of Brand Architecture Choice: A House of Brands vs. A Branded House
               Some firms use different brands for distinct product categories, while others use the same brand. In this paper, we study for which product markets a firm should use the same brand. To answer this question, we propose a framework of market-relatedness to conceptualize the relationship between markets through supply-side relatedness (similar production technology) and demand-side relatedness (similar target customers).  We apply this framework to a model of reputation with features of both adverse selection and moral hazard to analyze an interaction between information spillover across markets and investment incentives.
     We show that umbrella branding is optimal if product markets are closely related on either supply-side or demand-side. However, surprisingly, we find that independent branding is optimal if product markets are closely related on both supply-side and demand-side. We also provide implications for customer relationship management and innovation for quality in each product market.

September 22 (Gould Classroom 4220)

Ellen Evers (UC Berkeley)
Paper: Elicitation Based Preference Reversals in Willingness to Pay and Choice
               A key assumption in the empirical application of rational decision theory is that of procedure invariance; preferences are independent of how they are elicited. This means that measuring preferences among a bundle of goods should yield the same ordinal rankings, regardless of whether preferences are measured using a valuation strategy (e.g., willingness-to-pay) or choices. In 13 studies we demonstrate violations of procedure invariance, such that consumers more strongly prefer affective over functional goods in choices as compared to willingness-to-pay. These preference reversals result from a combination of two processes. The first is that decision-makers are more likely to rely on affect when making a choice.  The second is that they place a relatively greater weight on both the long-term value of a product and its necessity when indicating their willingness to pay, while placing a relatively greater weight on instant gratification and wants when making choices. Contrary to the necessary empirical assumption that preferences are consistent across measurements, we find that participants treat different measurement techniques as entirely different situations. 

September 28 (Thursday, Gould Classroom 4220)

Uri Gneezy (Columbia University)
Paper: Using Violations of the Fungibility Principle to Increase Incentive Effectiveness
               A small change to an incentive structure can have a dramatic impact—positive or negative—on outcomes. We suggest a way to increase the perceived value of the incentives without increasing the budget used. This increase relates to the observation that separate mental accounts violate the economic principle of fungibility by which money in one mental account offers a perfect substitute for money in another account. We propose that targeting incentives to a specific, highly desirable mental account could increase the effectiveness of the intervention.
               We test this targeting hypothesis in a behavioral intervention aimed at increasing walking for taxi drivers in Singapore. Participants received a reward for each of three months in which they met a physical activity goal. In one treatment, the reward was cash. In the targeted treatment, we used the same amount of money, but associated the reward with a specific aversive cost for these drivers—paying for the daily rent of the taxi. We find that cash motivated people to be more active, but, as predicted, associating the reward with the aversive expense was more effective. Importantly, this difference remained after the incentives were stopped, resulting in a stronger habit for the participants in the targeted incentives treatment.

October 12 (Thursday, Isaacson Classroom 4210)

Ryan Dew (Columbia)
Paper: Bayesian Nonparametric Customer Base Analysis with Model-based Visualizations
               Marketing managers are responsible for understanding and predicting customer purchasing activity, a task that is complicated by a lack of knowledge of all of the calendar time events that influence purchase timing. Yet, isolating calendar time variability from the natural ebb and flow of purchasing is important, both for accurately assessing the influence of calendar time shocks to the spending process, and for uncovering the customer-level patterns of purchasing that robustly predict future spending. A comprehensive understanding of purchasing dynamics therefore requires a model that flexibly integrates both known and unknown calendar time determinants of purchasing with individual-level predictors such as interpurchase time, customer lifetime, and number of past purchases. In this paper, we develop a Bayesian nonparametric framework based on Gaussian process priors, which integrates these two sets of predictors by modeling both through latent functions that jointly determine purchase propensity. The estimates of these latent functions yield a visual representation of purchasing dynamics, which we call the model-based dashboard, that provides a nuanced decomposition of spending patterns. We show the utility of this framework through an application to purchasing in free-to-play mobile video games. Moreover, we show that in forecasting future spending, our model outperforms existing benchmarks.

October 16 (Monday, 12:30 – 1:45 PM, Nooyi Classroom 2230)

Liu Liu (NYU)
Paper: Visual Listening In: Extracting Brand Image Portrayed on Social Media
               Marketing academics and practitioners recognize the importance of monitoring consumer online conversations about brands. The focus so far has been on user-generated content in the form of text. However, images are on their way to surpassing text as the medium of choice for social conversations. In these images, consumers often tag brands and depict their experience with the brands. We propose a “visual listening in” approach to measuring how brands are portrayed on social media (Instagram) by mining visual content posted by users, and show what insights brand managers can gather from social media by using this approach. Our approach consists of two stages. We first use two supervised machine learning methods, support vector machine classifiers and deep convolutional neural networks, to measure brand attributes (glamorous, rugged, healthy, fun) from images. We then apply the classifiers to brand-related images posted on social media to measure what consumers are visually communicating about brands. We study 56 brands in the apparel and beverages categories, and compare their portrayal in consumer-created images with images on the firms’ official Instagram accounts, as well as with consumer brand perceptions measured in a national brand survey. Although the three measures exhibit convergent validity, we find key differences between how consumers and firms portray the brands on visual social media, and how the average consumer perceives the brands.

October 20

Franklin Shaddy (U Chicago)
Paper: Seller Beware: How Bundling Affects Valuation
               How does bundling affect valuation? This research proposes the asymmetry hypothesis in the valuation of bundles: Consumers demand more compensation for the loss of items from bundles, compared to the loss of the same items in isolation, yet offer lower willingness-to-pay for items added to bundles, compared to the same items purchased separately. This asymmetry persists because bundling causes consumers to perceive multiple items as a single, inseparable “gestalt” unit. Thus, consumers resist altering the “whole” of the bundle by removing or adding items. Six studies demonstrate this asymmetry across judgments of monetary value (Studies 1 and 2) and (dis)satisfaction (Study 3). Moreover, bundle composition—the ability of different items to create the impression of a “whole”—moderates the effect of bundling on valuation (Study 4), and the need to replace missing items (i.e., restoring the “whole”) mediates the effect of bundling on compensation demanded for losses (Study 5). Finally, we explore a boundary condition: The effect is attenuated for items that complete a set (Study 6).

October 27

Tong Guo (U Michigan)
Paper: The Effect of Information Disclosure on Industry Payments to Physicians
               U.S. pharmaceutical companies paid $2.6 billion to physicians in the form of gifts to promote their medicine in 2015. Offering financial incentives to prescribers creates concerns about potential conflict of interest. To curb inappropriate financial relationships between healthcare providers and firms, several states instituted disclosure laws wherein firms were required to publicly declare the payments that they made to physicians. In 2013, this law was rolled out to all 50 states as part of the Affordable Care Act. A consequence of the public disclosure is that all players in the market - patients, physicians, rival firms, and payers (insurance companies and the government) - can observe which physicians are being targeted by which firms as well as the amount of marketing expenditure directed towards each physician. We investigate the causal impact of this increased transparency on subsequent payments between firms and physicians.  Combining machine learning with a quasi-experimental difference-in-differences research design, we find control “clones” for every physician-product pair in the treated states using the Causal Forest algorithm (Wager and Athey 2017). The algorithm is computationally efficient and robust to model mis-specifications, while preserving consistency and asymptotic normality. Using a 29-month national panel covering $100 million in payments between 16 anti-diabetics brands and 50,000 physicians, we find that monthly payments declined by 2% on average due to disclosure. However, there is considerable heterogeneity in the treatment effects, with 14% of the drug-physician pairs showing a significant increase in their monthly payment. Moreover, the decline in payment is smaller among drugs with larger marketing expenditure and prescription volumes, and among physicians who were paid more heavily pre-disclosure and prescribed more heavily.
Thus, while information disclosure did lead to a reduction in payments on average (as intended by policy makers), the effect is limited for big drugs and popular physicians. We further explore potential mechanisms that are consistent with the data pattern. This paper takes the first step towards shedding light on the role of public disclosure policy in solving conflict-of-interest issues in the pharmaceutical industry, especially in reducing payments made by pharmaceutical firms to physicians.

November 3

Xiao Liu (Stern)
Paper: Large Scale Cross-Category Analysis of Consumer Review Content on Sales Conversion Leveraging Deep Learning
               Consumers often rely on product reviews to make purchase decisions, but how consumers use review content in their decision making has remained a black box. In the past, extracting information from product reviews has been a labor-intensive process that has restricted studies on this topic to single product categories or those limited to summary statistics such as volume, valence, and ratings. This paper uses deep learning natural language processing techniques to overcome the limitations of manual information extraction and shed light into the black box of how consumers use review content. With the help of a comprehensive dataset that tracks individual-level review reading, search, as well as purchase behaviors on an e-commerce portal, we extract six quality and price content dimensions from over 500,000 reviews, covering nearly 600 product categories. The scale, scope, and precision of such a study would have been impractical using human coders or classical machine learning models. We achieve two objectives. First, we describe consumers’ review content reading behaviors. We find that although consumers do not read review content all the time, they do rely on review content for products that are expensive or of uncertain quality. Second, we quantify the causal impact of content information of the read reviews on sales. We use a regression discontinuity in time design and leverage the variation in the review content seen by consumers due to newly added reviews. To extract content information, we develop two deep learning models: a full deep learning model that predicts conversion directly and a partial deep learning model that identifies content dimensions. Across both models, we find that aesthetics and price content in the reviews significantly affect conversion across almost all product categories. Review content information has a higher impact on sales when the average rating is higher and the variance of ratings is lower.
Consumers depend more on review content when the market is more competitive, immature or when brand information is not easily accessible. A counterfactual simulation suggests that re-ordering reviews based on content can have the same effect as a 1.6% price cut for boosting conversion.

November 10 (12:00 – 1:15 PM)

Monic Sun (Boston University)
Paper: A Model of Smart Technologies
               We study the optimal pricing and design of smart technologies that are based on artificial intelligence (AI) and can learn consumers’ preferences over time. In particular, we allow the technology to help consumers in two ways. First, based on initial learning, it can help predict the consumers’ next consumption occasion and make recommendations accordingly. Second, it can help consumers save the operational cost of using the service under a repeated consumption occasion. We characterize our results in a two-period model with dynamic pricing. When the firm is only moderately smart, it adopts a conservative pricing strategy, and the main effect of smart technologies is to help consumers save operational cost, which could benefit both the consumer and the firm. As the firm becomes better at predicting the next purchase occasion, it starts to raise the second-period price upon learning consumer preferences, and consumers as a result are more reluctant to use the service in the first period and give the firm an opportunity to learn. Anticipating the consumer’s reactions, the firm finds it optimal to lower the first-period price. Under certain conditions, the reduction of this price can dominate the increase in the firm’s second-period price and lead to a lower total profit across the two periods.
               From a product design perspective, our preliminary analysis suggests that it is not always profitable to increase the smartness of a firm’s technology even when doing so does not involve direct costs. It is also important to note that the “price” in our model can be interpreted as either a direct price that consumers have to pay to the firm or a form of advertising exposure, so that a higher “price” means that the consumer would need to tolerate a larger amount of ads which would bring the firm more revenue from third-party advertisers. Correspondingly, our model has implications not only for the pricing and design of smart technologies and their interaction with consumers, but also for platforms on which advertisers aim to target the users of smart technologies through collaborating with the owners of such technologies.

November 17

Stijn van Osselaer (Cornell)
Paper: The Power of Personal
Since the time of the industrial revolution, technology has improved the well-being of both producers, whose incomes could rise through greater productivity, and consumers, e.g., through greater availability and lower prices of consumer goods. However, this has come at the cost of alienation between consumers and producers (and between consumers and production in general). I will discuss some early results from a budding research program investigating the effects of reducing this alienation (e.g., by identifying producers to consumers and vice versa). I will argue that more recent developments in technology can lead to further alienation and objectification of consumers, but may also be used to bring producers and consumers closer together by making business more personal.

December 1 (12:00 – 1:15 PM)

Garrett Johnson (Northwestern)
Paper: The Online Display Ad Effectiveness Funnel & Carryover: Lessons from 432 Field Experiments
We analyze 432 online display ad field experiments on the Google Display Network. The experiments feature 431 advertisers from varied industries and include, on average, 4 million users each. Causal estimates from 2.2 billion observations help overcome the medium's measurement challenges to inform how and how much these ads work. We find that the campaigns increase site visits (p<10^-212) and conversions (p<10^-39) with median lifts of 17% and 8% respectively. We examine whether the in-campaign lift carries forward after the campaign or instead only causes users to take an action earlier than they otherwise would have. We find that most campaigns have a modest, positive carryover four weeks after the campaign ended with a further 6% lift in visitors and 16% lift in visits on average, relative to the in-campaign lift. We then relate the baseline attrition as consumers move down the purchase process—the marketing funnel—to the incremental effect of ads on the consumer purchase process—the 'ad effectiveness funnel.' We find that incremental site visitors are less likely to convert than baseline visitors: a 10% lift in site visitors translates into a 5-7% lift in converters.

Spring 2017

February 3

Jing Li (Harvard University – Job Market Candidate)
Paper: Compatibility and Investment in the U.S. Electric Vehicle Market
Competing standards often proliferate in the early years of product markets, potentially leading to socially inefficient investment. This paper studies the effect of compatibility in the U.S. electric vehicle market, which has grown ten-fold in its first five years but has three incompatible standards for charging stations. I develop and estimate a structural model of consumer vehicle choice and car manufacturer investment that demonstrates the ambiguous impact of mandating compatibility standards on market outcomes and welfare. Firms under incompatible standards may make investments in charging stations that primarily steal business from rivals and do not generate social benefits sufficient to justify their costs. But compatibility may lead to underinvestment since the benefits from one firm's investments spill over to other firms. I estimate my model using U.S. data from 2011 to 2015 on vehicle registrations and charging station investment and identify demand elasticities with variation in federal and state subsidy policies. Counterfactual simulations show that mandating compatibility in charging standards would decrease duplicative investment in charging stations by car manufacturers and increase the size of the electric vehicle market.

February 13 (Monday, Gould Classroom 4220)

Katalin Springel (UC Berkeley – Job Market Candidate)
Paper: Network Externality and Subsidy Structure in Two-Sided Markets: Evidence from Electric Vehicle Incentives
In an effort to combat global warming and reduce emissions, governments across the world are implementing increasingly diverse incentives to expand the proportion of electric vehicles on the roads. Many of these policies provide financial support to lower the high upfront costs consumers face and build up the infrastructure of charging stations. There is little theoretical or empirical guidance on which governmental efforts work best to advance electric vehicle sales. I model the electric vehicle sector as a two-sided market with network externalities to show that subsidies are non-neutral and to determine which side of the market is more efficient to subsidize depending on key vehicle demand and charging station supply primitives. I use new, large-scale vehicle registry data from Norway to empirically estimate the impact that different subsidies have on electric vehicle adoption when network externalities are present. I present descriptive evidence to show that electric vehicle purchases are positively related to both consumer price and charging station subsidies. I then estimate a structural model of consumer vehicle choice and charging station entry, which incorporates flexible substitution patterns and allows me to analyze out-of-sample predictions of electric vehicle sales. In particular, the counterfactuals compare the impact of direct purchasing price subsidies to the impact of charging station subsidies. I find that between 2010 and 2015 every 100 million Norwegian kroner spent on station subsidies alone resulted in 835 additional electric vehicle purchases compared to a counterfactual in which there are no subsidies on either side of the market. The same amount spent on price subsidies led to only an additional 387 electric vehicles being sold compared to a simulated scenario where there were no EV incentives. However, the relation inverts with increased spending, as the impact of station subsidies on electric vehicle purchases tapers off faster.

February 15 (Wednesday, Atwood Classroom 4230)

Paul Ellickson (University of Rochester)
Paper: Private Labels and Retailer Profitability: Bilateral Bargaining in the Grocery Channel
We examine the role of store-branded “private label” products in determining the bargaining leverage between retailers and manufacturers in the brew-at-home coffee category. Exploiting a novel setting in which the dominant, single-serve technology was protected by a patent preventing private label entry, we develop a structural model of demand and supply-side bargaining and seek to quantify the impact of private labels on bargaining outcomes. We find that, while bargaining parameters are relatively symmetric across retailers and manufacturers, the addition of private labels alters bargaining leverage by improving the disagreement payoffs of the retailers (relative to the manufacturers), thereby shifting bargaining outcomes in the retailers' favor.

February 24

Kristin Diehl (University of Southern California)
Paper: Savoring an Upcoming Experience Affects Ongoing and Remembered Consumption Enjoyment
 Five studies, using diverse methodologies, distinct consumption experiences, and different manipulations, demonstrate the novel finding that savoring an upcoming consumption experience heightens enjoyment of the experience both as it unfolds in real time (ongoing enjoyment) and as it is remembered (remembered enjoyment). Our theory predicts that the process of savoring an upcoming experience creates affective memory traces that are reactivated and integrated into the actual and remembered consumption experience. Consistent with this theorizing, factors that interfere with consumers’ motivation, ability, or opportunity to form or retrieve affective memory traces of savoring an upcoming experience limit the effect of savoring on ongoing and remembered consumption enjoyment. Affective expectations, moods, imagery, and mindsets do not explain the observed findings.

February 27

Soheil Ghili (Northwestern University)
Paper: Network Formation and Bargaining in Vertical Markets: The Case of Narrow Networks in Health Insurance
“Network adequacy regulations” expand patients’ access to hospitals by mandating a lower bound on the number of hospitals that health insurers must include in their networks. Such regulations, however, compromise insurers’ bargaining position with hospitals, which may increase hospital reimbursement rates, and may consequently be passed through to consumers in the form of higher premiums. In this paper, I quantify this effect by developing a model that endogenously captures (i) how insurers form hospital networks, (ii) how they bargain with hospitals over rates by threatening to drop them from the network or to replace them with an out-of-network hospital, and (iii) how they set premiums in an imperfectly competitive market. I estimate this model using detailed data from a Massachusetts health insurance market, and I simulate the effects of a range of regulations. I find that “tighter” regulations, which force insurers to include more than 85% of the hospital systems in the market, raise the average reimbursement rates paid by some insurers by at least 28%. More moderate regulations can expand the hospital networks without causing large hikes in reimbursement rates.

March 3

Rebecca Hamilton (Georgetown University)
Paper: Learning that You Can’t Always Get What You Want: The Effect of Childhood Socioeconomic Status on Decision Making Resilience
Much of the literature on consumer decision making has focused on choice, implicitly assuming that consumers will be able to obtain what they choose. However, the options consumers choose are not always available to them, either due to limited availability of the options or to consumers’ limited resources. In this research, we examine the impact of childhood socioeconomic status on consumers’ responses to choice restriction. Building on prior work showing that perceived agency and effective coping strategies may differ by socioeconomic status, we hypothesize that consumers socialized in low socioeconomic status environments will be more likely to exhibit two adaptive strategies in response to two different forms of choice restriction. In three studies in which participants encountered unavailability of their chosen alternative, we find that participants of various ages with low (vs. high) childhood socioeconomic status display greater persistence in waiting for their initial choices yet greater willingness to shift when the alternative they have chosen is clearly unattainable. We discuss the theoretical implications of these results and how they contribute to a deeper understanding of the long-term effects of socioeconomic status on consumer behavior.

March 10

Kanishka Misra (University of California, San Diego)
Paper: Dynamic Online Pricing with Incomplete Information Using Multi-Armed Bandit Experiments 
Consider the pricing decision for a manager at a large online retailer, such as Amazon.com, that sells millions of products. The pricing manager must decide on real-time prices for each of these products. Due to the large number of products, the manager must set retail prices without complete demand information. A manager can run price experiments to learn about demand and maximize long-run profits. There are two aspects that make online retail pricing different from traditional brick-and-mortar settings. First, due to the number of products, the manager must be able to automate pricing. Second, an online retailer can make frequent price changes.  Pricing differs from other areas of online marketing where experimentation is common, such as online advertising or website design, as firms do not randomize prices to different customers at the same time.
In this paper we propose a dynamic price experimentation policy where the firm has incomplete demand information. For this general setting, we derive a pricing algorithm that balances earning profit immediately and learning for future profits. The proposed approach marries statistical machine learning and economic theory. In particular, we combine multi-armed bandit (MAB) algorithms with partial identification of consumer demand into a unique pricing policy. Our automated policy solves this problem using a scalable distribution-free algorithm. We show that our method converges to the optimal price faster than standard machine learning MAB solutions to the problem. In a series of Monte Carlo simulations, we show that the proposed approach performs favorably compared to methods in computer science and revenue management.

April 7

Selman Erol (MIT, Post Doc)
Paper: Network Hazard and Bailouts
I introduce a model of contagion with endogenous network formation in which a government intervenes to stop contagion. The anticipation of government bailouts introduces a novel channel for moral hazard via its effect on network architecture. In the absence of bailouts, the network formed consists of small, sparsely connected clusters that serve to minimize contagion. When bailouts are anticipated, firms are less concerned with the contagion risk their counterparties face. As a result, they are less disciplined during network formation and form networks that are more interconnected, exhibiting a core-periphery structure wherein many firms are connected to a smaller number of central firms. Interconnectedness within the periphery increases spillovers. Core firms serve as a buffer when solvent and an amplifier when insolvent. Thus, in my model, ex-post time-consistent intervention by the government increases systemic risk and volatility through its effect on network formation with ambiguous welfare effects.

April 21

Derek Rucker (Northwestern University)
Paper: Power and Prosocial Behavior
Prior work has found a low-power state to produce a communal orientation that increases consumers’ propensity to engage in prosocial behaviors. The present research demonstrates that the presence of opportunity costs eliminates and, in fact, suppresses the communal orientation that typically accompanies low-power states. In contrast, for participants in a high-power state, where an agentic orientation is more prevalent, opportunity costs appear to have little effect. As a consequence, replicating prior research, when opportunity costs are not salient, states of low power produce more prosocial behavior than states of high power. However, significantly qualifying past findings, when opportunity costs are salient, low-power states produce less prosocial behavior than high-power states. Evidence consistent with this hypothesis is found across various prosocial behaviors (e.g., donations to charitable organizations and gifts for others). Taken together, the present work changes our understanding of the relationship between consumers’ state of power and prosocial behavior.

Fall 2016

August 24 (Wednesday, Nooyi Classroom 2230)

Jeff Galak (Carnegie Mellon)
Paper: (The Beginnings of) Understanding Sentimental Value
Sentimental value, or the value derived from associations with significant others or events in one’s life, is prevalent, important, and yet under-researched. I present a broad overview of a new research program designed to define this construct, begin to understand its antecedents, and demonstrate some important consequences for individuals and for firms.

September 9

Jan Van Mieghem  (Northwestern)
Paper: Collaboration and Multitasking in Processing Networks: Humans versus Machines
One of the fundamental questions in operations is to determine the maximal throughput or productivity of a process. Does it matter whether humans or machines execute the various steps in the process? If so, how do we incorporate this difference in our planning and performance evaluation? We propose some answers by discussing two examples: a theoretical analysis and an empirical study.

September 23

Rosanna Smith (Yale Doctoral Student)
Paper: “The Curse of the Original: When Product Enhancements Undermine Authenticity and Value”
Companies often introduce enhancements to their products in order to both stay relevant and increase product appeal. However, companies with iconic status (e.g., Converse, Levi’s, Coca-Cola) are often confronted with a unique challenge: When they introduce product enhancements, even those that unequivocally improve functionality or quality, they may encounter fierce consumer backlash. In the present studies, we identify cases in which product enhancements can backfire and decrease consumer interest. We draw on the concept of authenticity and propose that product enhancements in these cases may be seen as a violation of the original vision for the product and, therefore, may be perceived as less authentic. Across four studies, we document this “curse of the original” as well as its associated psychological mechanisms and boundary conditions.

Paper: “Closer to the Creator: Temporal Contagion Explains the Preference for Earlier Serial Numbers”
Consumers demonstrate a robust preference for items with earlier serial numbers (e.g., No. 3/100) over otherwise identical items with later serial numbers (e.g., No. 97/100) in a limited edition set. This preference arises from the perception that items with earlier serial numbers are temporally closer to the origin (e.g., the designer or artist who produced the item). In turn, beliefs in contagion (the notion that objects may acquire a special essence from their past) lead consumers to view these items as possessing more of a valued essence. Using an archival data set and five lab experiments, the authors find that the preference for items with earlier serial numbers holds across multiple consumer domains, including recorded music, art, and apparel. Further, this preference appears to be independent of inferences about the quality of the item, the salience of the number, or beliefs about market value. Finally, when serial numbers no longer reflect beliefs about proximity to the origin, the preference for items with earlier serial numbers is attenuated. The authors conclude by demonstrating boundary conditions of this preference in the context of common marketing practices.

October 7 (Noon – 1:30 PM, Atwood Classroom 4230)

Jia Liu (Columbia University)
Paper: A Semantic Approach for Estimating Consumer Content Preferences from Online Search Queries
We develop an innovative topic model, Hierarchically Dual Latent Dirichlet Allocation (HDLDA), which identifies not only the topics in search queries and webpages but also how the topics in search queries relate to the topics in the corresponding top search results. Using the output from HDLDA, consumers’ content preferences may be estimated on the fly from their search queries. We validate our proposed approach across different product categories using an experiment in which we observe participants’ content preferences, the queries they formulate, and their browsing behavior. Our results suggest that our approach can help firms extract and understand the preferences revealed by consumers through their search queries, which in turn may be used to optimize the production and promotion of online content.
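HDLDA itself is a dual hierarchical model, but the underlying machinery is LDA-style topic inference. Purely as a reminder of that model family, here is a minimal collapsed Gibbs sampler for plain LDA; it is a simplified stand-in, not the authors' model, and the toy corpus is invented:

```python
import random
from collections import Counter

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, n_iter=100, seed=0):
    """Collapsed Gibbs sampling for plain LDA.

    docs: list of token lists. Returns (token topic assignments,
    per-topic word counts).
    """
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})               # vocabulary size
    # z[d][i]: topic of token i in document d, initialised at random.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    n_dk = [[0] * n_topics for _ in docs]               # doc-topic counts
    n_kw = [Counter() for _ in range(n_topics)]         # topic-word counts
    n_k = [0] * n_topics                                # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                             # remove current assignment
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                # Conditional p(z = j | everything else), up to a constant.
                weights = [(n_dk[d][j] + alpha) *
                           (n_kw[j][w] + beta) / (n_k[j] + V * beta)
                           for j in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k                             # resample and restore
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return z, n_kw

# Invented toy corpus: two "queries" with disjoint vocabularies.
docs = [["price", "buy", "deal", "price"], ["goal", "match", "team", "goal"]]
z, topic_words = lda_gibbs(docs, n_topics=2)
```

The paper's contribution is the *dual* structure linking query topics to result-page topics; the sampler above shows only the shared inferential core.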

October 21

Joe Alba (University of Florida)
Paper: Belief in Free Will: Implications for Practice and Policy
The conviction one holds about free will serves as a foundation for the views one holds about the consumption activities of other consumers, the nature of social support systems, and the constraints that should or should not be placed on industry. Across multiple paradigms and contexts, we assess people’s beliefs about the control consumers have over consumption activities in the face of various constraints on agency. We find that beliefs regarding personal discretion are robust and resilient, consistent with our finding that free will is viewed as noncorporeal. Nonetheless, we also find that these beliefs are not monolithic but vary as a function of identifiable differences across individuals and the perceived cause of behavior, particularly with regard to physical causation. Taken together, the results support the general wisdom of libertarian paternalism as a framework for public policy but also point to current and emerging situations in which policy makers might be granted greater latitude.

October 28

Tony Ke (MIT)
Paper: Optimal Learning before Choice
A Bayesian decision maker chooses among multiple alternatives with uncertain payoffs and an outside option with known payoff. Before deciding which alternative to adopt, the decision maker can sequentially purchase multiple informative signals on each of the available alternatives. To maximize the expected payoff, the decision maker solves a problem of optimal dynamic allocation of learning efforts as well as optimal stopping of the learning process. We show that the optimal learning strategy is of a consider-then-decide type: the decision maker considers an alternative for learning or adoption if and only if the expected payoff of the alternative is above a threshold. Given several alternatives in the consideration set, we find that it is sometimes optimal to learn from an alternative that does not have the highest expected payoff, all other characteristics of the alternatives being the same. If enough positive signals subsequently arrive, the decision maker switches to learning about the better alternative; otherwise the decision maker rules this alternative out of consideration and adopts the currently most preferred one. This strategy works because it minimizes the decision maker's learning efforts, and it is optimal when the outside option is weak and the decision maker's beliefs about the different alternatives are in an intermediate range.
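The consider-then-decide structure lends itself to a small simulation. The sketch below is a toy heuristic in the spirit of the abstract, not the paper's optimal policy: alternatives whose posterior mean beats the outside option stay in consideration, binary signals are sampled one at a time from the most promising alternative, and adoption occurs once enough signals accumulate. The `adopt_at` threshold and the Beta-Bernoulli setup are invented for illustration:

```python
import random

def consider_then_decide(priors, true_quality, outside=0.5,
                         adopt_at=10, n_max=200, seed=1):
    """Toy sequential-learning heuristic with a consider-then-decide flavor.

    priors:       alternative -> (a, b) Beta prior over its success rate
    true_quality: alternative -> true probability of a positive signal
    outside:      known payoff of the outside option
    """
    rng = random.Random(seed)
    post = {k: list(ab) for k, ab in priors.items()}
    # Consider only alternatives whose prior mean beats the outside option.
    considered = {k for k, (a, b) in post.items() if a / (a + b) > outside}
    for _ in range(n_max):
        if not considered:
            return "outside"              # every alternative ruled out
        # Learn about the most promising considered alternative (ties by name).
        k = max(sorted(considered), key=lambda j: post[j][0] / sum(post[j]))
        a, b = post[k]
        if a + b >= adopt_at and a / (a + b) > outside:
            return k                      # enough evidence: adopt
        if rng.random() < true_quality[k]:
            post[k][0] += 1               # positive signal
        else:
            post[k][1] += 1               # negative signal
        if post[k][0] / sum(post[k]) <= outside:
            considered.discard(k)         # rule it out of consideration
    return max(sorted(considered), key=lambda j: post[j][0] / sum(post[j]))

# Invented example: A is genuinely good, B is not; outside option pays 0.5.
choice = consider_then_decide({"A": (2, 1), "B": (2, 1)}, {"A": 0.8, "B": 0.3})
```

Note how a run of bad signals can knock an alternative out of consideration entirely, after which the decision maker never returns to it, mirroring the paper's consideration-set logic.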

November 4

Christopher Hsee (University of Chicago)
Paper: TBA
Abstract forthcoming.

November 18 (Noon – 1:30 PM)

Scott Schriver (Columbia University)
Paper: Optimizing Content and Pricing Strategies for Digital Video Games

The video game industry has experienced a wave of disruption as consumers rapidly shift to acquiring and consuming content through digital channels. Incumbent game publishers have struggled to adapt their content and pricing strategies to shifting consumption patterns and increased competition from low-cost independent suppliers. Recently, game publishers have pursued new business models that feature downloadable content (DLC) services offered in conjunction with or as a replacement for traditional physical media. While service-based models can potentially extract additional surplus from the market by allowing for more customized content bundles and pricing than physically distributed media, exploiting these opportunities poses a challenge to firms, which must optimize their offerings over a formidably complex decision space.

In this paper, we develop a structural framework to facilitate the recovery of consumer preferences for game content and the optimization of firm content/price strategies.  Our approach is to leverage rich covariation in observed content consumption and DLC service subscriptions to infer consumer content valuations and price sensitivities. We devise a joint model of video game activity and demand for downloadable content, where consumers sequentially make (discrete) DLC subscription choices followed by (continuous) choices of how much to play.  Our model accounts for forward-looking consumer expectations about declining content prices and attendant concerns for dynamic selection bias in our demand estimates.  We document evidence of heterogeneous preferences for content and significant effects of DLC availability on game usage.  Our counterfactual experiments suggest that compressing the DLC release cycle and moving to a recurring fee structure are both viable ways to increase revenues.

December 2

Drazen Prelec (MIT)
Paper: Aggregating information: The meta-prediction approach
The twin problems of eliciting and aggregating information arise at many levels, including the social (wisdom-of-the-crowd) and the neural (ensemble voting). The first, the elicitation problem, involves crafting incentives that ensure incoming signals are honestly reported; the second, the aggregation problem, involves selecting the best value from a distribution of reported values. Current aggregation methods take as input individuals' beliefs about the right answer, expressed with votes or, more precisely, with probabilities. However, one can show that any belief-based algorithm can be defeated by even simple problems. An alternative approach, which is guaranteed to work in theory, assumes that individuals with different opinions share an implicit possible-worlds model whose parameters are unknown but can be estimated from their predictions of the opinion distribution. Using this additional information, Bayes' rule selects the individuals who would be least surprised by known evidence; their answer defines the best estimate for the group. I will review some old and some new evidence bearing on this approach, and discuss extensions to market research.
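One concrete selection rule from this research program is the "surprisingly popular" answer: ask respondents both for their own answer and for a prediction of how others will answer, then pick the option whose actual vote share most exceeds its predicted share. A minimal sketch, with invented survey numbers:

```python
def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual vote share most exceeds the mean
    predicted vote share -- the 'surprisingly popular' rule.

    votes:       list of chosen answers, one per respondent
    predictions: list of dicts, each mapping answer -> that respondent's
                 predicted share of votes for it
    """
    options = sorted(set(votes))
    actual = {o: votes.count(o) / len(votes) for o in options}
    predicted = {o: sum(p.get(o, 0.0) for p in predictions) / len(predictions)
                 for o in options}
    # Surprise = actual share minus mean predicted share.
    return max(options, key=lambda o: actual[o] - predicted[o])

# Invented data in the style of the classic "Is Philadelphia the capital
# of Pennsylvania?" example: the majority wrongly votes yes, but both
# camps predict an even higher yes share, so "no" is surprisingly popular.
votes = ["yes"] * 60 + ["no"] * 40
predictions = [{"yes": 0.9, "no": 0.1}] * 60 + [{"yes": 0.7, "no": 0.3}] * 40
answer = surprisingly_popular(votes, predictions)
```

Here the actual "no" share (0.40) exceeds its mean predicted share (0.18), while "yes" falls short of its prediction, so the rule selects "no" despite the majority vote.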

December 9

Sunita Sah (Cornell University)
Paper: Conflict of Interest Disclosure and Appropriate Restraint: The Power of Professional Norms
Conflicts of interest present an incentive for professionals to give biased advice. Disclosing, or informing consumers about, the conflict is a popular solution for managing such conflicts. Prior research, however, has found that advisors who disclose their conflicts give more biased advice. Across three experiments, using monetary incentives to create real conflicts of interest, I show that disclosure can cause advisors to significantly decrease or increase bias in advice based on the context in which the advice is provided. Drawing from norm focus theory and the logic of appropriateness literature, this investigation examines how professional norms cause advisors to either succumb to bias (by believing that disclosure absolves them of their responsibility—caveat emptor) or restrain from bias (by reminding advisors of their responsibility towards advisees). Professional norms significantly alter how disclosure affects advisors—increasing bias in advice in business settings but decreasing bias in medical settings. These findings not only disconfirm previous assumptions regarding conflict of interest disclosures but also highlight the importance of context when understanding the potential and pitfalls of disclosure. 

December 16

Przemek Jeziorski  (UC Berkeley)
Paper: Adverse Selection and Moral Hazard in a Dynamic Model of Auto Insurance
We measure risk-related private information and investigate its importance in a setting where individuals are able to modify risk ex-ante through costly effort. Our analysis is based on a model of endogenous risk production and contract choice. It exploits data from multiple years of contract choices and claims by customers of a major Portuguese auto insurance company. We additionally use our framework to investigate the relative effectiveness of dynamic versus static contract features in incentivizing effort and inducing sorting on private risks, as well as to assess the welfare costs of mandatory liability insurance.