The Yale Marketing Seminar Series presents recent research papers in marketing. The goal is to bring researchers from other universities to the Yale campus to stimulate the exchange of ideas and deepen understanding of marketing trends. These seminars are geared toward faculty and PhD students interested in marketing.
The Thursday Series runs from 11:35 am to 12:50 pm in Evans Hall, Room 2210 (Allison Classroom). A Zoom option is available. Lunch will be served before the seminar outside Classroom 2210.
Spring 2025
May 1
Alice Moon (Assistant Professor of Marketing, Georgetown University)
Paper: TBD
Abstract: TBD
April 24
Akshina Banerjee (Assistant Professor of Marketing, Ross School of Business, University of Michigan)
Paper: TBD
Abstract: TBD
April 17
Krista Li (Blanche “Peg” Philpott Professor and Associate Professor of Marketing at the Kelley School of Business, Indiana University)
Paper: Platform Certification and Consumer Verification
Abstract: The rise of online platforms has brought public concerns about the quality of products sold on platforms by third-party sellers. In response, many platforms have launched certification programs to endorse selected sellers. Consumers are increasingly gaining access to tools that enable them to verify product quality independently. This raises two key questions: How should platforms design certification programs and set commission rates when consumers either have or lack the ability to verify products? Furthermore, how do the two quality assurance mechanisms—platform certification and consumer verification—interact with sellers' price signaling to shape market outcomes separately and jointly? We use a Bayesian persuasion model to analyze the platform's information design, accounting for these strategic interplays. Our analysis reveals three key insights: First, when consumers lack independent verification, the platform may intentionally withhold sellers' quality information in its certification design. As the opportunity cost for sellers to join the platform increases, the platform adjusts the informativeness of certification and the commission rate in a nonmonotonic manner. Second, introducing consumer verification can induce sellers to distort prices, prompting the platform to implement a more transparent certification program to mitigate price distortions. Third, although both certification and verification enhance market transparency, they affect the involved parties differently. Platform certification consistently benefits the platform and sellers but may reduce consumer or social welfare in the absence of consumer verification. Surprisingly, empowering consumers with verification tools can harm consumers and society regardless of whether a certification program is in place.
April 10
Stephen Spiller (Professor of Marketing and Behavioral Decision Making, Associate Dean of the Anderson Ph.D. Program, University of California, Los Angeles)
Paper: Widely-Used Measures of Overconfidence Are Confounded With Ability
Abstract: The overconfidence concept is one of the great success stories of psychological research, influencing research in other disciplines as well as discourse in the popular press, business, and public policy. Relative to underconfidence, overconfidence at various tasks is purportedly associated with greater narcissism, lower anxiety regarding those tasks, higher status, greater savings, more planning, and numerous other differences. Yet much of this evidence may merely reflect that there are associations with ability rather than overconfidence. This results from two underappreciated properties of typical measures of overconfidence. First, performance is an imperfect measure of ability; accounting for performance does not sufficiently account for ability. Second, self-evaluations of performance should reflect ability in addition to performance; because performance is ambiguous, people should use prior beliefs about their ability. I show these basic principles imply that commonly-used measures of overconfidence are confounded with ability. I support these analytical results by reexamining previously-published findings. In the first analysis, I find overconfidence predicts subsequent performance, consistent with overconfidence as a signal of ability but inconsistent with overconfidence as a bias. In the second set of analyses, I find the purported association between overconfidence and other proposed constructs can be adequately explained through ability alone. I close with recommendations on approaches to recognize and reduce the extent of the problem. This model serves as a stark reminder: when researchers propose that differences in overconfidence are associated with other behaviors, beliefs, or evaluations, they must account for the possibility that differences in ability provide a sufficient explanation.
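The confound can be made concrete with a minimal simulation sketch (not from the paper; the data-generating process, the weight w, and all variable names are illustrative assumptions): when self-evaluations rationally blend a noisy performance signal with knowledge of one's own ability, the usual "self-evaluation minus performance" score carries information about ability, so it predicts later performance once current performance is controlled for, even though no one in the simulation is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative data-generating process with no true overconfidence anywhere.
ability = rng.normal(0, 1, n)            # latent ability
perf1 = ability + rng.normal(0, 1, n)    # noisy performance on the focal task
perf2 = ability + rng.normal(0, 1, n)    # performance on a later, similar task

# Self-evaluation blends the ambiguous performance signal with prior
# knowledge of one's own ability (the weight w = 0.5 is arbitrary).
w = 0.5
self_eval = w * ability + (1 - w) * perf1

# The typical overconfidence measure: self-evaluation minus performance.
overconf = self_eval - perf1

# Raw correlation with later performance is roughly zero ...
print(np.corrcoef(overconf, perf2)[0, 1])

# ... but once current performance is "controlled for," the score predicts
# later performance, because conditional on perf1 it is a proxy for ability.
X = np.column_stack([np.ones(n), overconf, perf1])
beta = np.linalg.lstsq(X, perf2, rcond=None)[0]
print(beta[1])  # positive coefficient on the overconfidence score
```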
April 3
Daniel Bartels (Leon Carroll Marshall Professor of Marketing, University of Chicago Booth School of Business)
*Cancelled.
April 3
Andrey Fradkin (Dean’s Research Scholar, Assistant Professor of Marketing, Boston University Questrom School of Business and an affiliate of the Boston University Economics Department)
Paper: Data Sharing and Website Competition: The Role of Choice Architecture
Abstract: Regulations like the GDPR require firms to secure consumer consent before using data. In response, some firms employ “dark patterns,” interface designs that encourage data sharing. We study the causal effects of these designs on consumer consent choices and explore how these effects vary across individuals, firms, and the frequency of these choices. We ran a field experiment in which participants installed a browser extension that randomized cookie consent interfaces as they browsed the internet. We find that consumers accept all cookies over half of the time absent dark patterns, with substantial preference heterogeneity across users. In addition, users frequently close the window without making an active choice. When the interface hides certain options behind an extra click, users are significantly more likely to select the options that remain visible. Purely visual manipulations have much smaller effects. Larger and better-known firms achieve higher consent rates, giving them a competitive advantage, but dark patterns do not exacerbate this advantage. Our structural model shows that the consumer-surplus-maximizing consent banner increases welfare by 11% compared to the most common banner, while reducing consent rates by 17%. However, even the best banner is overshadowed by the benefit of not having to interact with banners at all, which can increase consumer surplus by up to 43%.
March 27
Samsun Knight (Assistant Professor at the University of Toronto’s Rotman School of Management)
Paper: Emotion Sequences and Persuasive Stories: Evidence from Online Fundraising and LLM-Assisted Rewrites
Abstract: What types of stories are most persuasive? In this paper, we introduce a new taxonomy of short-form story types based on specific emotional progressions across each half of the text, or "emotion sequences." Using transformer-based classifiers, we analyze over 14,000 medical fundraising pitches from GoFundMe.org and estimate the impact of particular emotion sequences on fundraising success. Among other findings, we show that medical fundraising pitches that begin with a sad tone and end on a caring tone are significantly more likely to succeed. We then develop a novel framework for testing the generalizability of our field findings using crowd-sourced, LLM-assisted rewrites to introduce these emotion sequences to randomly-selected fundraisers. We demonstrate that solely-human rewrites often fail due to skill deficits, but LLM-assisted rewrites allow researchers to effectively enact and test the out-of-sample application of research results using online participants. With this approach, we establish that pitches rewritten to feature our focal emotion sequences see a significant boost in perceived quality, even for some sequences associated with lower success in observational data, while placebo rewrites produce null effects.
March 6
Oded Netzer (Vice Dean for Research, Arthur J. Samberg Professor of Business, Columbia Business School)
Paper: Leveraging repeated marketing interventions for effective targeting/personalization
Abstract: Targeting customers is at the heart of marketing strategy. To do so effectively, firms need to understand the effectiveness of their targeting efforts across customers, over time, and for different levers that are at the firm’s disposal. Recent advances in data collection, analyses, and technology have fueled the ability of firms to personalize their marketing efforts. To learn which marketing actions are likely to generate the most positive response, marketers often run A/B tests. Researchers have armed firms with tools that leverage experimentation to estimate heterogeneous treatment effects in order to inform personalized targeting. These approaches often use machine learning (ML) tools to relate customer characteristics observed prior to the intervention to the magnitude of the impact of the intervention. Firms can then use these customer characteristics to select targets for whom the marketing intervention is likely to result in stronger effects. However, while these approaches provide estimates of heterogeneous treatment effects with respect to the observed intervention(s), they often fail to quantify how much of the treatment effect is due to (1) the design of the intervention, (2) the customer’s sensitivity to interventions, and (3) contextual factors such as time of the day or day of the week. Furthermore, these one-shot approaches fail to recognize the fact that the same customer is often exposed to multiple interventions over time. As a result, the insights may not be fully generalizable to interventions conducted in a different context, on a different date, or with a somewhat different design. The objective of this research is twofold. First, we explore the heterogeneity and dynamics of marketing interventions in the context of field experiments, decomposing the effectiveness of the firm’s marketing actions into aspects related to the design of the intervention, the heterogeneity in susceptibility of customers to marketing actions, and other contextual factors. Second, we propose a modeling framework that is simple and scalable and that leverages customers’ exposure to multiple treatments over time. More specifically, we develop a hierarchical Bayesian shrinkage approach to model the treatment effect of marketing interventions as a function of observed and unobserved campaign characteristics, time, and unobserved customer-level heterogeneous effects. For the empirical application, we observe data on nearly 3 million users who, over the course of 2.5 years, were exposed to approximately 2,000 randomized marketing interventions (A/B tests), with an average of over 40 interventions per user. The repeated exposure to interventions allows us to study how much of the treatment effect variation is due to campaign characteristics (e.g., incentives with an educational goal versus incentives to prevent churn), individual sensitivity to marketing actions, and contextual effects (such as when the promotion was run). Furthermore, we can investigate whether there are signs of ‘over-targeting’, that is, whether marketing campaigns become less effective once customers have been exposed to interventions in the past.
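As a rough illustration of the kind of partial pooling described above, the sketch below fits a hierarchical Bayesian model in which each observed lift estimate decomposes into an overall mean, a campaign-level effect, and a customer-level sensitivity, so the relative sizes of the two group-level variances indicate how much treatment-effect variation comes from campaign design versus individual susceptibility. The toy data, priors, and the use of PyMC are assumptions for illustration only, not the authors' actual model.

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical data layout: one row per user-campaign exposure, with an
# estimated lift y and its standard error se from that A/B test cell.
rng = np.random.default_rng(2)
n_users, n_campaigns, n_obs = 50, 20, 2000
user = rng.integers(n_users, size=n_obs)
camp = rng.integers(n_campaigns, size=n_obs)
true_user = 0.005 * rng.standard_normal(n_users)
y = rng.normal(0.02 + 0.01 * (camp % 2) + true_user[user], 0.02)
se = np.full(n_obs, 0.02)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 0.1)                    # average lift overall
    sd_camp = pm.HalfNormal("sd_camp", 0.05)          # spread due to campaign design
    sd_user = pm.HalfNormal("sd_user", 0.05)          # spread due to customer sensitivity
    a_camp = pm.Normal("a_camp", 0.0, sd_camp, shape=n_campaigns)
    b_user = pm.Normal("b_user", 0.0, sd_user, shape=n_users)
    theta = mu + a_camp[camp] + b_user[user]          # effect for each user-campaign pair
    pm.Normal("obs", theta, se, observed=y)           # noisy per-cell lift estimates
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# Comparing the posteriors of sd_camp and sd_user gives a (toy) answer to how
# much of the variation is campaign-driven versus customer-driven.
print(az.summary(idata, var_names=["mu", "sd_camp", "sd_user"]))
```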
March 4 (Tuesday*)
Ilya Morozov (Assistant Professor of Marketing, Kellogg School of Management, Northwestern University)
Paper: Space Exploration
Abstract: When people search, they seem to learn from past discoveries in deciding where to look next. We design an experiment to understand if there are discernable patterns to how individuals learn where to search. We vary the complexity of the search environment in natural ways, manipulating how much people learn about the state space with each search. Using experimental data, we show that participants use their knowledge of spatial correlations when choosing where to search, honing their strategies to specific search settings. Moreover, they better identify and avoid dominated search strategies in less complex search environments.
February 27
Lan Luo (Professor of Marketing, Marshall School of Business, University of Southern California)
Paper: Wisdom of the AI Crowd? Can We Detect AI-Generated Fake Product Reviews?
Abstract: In the era of LLMs such as ChatGPT, many platforms and policy makers have become increasingly concerned about the proliferation of AI-generated misinformation, including AI-generated fake reviews. In response to such concerns, researchers and companies have invested heavily to develop multiple AI detectors across various domains. Within this context, we examine the performance of several prevailing LLM detectors in reliably distinguishing human-written reviews, AI-generated fake reviews, and human reviews assisted by AI. Using a large volume of real-world review data from multiple platforms and lab experiments, we find that most prevailing LLM detectors struggle to distinguish these three types of reviews with high accuracy. Next, we delve into the underlying mechanisms as to why these detectors do not work well within the empirical context of product reviews. We further examine whether and how these detectors’ predictions can be explained by review and product characteristics. Lastly, we seek possible remedies by leveraging paid, fine-tuned, and human detectors. Although some of these explorations present promising improvements, our findings suggest that there might be a potentially fundamental barrier to reliably distinguishing unwanted AI-generated fake reviews from human reviews assisted by AI. As such, particularly within the context of product reviews, the “arms race” between unwanted AI-generated fake reviews and AI detectors is likely to be ongoing and nuanced.
February 6
Jessie Sun (Assistant Professor of Psychological & Brain Sciences, Arts & Sciences at Washington University in St. Louis)
Paper: Moral Opportunities and Tradeoffs in Everyday Life
Abstract: Moral psychological research has overwhelmingly relied on hypothetical scenarios and lab experiments. As a result, existing research has little to say about how people experience morality in real-world contexts. For example, how often do people notice moral opportunities and face moral tradeoffs in everyday life? To address these questions, I will present findings from two Day Reconstruction Method studies. In Study 1, 377 U.S. participants reported whether each of 12 virtues (e.g., compassion, honesty, loyalty) was relevant, their momentary enactments of the virtues, and perceived tradeoffs between pairs of virtues during episodes from their daily life (12,385 total), across 7 days. In Study 2, 608 participants provided detailed information about one tradeoff episode. Participants perceived an opportunity to express at least one of the 12 virtues 77% of the time. Virtue “tradeoffs” (Study 2; 31.7%) were perceived more frequently than “conflicts” (15.8%; Study 1), and most often involved honesty or courage. Tradeoffs were largely resolved by prioritizing one of the two virtues (53.9% of decisions), compared to making compromises (28.4%), finding a way to show high levels of both virtues (14.2%), or avoiding decisions (3.5%). Together, these results shed light on how people experience and navigate moral opportunities and tradeoffs in everyday life.
January 30
Remi Daviet (Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin-Madison)
Paper: Leveraging Generative AI to Create Visual Content in Digital Advertising
Abstract: Generative artificial intelligence for image synthesis has the potential to transform the digital advertising industry. However, a wide range of uncertainties persists regarding its integration into traditional advertising processes, including finding effective implementations, training methodologies, and achievable performance gains. Moreover, the large space of variations that can be generated makes it challenging to identify content that is both performing well and compatible with a brand's standards or campaign objectives. This paper addresses these concerns by proposing a novel creative design process combining generative AI with two deep Bayesian prediction models. The first model identifies potential high-performance visuals, while the second assesses acceptability by the brand. Both models undergo sequential training using optimized batches of creatives, allowing us to minimize costs and required training set size. We demonstrate the effectiveness of our approach with a field application to scene setting in ads for an outdoor activity company. Our results show that our approach can generate high performing visuals consistently, although with potentially less variety than what a human designer could produce. By providing a framework guiding the integration of generative AI in digital advertising, this paper seeks to bridge the gap between theoretical potential and effective practical applications.
January 23
Dan Schley (Associate Professor, Department of Marketing Management, Rotterdam School of Management, Erasmus University)
Paper: Measurement Invariance Across Conditions: A Case Study of Material and Experiential Happiness
Abstract: Experimentation is often called the strongest tool for establishing causality in the proverbial scientific toolbelt. But the claim of “causality” comes with the burden of many explicit and implicit experimental assumptions. In this research, I investigate the role of measurement invariance across experimental conditions – something implicitly assumed to hold, and as a consequence previously ignored in experimental research. When manipulating treatment X (e.g., watching funny versus neutral videos), that manipulation is assumed to cause a change in a relevant latent construct (e.g., happiness). This construct is then measured using some operationalized dependent variable (e.g., a 5-item happiness scale). If happiness in one condition is higher than in the other, we usually take that to mean that the treatment caused happiness. That is true under the tacit assumption that the dependent variable is measuring the same construct in both conditions. Does, for example, a happiness scale measure the same latent factor of “happiness” after watching a funny video as it does after watching a neutral video? This may be the case in many instances, but need not always be true. In this talk, I introduce the potential issues that occur if there is not measurement invariance across conditions and present some initial simulations for developing a test of measurement invariance across conditions. I will demonstrate the implications in the context of a recent hot topic in consumer research: the “experiential advantage,” whereby consumers derive more “happiness” from consuming experiences compared to consuming material goods.
Fall 2024
December 12
Robyn LeBoeuf (Joyce and Chauncy Buchheit Distinguished Professor in Marketing, Olin Business School at Washington University)
Paper: Depletion Aversion: People Avoid Spending Accounts Down to Zero
Abstract: We document the phenomenon of “depletion aversion”: people avoid spending from accounts when doing so would completely deplete those accounts, even when depleting the accounts might make financial sense. For example, we find that people would rather pay a $500 expense from an account with a $1000 balance than from one with a $500 balance, even if the $1000 account pays interest at a higher rate. We consider why depletion aversion may arise, and we identify responsibility as a key mechanism underlying it: People find depleting accounts to be irresponsible, even when depletion is financially beneficial. Consistent with this proposal, we find that depletion aversion does not arise for spending-oriented accounts where depletion seems more responsible (e.g., gift cards, or accounts earmarked for a specific expense, such as a vacation savings account). We consider implications for mental accounting and consumer saving and spending behavior.
November 7
William Ryan (Ph.D. Candidate in Marketing at the Haas School of Business, University of California Berkeley)
Paper: People Are (Shockingly) Bad at Valuing Hedges
Abstract: People often must plan for the worst. They purchase product warranties, insure their homes, and proactively make backup plans. People should be willing to pay more to hedge against bad outcomes that are more likely to happen. Across 14 studies (N = 5,591) we find that decision-makers instead behave as though they almost fully ignore probability information when hedging against bad outcomes. As a result, they dramatically underinvest in hedges they are likely to need, while overinvesting in those that are unlikely to be helpful. This behavior occurs in both abstract settings (e.g. hedging lotteries) and naturalistic ones (e.g. buying warranties or insurance), occurs with fully incentive-compatible decisions, and is robust across a wide range of probabilities and outcomes. In our studies, we test a variety of possible explanations. Ultimately, we find support for an account in which decision-makers focus almost solely on the bad outcome they are hedging against, while ignoring how likely that bad outcome is to occur. Interestingly, when the same participants invest to make a good outcome better (rather than hedge to make a bad outcome less bad), they are sensitive to probabilities. Leveraging this result, we find that a reframing of hedges that makes them appear more like investments can make hedging decisions better calibrated to the likelihood of an outcome.
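For reference, the normative benchmark invoked above is simple expected-value arithmetic: a risk-neutral buyer's maximum willingness to pay for a hedge that fully offsets a loss scales linearly with the probability of that loss. The numbers below are illustrative, not taken from the studies.

```python
# Illustrative only: fair value of a hedge that fully offsets a $500 loss,
# for a risk-neutral buyer, at different loss probabilities.
loss = 500
for p in (0.05, 0.25, 0.50, 0.90):
    fair_value = p * loss  # expected payout of the hedge
    print(f"loss probability {p:.0%}: hedge worth up to ${fair_value:.0f}")
```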
October 31
Rafael Batista (Doctoral Student, Behavioral Science, The University of Chicago, Booth School of Business)
Paper: Words that Work: Using Language to Generate Hypotheses
Abstract: In this paper, we examine how specific features of language drive consumer behavior. Our contribution, however, lies not in testing specific hypotheses; rather, it is in demonstrating a data-driven process for generating them. We devise an approach that generates interpretable hypotheses from text by integrating large language models (LLMs), machine learning (ML), and psychology experiments. Using a dataset with over 60,000 headlines (and over 32,000 A/B tests), we produce human-interpretable hypotheses about what features of language might affect engagement. We then test a subset of these hypotheses out-of-sample using two datasets: one consisting of 1,600 A/B tests and another containing over 5,000 social media posts. Our approach indeed facilitates discovery. For instance, we find that describing physical reactions significantly increases engagement. In contrast, focusing on positive aspects of human behavior decreases it. A third hypothesis posited that referring to multimedia (e.g., GIFs, videos) would influence engagement, and it does, although it significantly increases engagement in one domain while significantly decreasing it in another. This approach extends beyond a single application. In general, it offers a data-driven method for discovery that can convert unstructured text data into insights that are interpretable, novel, testable, and generalizable. It does so while maintaining a transparent role for both human researchers and algorithmic processes. This approach offers a practical tool to researchers, organizations, and policymakers seeking to aggregate insights from multiple marketing experiments.
October 24
Claire Robertson (Post-Doctoral Fellow, Department of Psychology, New York University)
Paper: Negativity drives online news consumption
Abstract: Online media is important for society in informing and shaping opinions, hence raising the question of what drives online news consumption. Here we analyse the causal effect of negative and emotional words on news consumption using a large online dataset of viral news stories. Specifically, we conducted our analyses using a series of randomized controlled trials (N = 22,743). Our dataset comprises ~105,000 different variations of news stories from Upworthy.com that generated ~5.7 million clicks across more than 370 million overall impressions. Although positive words were slightly more prevalent than negative words, we found that negative words in news headlines increased consumption rates (and positive words decreased consumption rates). For a headline of average length, each additional negative word increased the click-through rate by 2.3%. Our results contribute to a better understanding of why users engage with online media.
Paper: Inside the funhouse mirror factory: How social media distorts perceptions of norms
Abstract: The current paper explains how modern technology interacts with human psychology to create a funhouse mirror version of social norms. We argue that norms generated on social media often tend to be more extreme than offline norms, which can create false perceptions of norms, known as pluralistic ignorance. We integrate research from political science, psychology, and cognitive science to explain how online environments become saturated with false norms, who is misrepresented online, what happens when online norms deviate from offline norms, where people are affected online, and why expressions are more extreme online. We provide a framework for understanding and correcting for the distortions in our perceptions of social norms that are created by social media platforms. We argue that the funhouse mirror nature of social media can be pernicious for individuals and society by increasing pluralistic ignorance and false polarization.
October 17
Ella J. Xu (Ph.D. Candidate in the Marketing Department, Quantitative Marketing, Stern School of Business, New York University)
Paper: An Explainable and Theory-Driven Deep Learning Architecture for Consumer Search and Consideration Sets
Abstract: To capture the complexity of search behavior at the consideration stage, we develop an explainable deep learning model (recurrent neural networks) based on theories of consumer search. One feature of our model is that it flexibly allows learning about both product attributes and preference weights through search, rather than learning about only one of these primitives as assumed in prior work. We apply this model to a dataset of consumers searching for smartphones, which includes attribute-level eye-tracking data and search refinement tool usage. We find that our model improves on the predictions of more flexible but atheoretical deep learning models and maintains high accuracy even for small data samples. In addition, we show that consumers mainly search to learn their preference weights early on and switch to learning product attributes about 40% into their search process. Finally, we derive a number of other managerially valuable insights into the formation of consideration sets.
October 10
Ruohan Zhan (Assistant Professor, Department of Industrial Engineering and Decision Analytics, Hong Kong University of Science and Technology)
Paper: Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach
Abstract: Recommender systems are essential for content-sharing platforms, curating personalized content for viewers. To evaluate updates to recommender systems targeting content creators, platforms frequently rely on creator-side randomized experiments. The treatment effect measures the change in outcomes when a new algorithm is implemented compared to the status quo. We show that the standard difference-in-means estimator can lead to biased estimates due to recommender interference, which arises when treated and control creators compete for exposure. We propose a “recommender choice model” that describes which item gets exposed from a pool containing both treated and control items. By combining a structural choice model with neural networks, this framework directly models the interference pathway while accounting for rich viewer-content heterogeneity. We construct a debiased estimator of the treatment effect and prove it is asymptotically normal with potentially correlated samples. We validate our estimator's empirical performance with a field experiment on the Weixin short-video platform. In addition to the standard creator-side experiment, we conduct a costly double-sided randomization design to obtain a benchmark estimate free from interference bias. We show that the proposed estimator yields results comparable to the benchmark, whereas the standard difference-in-means estimator can exhibit significant bias and even produce reversed signs.
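A toy simulation (not the paper's model or data; every name and parameter below is an illustrative assumption) of the interference channel described above: when treated and control creators compete for a fixed number of exposure slots, the naive difference in means reports a large positive effect even though rolling the new algorithm out to everyone would leave average exposure unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n_creators, top_k, boost, n_sims = 1000, 100, 0.5, 200

def exposure(treated_mask, rng):
    """Which creators land in the fixed-size recommendation slate."""
    quality = rng.normal(size=n_creators)
    score = quality + boost * treated_mask       # new algorithm boosts treated creators' rank score
    exposed = np.zeros(n_creators)
    exposed[np.argsort(-score)[:top_k]] = 1.0    # only the top_k get exposure
    return exposed

# Ground truth: global rollout vs. global control. A uniform boost does not
# change how many slots exist, so the true effect on average exposure is ~0.
gte = np.mean([exposure(np.ones(n_creators), rng).mean()
               - exposure(np.zeros(n_creators), rng).mean()
               for _ in range(n_sims)])

# Creator-side 50/50 experiment: treated creators crowd out control creators.
def diff_in_means(rng):
    treated = rng.random(n_creators) < 0.5
    exposed = exposure(treated, rng)
    return exposed[treated].mean() - exposed[~treated].mean()

dim = np.mean([diff_in_means(rng) for _ in range(n_sims)])
print(f"global treatment effect ~ {gte:+.3f}")   # ~0
print(f"difference in means     ~ {dim:+.3f}")   # clearly positive: interference bias
```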
October 3
Kevin Lee (Ph.D. Candidate in Economics at The University of Chicago, Booth School of Business)
Paper: Generative Brand Choice
Abstract: Estimating consumer preferences for new products in the absence of historical data is an important but challenging problem in marketing, especially in product categories where brand is a key driver of choice. In these settings, measurable product attributes do not explain choice patterns well, which makes questions like predicting sales and identifying target markets for a new product intractable. After estimating brand preferences from choice data using a structural model, I use an LLM to generate predictions of these brand utilities from text descriptions of the brand and consumer. My main result is that when fine-tuned on estimates from an economic model, LLMs attain unprecedented performance at predicting preferences for brands excluded from the training sample. Conventional models based on text embeddings return predictions that are essentially uncorrelated with the actual utilities, and general-purpose LLMs are also uninformative. In comparison, my fine-tuned LLM's predictions attain a correlation of 0.52 with the actual values and 17 times higher mutual information than embedding-based models; i.e. for the first time, informative predictions can be made for consumer preferences of new brands. Furthermore, I demonstrate how to extract interpretable mechanisms by projecting the internal activations of the fine-tuned LLM into an interpretable feature space, which highlights the textual elements driving the predicted preferences. Finally, I combine causal estimates of the price effect from instrumental variables methods with the LLM predictions to enable pricing-related counterfactuals. By integrating the powerful generalization abilities of LLMs with principled economic modeling, my framework enables informed decisions on optimizing the marketing mix of a new product. More broadly, this approach illustrates how new kinds of questions can be answered by using the capabilities of modern LLMs to systematically combine the richness of qualitative data with the precision of quantitative data.
September 26
Jasmine Yang (Ph.D. Candidate in the Marketing division of Columbia Business School)
Paper: What Makes For A Good Thumbnail? Video Content Summarization Into A Single Image
Abstract: Thumbnails, reduced-size preview images or clips, have emerged as a pivotal visual cue that helps consumers navigate video selection while “previewing” what to expect in the video. We study how thumbnails affect viewers’ behavior (e.g., views, watchtime, preference match, and engagement). We propose a video mining procedure that decomposes high-dimensional video data into interpretable features (image content, affective emotions, and aesthetics) leveraging computer vision, deep learning, text mining, and advanced large language models. Motivated by behavioral theories such as expectation-disconfirmation theory and Loewenstein’s theory of curiosity, we construct theory-based measures that evaluate the thumbnail relative to the video content to assess the degree to which the thumbnail is representative of the video. Using both secondary data from YouTube and a novel video streaming platform called “CTube” that we built to exogenously randomize thumbnails across videos, we find that aesthetically pleasing thumbnails lead to overall positive outcomes across measures (e.g., views and watchtime). On the other hand, content disconfirmation between the thumbnail and the video leads to opposing effects: it leads to more views and higher watchtime but lower post-video engagement (e.g., likes and comments). To further investigate how thumbnails affect consumers’ video choice and watchtime decisions, we build a Bayesian learning model in which consumers’ decisions to click on a video and continue watching it are based on their priors (the thumbnail) and updated beliefs about the video content (the video’s frames, characterized as multi-dimensional and correlated video topic proportions). Our results suggest that viewers overall prefer watching videos longer when there is a higher disconfirmation between their initial content beliefs formed from the thumbnail and their updated beliefs based on the observed video scenes (signals), suggesting that one role of thumbnails is to generate curiosity about what may come next in the video. In addition, viewers prefer less disconfirmation before observing the thumbnail, highlighting that the role of disconfirmation may change before and after the thumbnail. Based on the model’s estimates, we then run a series of counterfactual analyses to propose optimal thumbnails and compare them with current practices of thumbnail recommendation to guide creators and platforms in thumbnail selection.
September 19
Mohin Banker (Ph.D. Candidate in Management, Behavioral Marketing Program, Yale Graduate School of Arts and Sciences)
Paper: Positive Information is Generalized More Than Negative Information When Controlling for Prior Beliefs
Abstract: Information is often generalized to similar objects and situations. For example, having a positive (or negative) experience at a restaurant may lead one to predict having positive (or negative) experiences at similar restaurants. Previous research has shown divergent results on whether people are more likely to generalize positive or negative information without a consistent explanation, largely finding stronger negative generalizations if anything. In five preregistered studies and four supplemental studies (N = 8,645), we provide a unifying framework of when and why positive or negative generalizations are stronger. In many previous experimental contexts, people hold positive prior beliefs (e.g., most restaurants are good), which leads to stronger negative generalizations because (a) measurement in these paradigms allows greater shifts in beliefs in the negative direction and magnifies negative generalizations and (b) negative information contrasts with their positive prior beliefs more than positive information. For similar reasons, when people hold negative prior beliefs (e.g., most restaurants are bad), positive generalizations are stronger than negative generalizations. To isolate the true effect independent of priors, we employ a novel paradigm to control for prior beliefs. Across several domains, we demonstrate that positive generalizations are stronger than negative generalizations, all else equal. Evidence suggests this effect is not simply driven by impression management concerns, but reflects a belief that positive information is more diagnostic of other objects than negative information.
September 12
Seung Yoon Lee (Ph.D. Candidate in Management, Quantitative Marketing Program, Yale Graduate School of Arts and Sciences)
Paper: A Structural Model of Consumer Utility Generation for Personalization in Gaming Environments
Abstract: In gaming and gamified environments, consumer utility is generated through an active, dynamic process in response to game features. Consumers choose whether to spend time (play or quit) and money (purchase “tools”) to maximize utility. As the game difficulty level increases, win rates and player utility decrease, and players quit. But purchasing tools increases the ability to win and helps players reach a higher game level, increasing utility and extending play. As players reach higher levels with tools, they may continue to purchase additional tools to increase win rates and reach still higher levels, creating the possibility of a positive feedback loop between purchase and play. In this paper, we build a dynamic structural model of the active process of consumer utility generation in gaming environments with time and money as inputs. The model generalizes dynamic durable goods purchase models (where only purchases are made) and dynamic models of effort/time response (as in incentive compensation models); this makes our model suitable for novel gamified environments requiring both time/effort and money inputs (e.g., digital learning/health habits, gamified loyalty programs). Estimates reveal three latent segments of players: premium enthusiasts, who derive enjoyment from play itself and are most willing to purchase tools; and win-seekers and progress-seekers, who both find playing the game itself costly and have higher price sensitivity (the former primarily values immediate rewards, while the latter also values level-up rewards). We use counterfactuals to evaluate real-time personalization policies on whom to target and when during gameplay with (i) discounts on tools and (ii) dynamic difficulty adjustment of the game. Our counterfactual results show support for the positive feedback mechanism between purchase and play.
September 5
Fei Teng (Ph.D. Candidate in Management, Quantitative Marketing Program, Yale Graduate School of Arts and Sciences)
Paper: Honest Ratings Aren’t Enough: How Rater Mix Variation Impacts Suppliers and Hurts Platforms
Abstract: Customer reviews and ratings are critical for the success of online platforms: they help consumers make choices by reducing uncertainty and provide incentives for suppliers (workers). Existing literature has shown that rating systems face problems primarily due to fake or discriminatory reviews. However, customers also differ in their rating styles: some are generous and others are harsh. In this paper, we introduce a novel idea: even if raters are honest and unbiased, differences in the early rater mix (of generous and harsh raters) for a supplier can lead to biased ratings and unfair outcomes for suppliers. This is because platforms display past ratings to customers, whose own ratings and acceptance of suppliers are influenced by them, and because the platform uses past ratings for its prioritization and recommendations; together, these create path dependence. Using data from a gig-economy platform, we estimate a structural model to analyze how early ratings affect long-term worker ratings and earnings. Our findings reveal that early ratings significantly impact future ratings, leading to persistent advantages for early lucky workers and disadvantages for unlucky ones. Further, the use of these ratings in the platform's prioritization algorithms magnifies these effects. We propose a neutral adjusted rating metric that can mitigate these effects. Counterfactuals show that using the metric improves the accuracy of the rating system for customers, fairness in earnings for workers, and retention of high-quality workers for the platform. Left unaddressed, the resulting supplier turnover can lead to a lower-quality supplier mix on platforms.
August 29
Chi-Ying Wang (Ph.D. Candidate in Management, Quantitative Marketing Program, Yale Graduate School of Arts and Sciences)
Paper: Optimal Design of Recommended Choice Sets
Abstract: On most modern online stores (e.g., Amazon), consumers can search for products using queries, which can be vague and related to many product subcategories (e.g., jewelry can be diamonds or sapphires). Based on these queries, consumers are presented with a choice set of products. The design of the recommended choice set can lean towards either diversifying products across different subcategories (product-breadth-focused) or selecting products primarily from a single subcategory that is most likely to align with the consumer’s preference (product-depth-focused). A firm has two information sources: it can use information about the consumer (e.g., her purchase history and other similar consumers’ purchase patterns) together with predictive technologies to predict the subcategory that best aligns with the consumer’s preferences, and the consumer can signal her private information about her subcategory preference by choosing a query. We present a model for designing a fixed-size choice set based on queries, demonstrating that the optimal choice set’s characteristics hinge on the accuracy of the firm’s predictive technologies, the difference between subcategories, the value of the consumer’s outside option, and the variation in products’ match values within a subcategory. Specifically, we find that the firm puts more emphasis on product breadth in recommended choice sets under lower predictive accuracy, a larger difference between subcategories, and a lower consumer outside option. Moreover, as the variance in the match value distribution of the consumer’s preferred subcategory increases, the firm should prioritize product breadth for a high outside option and focus on product depth for a low outside option.
Spring 2024
May 2
Sudeep Bhatia (Associate Professor of Psychology, University of Pennsylvania)
Paper: The Structure of Everyday Choice: Insights from 100K Real-life Decision Problems
The diversity and complexity of everyday choices make them difficult to formally study. We address this challenge by constructing a dataset of over 100K real-life decision problems based on a combination of social media and large-scale survey data. Using large language models (LLMs), we are able to extract hundreds of choice attributes at play in these problems and map them onto a common representational space. This representation allows us to quantify both the content (e.g. broader themes) and the structure (e.g. specific tradeoffs) inherent in everyday choices. We also present subsets of these decision problems to human participants, and find consistency in choice patterns, allowing us to predict naturalistic decisions with established decision models. Overall, our research provides new insights into the attributes, outcomes, and goals that underpin important life choices. In doing so, our work shows how LLM-based large-scale structure extraction can be used to study real-world human behavior.
April 25
Raghu Iyengar (Professor of Marketing, Wharton School)
Paper: The Impact of Experiential Store on Customer Purchases
Despite the rising popularity of non-traditional retail spaces providing immersive experiences, empirical evidence on their impact on customer behavior remains limited. We study the causal impact of customers visiting an experiential store on their purchase behavior. Analyzing individual-level transactions from a personal care business over a year before and after the store's launch, we find a positive and economically significant average treatment effect on customer spending. However, substantial heterogeneity exists, with only around 20% of customers exhibiting a significant positive effect, while the majority show no significant change. The most substantial treatment effects are observed among high-value customers who, despite a long lapse since their last interaction, actively engaged with the firm. We decompose the treatment effect across the differing needs using a model that links product purchases in a customer basket to underlying customer needs. We find that needs linked to sophisticated skincare routines, connecting to high-priced items that customers can assess through hands-on testing and workshops provided in the store, exhibit positive significant effects. In contrast, treatment effects associated with basic skincare routines show no significant impact. The results align with experiential learning and haptics, offering insights into the implications for experiential retailing.
April 18
Ioannis Evangelidis (Associate Professor of Marketing, ESADE Business School)
Paper: Inflation, Shrinkflation, Skimpflation: Consumers’ Beliefs about the Fairness of Price Increases, Product Size Decreases, and Product Quality Decreases
Rising costs for companies have led to the proliferation of shrinkflation (the practice of decreasing the size of a product without adjusting its price) and skimpflation (the practice of decreasing the quality of a product without adjusting its price). While shrinkflation and skimpflation are becoming increasingly prevalent practices in the marketplace, there is limited understanding of consumer perceptions of those practices. In this seminar, I will discuss the results of multiple preregistered studies that investigate consumers’ beliefs about the extent to which shrinkflation and skimpflation are fair when firms face cost increases.
April 11
Jillian J. Jordan (Assistant Professor of Business Administration, Harvard Business School)
Paper: How reputation does (and does not) drive people to punish without looking
Punishing wrongdoers can confer reputational benefits, and people sometimes punish without careful consideration. But are these observations related? Does reputation drive people to "punish without looking"? And if so, is this because unquestioning punishment looks particularly virtuous? To investigate, we assigned "Actors" to decide whether to sign punitive petitions about politicized issues ("punishment"), after first deciding whether to read articles opposing these petitions ("looking"). To manipulate reputation, we matched Actors with co-partisan "Evaluators", varying whether Evaluators observed (i) nothing about Actors' behavior, (ii) whether Actors punished, or (iii) whether Actors punished and whether they looked. Across four studies of Americans (total n = 10,343), Evaluators rated Actors more positively, and financially rewarded them, if they chose to (vs. not to) punish. Correspondingly, making punishment observable to Evaluators (i.e., moving from our first to second condition) drove Actors to punish more overall. Furthermore, because some of these individuals did not look, making punishment observable increased rates of punishment without looking. Yet punishers who eschewed opposing perspectives did not appear particularly virtuous. In fact, Evaluators preferred Actors who punished with (vs. without) looking. Correspondingly, making looking observable (i.e., moving from our second to third condition) drove Actors to look more overall, and to punish without looking at comparable or diminished rates. We thus find that reputation can encourage reflexive punishment, but simply as a byproduct of generally encouraging punishment, and not as a specific reputational strategy. Indeed, rather than fueling unquestioning decisions, spotlighting punishers' decision-making processes may actually encourage reflection.
April 4
Longxiu Tian (Assistant Professor of Marketing, Kenan-Flagler Business School)
Paper: Learning Customer Heterogeneity from Aggregate-Response Online Experiments
Online field experiments, or A/B tests, that rely on traffic from web visitors, search engines, or social media platforms typically yield only aggregate-response test results (e.g., total impressions and clicks), due to data privacy and sharing restrictions. This has limited analysts to measuring average effects in such settings, despite a rich literature in marketing on the importance of accounting for the customer base's preference distribution. To solve this problem, we propose a hierarchical Bayesian (HB) aggregate logit model to infer within-test heterogeneity distributions across customer preferences by leveraging between-test variations across aggregate-response A/B tests. We illustrate this method using a large-scale dataset of news headline tests (totaling 150,817 variations). To quantify the design space of tests, we decompose textual headlines via Transformer networks and provide interpretability via transfer learning from prelabeled corpora. Additionally, we exemplify the value of quantifying heterogeneity by disentangling the impact of clickbait headlines on clickthrough rate (CTR). Posterior inference across seven different operationalizations of clickbait consistently exhibits a pattern of reduced mean and increased variance in the impact of clickbait. Within our empirical context, this suggests that so-called clickbait headlines were likely an artifact of providing journalistic “coverage” across the readership's heterogeneous preferences rather than simply an attempt to drive CTR via sensationalist headlines.
March 28
Woo-kyoung Ahn (John Hay Whitney Professor of Psychology, Yale University)
Paper: Decoding the impact of biological explanations on mental disorder perceptions
My research program explores the impact of causal explanations on people's judgments. In this presentation, I will share findings demonstrating that attributing mental disorders to biological factors (e.g., genetics, brain abnormalities) can adversely influence perceptions of those with mental disorders and individuals' own mental health issues. Specifically, biological attributions lead to increased social distancing and a preference for medication over psychotherapy among both laypeople and clinicians. Individuals with depressive symptoms who believe their condition is biologically based may feel unable to regulate their emotions and become pessimistic about recovery. Furthermore, we found that manipulating beliefs about genetic predisposition to depression—via a sham saliva test—led participants to perceive their recent experiences as more depressive compared to those not informed of a genetic risk. Individuals led to believe they lacked genetic risks for alcoholism underestimated symptoms, potentially posing a public health risk. I will also discuss effective psychoeducational interventions designed to mitigate these negative effects.
February 29
Julia Minson (Associate Professor of Public Policy, Harvard Kennedy School of Government)
Paper: Underestimating Counterparts’ Learning Goals Impairs Conflictual Conversations
People struggle to have thoughtful conversations about opposing views, especially when the topic is important and parties’ outcomes are intertwined. Whereas previous work has approached this problem by focusing on strategies to change individual-level mindsets (e.g., encouraging open-mindedness or intellectual humility), we focus on the role of partners’ beliefs about their counterparts. In earlier work (Collins, Dorison, Gino & Minson, 2022; N = 2,614), we found that people consistently underestimated how willing disagreeing counterparts were to learn about their views, and that these beliefs predicted the outcomes of conflictual conversations. In both American partisan disagreement and the Israeli-Palestinian conflict, a short informational intervention highlighting counterparts’ willingness to learn decreased opponent derogation and increased desire for future contact. Our ongoing research, however, shows that individuals in conflict generally fail to effectively express their willingness to understand opposing perspectives. This appears to be a problem of both ability and motivation: both teaching participants a simple two-sentence intervention and providing strong financial incentives led to improvements in communicating willingness to learn and in subsequent conflict outcomes. Importantly, we also find that exhibiting a willingness to learn about one’s counterpart is perceived as a less dominant, and thus potentially less desirable, behavior.
February 22
Nils Wernerfelt (Assistant Professor of Marketing, Kellogg School of Management)
Paper: Estimating the Value of Offsite Data to Advertisers on Meta
We study the extent to which advertisers benefit from data that are shared across applications. These types of data are viewed as highly valuable for digital advertisers today. Meanwhile, product changes and privacy regulation threaten the ability of advertisers to use such data. We focus on one of the most common ways advertisers use offsite data and run a large-scale study with hundreds of thousands of advertisers on Meta. Within campaigns, we experimentally estimate both the effectiveness of advertising under business as usual, which uses offsite data, and how that effectiveness would change under a loss of offsite data. Using recently developed deconvolution techniques, we flexibly estimate the underlying distribution of treatment effects across our sample. We find a median cost per incremental customer of $43.88 under business-as-usual targeting techniques, which, under the median loss in effectiveness, would rise to $60.19, a 37% increase. Similarly, analyzing purchasing behavior six months after our experiment, we find that ads delivered with offsite data generate substantially more long-term customers per dollar, with a comparable delta in costs. Further, there is evidence that small-scale advertisers and those in CPG, Retail, and E-commerce are especially affected. Taken together, our results suggest a substantial benefit of offsite data across a wide range of advertisers, an important input into policy in this space.
February 15
Dante Donati (Assistant Professor of Business, Columbia Business School)
Paper: Can Facebook Ads prevent Malaria? Two field experiments in India
Social media campaigns are increasingly used for public health objectives, yet their effectiveness remains poorly understood. This paper presents the results of two trials evaluating the impact of a large malaria prevention Meta campaign in India, one to five months after ad exposure. The first is a cluster randomized controlled trial that evaluated a “real world” ad campaign, where experimental groups were assigned at the district level and survey respondents were independently recruited in three high-burden states. While this intervention increased the use of bed nets among individuals living in solid (concrete) houses by 10.5%, the campaign was ineffective for those living in non-solid dwellings, where malaria risk is higher. In turn, self-reported malaria incidence decreased by 41% among individuals living in solid houses. Consistently, administrative health facility data show a reduction in urban monthly incidence of 41%, but no effect in rural areas. Did the ad campaign not persuade households at greater malaria risk, or did it fail to reach them? In a second field experiment, we varied exposure to the same ads at the individual level, using the re-marketing tools of the ad platform. We find that bed net use increased for both types of households, suggesting that social media campaigns need to refine their targeting strategies to reach development objectives. The paper proposes a series of micro-targeting approaches to maximize impacts in sub-populations of interest.
February 8
Brett Hollenbeck (Associate Professor of Marketing, Anderson School of Management)
Paper: Misinformation and Mistrust: The Equilibrium Effects of Fake Reviews on Amazon.com
This paper investigates the impact on consumers of the widespread manipulation of reputation systems by sellers on two-sided online platforms. We focus on a relevant empirical setting, the use of fake product reviews on e-commerce platforms, which can affect consumer welfare via two channels. First, rating manipulation deceives consumers directly, causing them to buy lower quality products and pay higher prices for the products with manipulated ratings. Second, the presence of rating manipulation lowers trust in ratings, which may result in worse product matches if consumers place too little weight on quality ratings. This decrease in trust may also increase price competition and benefit consumers by lowering prices on high quality products whose quality is less easily observed. We formally model how consumers form beliefs about quality from product ratings and how these beliefs are affected by the presence of fake reviews. We use incentivized survey experiments to measure beliefs about fake review prevalence. Our model of product quality is incorporated into an empirical model of consumer demand for products and how demand is shifted by ratings, reviews, and prices. The model is estimated using a large and novel dataset of products observed buying fake reviews to manipulate their Amazon ratings. We use counterfactual policy simulations in which fake reviews are removed and consumer beliefs adjust accordingly to explore the effectiveness and welfare and profit implications of different methods of regulating fake reviews.
February 1
Joshua Knobe (Professor of Philosophy, Psychology, and Linguistics, Yale University)
Paper: What Comes to Mind?
Although we would ideally want to think carefully about every aspect of our lives, it's a core part of our human predicament that we cannot think carefully about everything. Thus, we have to think only about some things and leave other things unexplored. If people can only think about some things, which should they think about? In a series of studies, we find that people specifically tend to think about the things they regard as good and to think less about things they regard as bad. In computational work, we then show that this is a rational response to the problem of limited cognitive resources.