
How to Make a Bad Decision

Some of our most important decisions are shaped by something as random as the order in which we make them. The gambler’s fallacy, as it’s known, affects loan officers, federal judges — and probably you too. How to avoid it? The first step is to admit just how fallible we all are.

Our latest Freakonomics Radio episode is called “How to Make a Bad Decision.” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

*      *      *

Let’s say I flip a coin and it comes up … heads. Now I flip it again … hmm heads again. One more time … and … wow, that’s three heads in a row. Okay, if I were to flip the coin one more time, what are you predicting? Here’s what a lot of people would predict: “Let’s see, heads-heads-heads … it’s gotta come up tails this time.” Even though you know a coin toss is a random event, and that each flip is independent — and therefore the odds for any one coin toss are … 50-50. But that doesn’t sit well with people.

Toby MOSKOWITZ: That doesn’t sit well with people.

Toby Moskowitz is an economist at Yale.

MOSKOWITZ: We like to tell stories and find patterns that aren’t really there. And if you flip a coin, say, ten times, most people think — and they’re correct — that on average you should get five heads, five tails. The problem is they think that should happen in any ten coin flips. And of course it’s very probable that you might get eight heads and two tails and it’s even possible to get ten heads in a row. But people have this notion that randomness is alternating. And that’s not true.

This notion has come to be known as “the gambler’s fallacy.”

MOSKOWITZ: This is a common misconception in Vegas. You go to the slot machine, it hasn’t paid out in a long time and people think, “Well, it’s due to be paid out.” That is just simply not true, if it is a truly independent event, which it is, the way it’s programmed.
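A quick way to convince yourself of this is to simulate it. The sketch below (mine, not anything from the episode) generates batches of ten fair coin flips and counts how many contain a streak of four or more identical outcomes; it comes out to roughly 46 percent, even though every single flip is an independent 50-50 event.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
trials = 100_000
streaky = sum(
    longest_run([random.choice("HT") for _ in range(10)]) >= 4
    for _ in range(trials)
)
print(f"{streaky / trials:.1%} of ten-flip sequences contain a run of 4 or more")
```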

DUBNER: So Toby, you have co-authored a new working paper called “Decision-Making Under the Gambler’s Fallacy,” and if I understand correctly, the big question you’re trying to answer is how the sequencing of decision-making affects the decisions we make. Is that about right?

MOSKOWITZ: That’s correct. In fact, the genesis of the paper was really to take this idea of the gambler’s fallacy, which has been demonstrated many times in psychological experiments, typically with a bunch of undergrads playing for a free pizza, and apply it to real-world settings, where the stakes are big, there is a great deal of uncertainty, and these decisions matter a lot.

Some of these decisions matter so much they can mean the difference between life and death. So these probably aren’t the kind of decisions we should be making based on a coin toss.

*      *      *

So, Toby Moskowitz and his co-authors, Daniel Chen and Kelly Shue, have written this interesting research paper. It’s called “Decision-Making Under the Gambler’s Fallacy.” It’s the kind of paper that academics publish by the thousand. They publish in order to get their research out there, maybe to get tenure, etc. So it matters for them. Does it matter for you? Why should you care about something like the gambler’s fallacy?

Well, we often talk on this program about the growing science of decision-making. But it’s funny. Most of the conversations focus on the outcome for the decision-maker. What about the people the decision is affecting? What if you are a political refugee, hoping to gain asylum in the United States? There’s a judge making that decision. What if you’re trying to get your family out of poverty in India by starting a business and you need a bank loan? There’s a loan officer making that decision. Or what if you’re a baseball player, waiting on a 3-2 pitch that’s going to come at you 98 miles an hour from just 60 feet, 6 inches away? That’s where the umpire comes in.

MOSKOWITZ: We’ll start with Major League Baseball – that was a simple one.

Moskowitz and his co-authors analyzed decision-making within three different professions – baseball umpires, loan officers, and asylum judges – to see whether they fall prey to the gambler’s fallacy. Because …

MOSKOWITZ: … there’s all kinds of possible areas where the sequence of events shouldn’t matter, but our brains think they should and it causes us to make poor decisions.

Decisions that are the result of …

MOSKOWITZ: What I would call decision heuristics.

A “heuristic” being, essentially, a cognitive shortcut. Now, why choose baseball umpires?

MOSKOWITZ: Because baseball has this tremendous data set called PITCHf/x, which records every pitch from every ballgame and what it records is if you look at the home-plate umpire — where the pitch landed, where it was located within or outside the strike zone, and also what the call was from the umpire.

Moskowitz and his colleagues looked at data from over 12,000 baseball games, which included roughly 1.5 million called pitches – that is, the pitches where the batter doesn’t swing, leaving the umpire to decide whether the pitch is a ball or strike. As they write in the paper: “We test whether baseball umpires are more likely to call the current pitch a ball after calling the previous pitch a strike and vice versa.” There were 127 different umpires in the data. The researchers did not focus on pitches that were obvious balls or strikes.

MOSKOWITZ: If you take a pitch dead center of the strike zone, umpires get that right 99 percent of the time.

Instead, they focused on the real judgment calls.

MOSKOWITZ: So the thought experiment was as follows — take two pitches that land in exactly the same spot. The umpire should be consistent and call that pitch the same way every time. Because the rules state that each pitch is independent in terms of calling it correctly — it’s either in the strike zone or it’s not.

The first thing the PITCHf/x data shows is that umpires are, generally, quite fallible.

MOSKOWITZ: On pitches that are just outside of the strike zone – they’re definitely balls, but they’re close – on those pitches, umpires only get those right about 64 percent of the time. So that’s a 36 percent error rate, it’s big.

DUBNER: Slightly better than flipping a coin, but not much.

MOSKOWITZ: Not much. Better than you and I could do though, I would say.

And how does the previous pitch influence the current pitch?

MOSKOWITZ: Just as a simple example, if the previous pitch was a strike, the umpire was already about half a percent less likely to call the next pitch a strike.

Half a percent doesn’t seem like that big an error. But keep in mind that’s for the entire universe of next pitches — whether it’s right down the middle, or high and outside, or in the dirt. What happens when the next pitch is a borderline call?

MOSKOWITZ: So if you look at pitches on the corners, near the corners — that’s where you get a much bigger effect. So as an example, if I see two pitches on the corners, one that happened to be preceded by a strike call and one that didn’t — the one preceded by a strike call is about three-and-a-half percent less likely to be called a strike. Now if I increase that further: if the last two pitches were called strikes, then that same pitch is about 5.5 percent less likely to be called a strike. So those are pretty big numbers.
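As a rough sketch of the comparison Moskowitz is describing, you could tabulate, for borderline pitches only, how often a strike is called depending on the previous call. The column names below are hypothetical; this is not the PITCHf/x schema or the authors’ code, and the paper’s actual analysis adds controls for location, pitcher, batter, and umpire.

```python
import pandas as pd

# Hypothetical pitch-level table (illustrative columns, not real PITCHf/x fields):
#   is_borderline      1 if the pitch landed near the edge of the strike zone
#   called_strike      1 if the umpire called a strike
#   prev_called_strike 1 if the previous called pitch got a strike call
pitches = pd.read_csv("called_pitches.csv")

borderline = pitches[pitches["is_borderline"] == 1]
rates = borderline.groupby("prev_called_strike")["called_strike"].mean()

# A lower strike rate right after a strike call, on otherwise comparable
# pitches, is the pattern the paper attributes to the gambler's fallacy.
print("P(strike | previous call was a ball):  ", round(rates.loc[0], 3))
print("P(strike | previous call was a strike):", round(rates.loc[1], 3))
```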

DUBNER: And let me just ask you, other than finishing location of the pitch, what other factors relating to pitch speed or spin or angle, etc., did you look at and/or could you control for, and is that important?

MOSKOWITZ: You always want to control for those things, because some people might argue, “Well, maybe they see it differently if it’s a 98-mile-an-hour fastball or an 80-mile-an-hour slider or curve. Maybe that changes the optics for the umpire.” So we try to control for all that, and the beautiful thing about baseball is that they have an enormous amount of data. We threw in things like the horizontal spin and vertical distance of the pitch. The movement of it, the speed. The arc when it leaves the pitcher’s arm to when it crosses the plate. We also controlled for who the pitcher was, who the batter was, and even who the umpire was.

DUBNER: Since you’re controlling for the individual umpires, I assume you have a list of the best and worst umpires, yes?

MOSKOWITZ: On this dimension, yes. And it turns out they’re all pretty much about the same. So you could either view them as equally good or equally bad. There wasn’t a single umpire that didn’t exhibit this kind of behavior. They all fell prey to what we interpret as the gambler’s fallacy in terms of calling pitches. Which stands to reason because they’re all human.

Hunter WENDELSTEDT: One of the biggest things you have to do when you’re an umpire is be honest with yourself.

That’s Hunter Wendelstedt.

WENDELSTEDT: Well, now I’m a Major League baseball umpire. I’ve been in the Major Leagues full-time since 1999. So I’ve been able to travel this great country doing something I love and that’s umpiring baseball games.

Wendelstedt’s father, Harry, was also a major-league umpire, an extremely well-regarded one. Harry also ran the Wendelstedt Umpire School, near Daytona Beach, Florida, which Hunter now runs during the off-season. They start with the fundamentals.

WENDELSTEDT: You hold up a baseball. “Here is the baseball. Here are the measurements of the baseball. Here is the weight of the baseball.” Same thing with the bat, and you go step by step. There is a proper way for an umpire to put their mask on and take their mask off so as to not block their vision. Different ways to ensure that you got the best look you can. And that’s the first 7 to 11 days.

If you’re fortunate enough to make it as an umpire all the way to the majors, you know you’ll be subject to a great deal of scrutiny.

WENDELSTEDT:  Because now on any given day at every major league stadium, you have cameras, most of them high-definition, super-slow-motion, that are critiquing every pitch and every play.

Wendelstedt is a fan of the PITCHf/x system that Toby Moskowitz used to analyze umpire decisions.

WENDELSTEDT: Because once these pitch systems got into place, it’s been a great educational tool for us. Because you look at it and we get a score sheet after every game we work behind the plate, and it tries to see if you have any trends. And it really helps us become a better-quality product for the game of baseball.

We sent Hunter Wendelstedt the Moskowitz research paper which argues that major league umpires succumb to the gambler’s fallacy.

WENDELSTEDT: I was reading that, I got nervous. But that was really interesting. That’s just stuff I’ve never even thought about. It’s kind of blowing my mind in the last couple days. It’s pretty neat.

But Wendelstedt wasn’t quite ready to accept the magnitude of umpire error the researchers found.

WENDELSTEDT: I think it’s very interesting and I really look forward to studying that some more, because running the umpire school and all that, I gotta keep up on the trends and the way that the perception is going out there also.

Wendelstedt did say that if an umpire makes a bad call – whether behind the plate or in the field – you don’t want to try to compensate later.

WENDELSTEDT: If you miss something – the worst thing to do, you can never make up a call. People are like, “That’s a makeup call.” Well, no, it’s not, because if you try and make up a call – now you’ve missed two. And that’s something that we would never, ever want to do.

The Moskowitz research paper only analyzed data for the home-plate umpire, the one calling balls and strikes. For those of you not familiar with baseball: there are four umpires working every game — one behind home plate and one at each of the three bases. The umps rotate positions from game to game, so a given ump will work the plate only every few games.

Interestingly, baseball uses six umps during the postseason, adding two more down the outfield lines — which has always struck me as either a jobs program or a rare admission of umpiring fallibility. Because if you need those two extra umps to get the calls right during the postseason, doesn’t that imply they ought to be there for every game?

In a more overt admission of the fallibility of umpires, baseball has increasingly been using video replays to look at close calls. In such cases, the calls are overturned nearly half the time. Nearly half the time! Calls by the best umpires in the world. Which might make you question the fundamental decision-making ability of human beings generally — and whether we’d be better off getting robots to make more of the relatively simple judgment calls in our life, like whether a baseball pitch is a ball or a strike.

But, human nature being what it is — and most of us having an undeservedly high opinion of ourselves as good decision-makers — we probably won’t be seeing wholesale automation of this kind of decision-making anytime soon. Making decisions, after all, is a big part of what makes us human.

So it’s hardly surprising we’d be reluctant to give that up. But if the gambler’s fallacy is as pronounced as Toby Moskowitz and his colleagues argue, you might wish otherwise. Especially if you are, say, applying for a bank loan in India …

MOSKOWITZ: And we got a little bit lucky here.

“Lucky,” meaning some other researchers had already run an experiment …

MOSKOWITZ: … with a bank in India and a bunch of loan officers on actual loans.

And the data from that experiment allowed Moskowitz and his co-authors to look for evidence of the gambler’s fallacy. Because …

MOSKOWITZ: … What they did was they took that data and they reassigned them to other loan officers.

Which allowed for a randomization of the sequence of loan applications.

MOSKOWITZ: Suppose you and I look at the same six loans. I happen to look at them in a descending order, you happen to look at them in ascending order, let’s say alphabetically, just some way to rank them. And then the question is, “Did we come to different decisions just purely based on the sequencing of those loans?”

Now, keep in mind these were real loan applications that an earlier loan officer had already approved or denied. This let the researchers measure an approval or denial in the experiment against the “correct” answer — although the correct answer in this case isn’t nearly as definitive as a correct ball or strike call in baseball. Why? Because if a real loan application had been denied, the bank had no follow-up data to prove whether that loan actually would have failed.

MOSKOWITZ: But the loans that were approved — we can look at the performance of that loan later on. You could see whether it was delinquent or didn’t pay off as well. So unlike baseball, where we know for sure there is an error, here, it’s not quite clear.

DUBNER: How much did loan officers in India fall prey to the gambler’s fallacy?

MOSKOWITZ: So, you and I are looking at the same set of six loan applications, and the sequence in which I received them — suppose I had three very positive ones in a row, then I’m much more likely to deny the fourth one, even if it was as good as the other three.

The analysis showed that the loan officers got it wrong roughly eight percent of the time simply because of the sequence in which they saw the applications.
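Here’s a toy version of that order effect (my own illustration, not the researchers’ code or data): a reviewer who raises the bar after two straight approvals will approve a different set of the same six applications depending purely on the order they arrive in.

```python
import random

def review(qualities, streak_penalty=0.15):
    """Approve anything above 0.5, but raise the bar after two straight
    approvals -- a crude stand-in for a gambler's-fallacy-prone reviewer."""
    decisions = []
    for q in qualities:
        bar = 0.5 + (streak_penalty if decisions[-2:] == [1, 1] else 0.0)
        decisions.append(1 if q > bar else 0)
    return decisions

random.seed(1)
apps = [round(random.uniform(0.3, 0.8), 2) for _ in range(6)]  # the same six applications

for label, order in [("ascending", sorted(range(6), key=lambda i: apps[i])),
                     ("descending", sorted(range(6), key=lambda i: -apps[i]))]:
    decisions = review([apps[i] for i in order])
    approved = sorted(order[k] for k, d in enumerate(decisions) if d)
    print(f"{label:<10} order approves applications {approved}")
```

Depending on the ordering, the same middling application can land right after two approvals and get denied, which is exactly the inconsistency the randomized reassignment was designed to surface.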

DUBNER: Talk for just a minute about why this kind of experiment, a field experiment, is inherently, to people like you, more valuable than a lab experiment with a bunch of undergrads trying to get some free pizza, for instance.

MOSKOWITZ: That’s right. Well, in this particular case, this is their job, first of all, so you’re dealing with experts in their own field, making decisions that they should be experts on, as opposed to maybe very smart undergrads making a decision on something they haven’t had a lot of experience with and shouldn’t be considered experts at. The second thing is incentives.

Ah, incentives. One beauty of the original experiment was that it had the loan officers working under one of three different incentive schemes. Which allows you to see if the gambler’s fallacy can perhaps be overcome by offering a strong-enough reward. Some loan officers operated under a weak incentive scheme …

MOSKOWITZ: … which basically meant you just got paid for doing your job, whether you got it right or wrong, what we would call flat incentive.

Then there was a moderate incentive scheme …

MOSKOWITZ: … which is: we’ll pay you a little more if you get it right, and then pay you a little bit less when you get it wrong.

And, finally, some loan officers were given a strong incentive scheme:

MOSKOWITZ: … which was: we’ll pay you a little bit more to get it right, but we’ll punish you severely for getting it wrong. Meaning you approved it when it should have been denied, or you denied it when it should have been approved. Then it costs you money.

So how was the gambler’s fallacy affected under stronger incentives?

MOSKOWITZ: Well, this was the most interesting part.

With the strongest incentive at play – where the loan officer was significantly rewarded or punished, making it costly to mess up an application simply because of the order they read it …

MOSKOWITZ: We found that that eight percent error rate, or I should say what we ascribe to the gambler’s fallacy affecting decision-making, goes down to 1 percent.

DUBNER: Wow.

MOSKOWITZ: Doesn’t get eliminated completely, but pretty nicely. We then looked at what the loan officers did in order to get that 8 percent down to 1 percent — it turns out they ended up spending a lot more time on the loan application. If they make a quick decision they rely on these simple heuristics of, “Well, I just approved three loans in a row, I should probably deny this one.” But if I’m forced to actually just use information and think about it slowly because I really want to get it right – because I get punished if I don’t – then I don’t rely on those simple heuristics as much. I force myself to gather the information and I make a better decision.

DUBNER: Or, to put it in non-academic terminology, if you’re paid a lot to not suck at something, you’ll tend to not suck.

MOSKOWITZ: If effort can help, that’s right.

Coming up next on Freakonomics Radio:

MOSKOWITZ: Let’s hope that federal asylum judges aren’t deciding 50 percent of their cases based on sequencing.

Also: how stock prices are affected by when a company reports earnings:

SHUE: It makes today’s earnings announcement seem kind of less good in comparison.

And, if you like this show, why don’t you give it a nice rating on whatever podcast app you use. Because your approval means everything to us.

*      *      *

Even if you’ve never watched a baseball game in your life, even if you don’t care at all whether someone in India gets a bank loan, you might care about how the United States runs its immigration courts. And whether it decides to grant or deny asylum to a petitioner.

MOSKOWITZ: This is clearly a big decision, certainly for the applicants, right? I mean, in some cases, it could mean the difference between life and death, or imprisonment and not imprisonment, if they have to go back to their country, where they’re fleeing for political reasons or something else.

These cases are heard in immigration courts by federal judges. Each case is randomly assigned — which, if you’re an applicant, is a hugely influential step. As Toby Moskowitz and his coauthors write, New York at one time had three immigration judges who granted asylum in better than 8 of 10 cases; and two other judges who approved fewer than 1 in 10. So as the researchers compiled their data to look at whether the gambler’s fallacy is a problem in federal asylum cases, they focused on judges with more moderate approval rates. The data went from 1985 to 2013.

MOSKOWITZ: So we looked only at judges that decided at least 100 cases in a given court and only looked at courts or districts that had at least 1,000 cases. Among that set, across the country over those several decades, you’re talking about 150,000 decisions and I think it was 357 judges making those decisions. So quite a large sample size.

The researchers controlled for a number of factors: the asylum seekers’ country of origin; the success rate of the lawyer defending them; even time of day — which, believe it or not, can be really important in court. A 2011 paper looked at parole hearings in Israeli prisons to see how the judges’ decisions were affected by extraneous factors — hunger, perhaps. This study found that judges were much more likely to grant parole early in the day – shortly after breakfast, presumably – and again shortly after the lunch break. So Moskowitz and his colleagues tried to filter out all extraneous factors, in order to zoom in on whether the sequencing of cases affected the judges’ rulings. Keep in mind there’s also no way to measure a “correct” ruling.
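One standard way to run that kind of check, sketched below with made-up column names (this is not the authors’ data or code), is a linear probability model that regresses the current grant decision on the previous one while absorbing judge, nationality, and time-of-day differences:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-level table (illustrative fields only):
#   grant            1 if asylum was granted
#   prev_grant       1 if the judge's previous case was granted
#   judge_id, nationality, hour_of_day, lawyer_win_rate
cases = pd.read_csv("asylum_cases.csv")

# A reliably negative coefficient on prev_grant -- a grant making the next
# grant less likely -- is the gambler's-fallacy signature the paper reports.
model = smf.ols(
    "grant ~ prev_grant + lawyer_win_rate"
    " + C(judge_id) + C(nationality) + C(hour_of_day)",
    data=cases,
).fit(cov_type="cluster", cov_kwds={"groups": cases["judge_id"]})
print(model.params["prev_grant"])
```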

MOSKOWITZ: When a judge denies a certain case, we don’t know for sure if that was the right or the wrong decision. So I want to qualify that because what we can show is whether the sequencing of approval or denial decisions has any bearing on the likelihood that the next case is approved or denied. And that we show pretty strongly.  

DUBNER: So what does it look like for an asylum judge to be affected by the gambler’s fallacy?

MOSKOWITZ: So if the cases are truly randomly ordered, then what happened to the last case should have no bearing on this case, right? Not over large samples. And what we find is that’s not true. If the previous case was approved by the judge, then the next case is less likely to be approved by almost one percent. Where it gets really interesting is if the previous two cases were approved: then that drops even further, to about one-and-a-half percent. And if these happen on the same day, that goes up even further, closer to 3 percent. And then obviously if it’s two approvals in the same day it gets even bigger; it starts to approach about 5 percent. So those are pretty big numbers, especially for the applicants involved. Or to put it a little differently, just the dumb luck of where you get sequenced that day could affect your probability of staying in this country by five percent, versus going back to the country that you’re fleeing. That’s a remarkable number, in my opinion.

DUBNER: And in a different arena, if I hear that a baseball umpire might be wrong 5 percent of the time, I think, “Well, but the stakes are not very high.” But in the case of an asylum seeker, this is a binary choice. This is not one ball or strike out of many. This is I’m either in the country or I’m not in the country. And so what did that suggest to you about the severity of the havoc the gambler’s fallacy can wreak, I guess, on different important decisions, whether it’s for an individual or – I guess I’m thinking at a governmental level – “I refused to declare war on a given dictator three times in the last five years, but the fourth time gets harder,” I guess, yeah?

MOSKOWITZ: Yeah I think that’s right. You can imagine the poor family that happens to follow two positive cases. Even if their case is just as viable, their chances of getting asylum go down by 5 percent. That doesn’t sound like much, but compare that to what it would be if the reverse had been true. If the two cases preceding them were poor cases and were denied, then their chances of being approved go up by five percent. That becomes a 10 percent difference just based on who happened to be in front of you that day. Total random occurrence. So you wouldn’t expect the magnitudes to be huge. Let’s hope that federal asylum judges aren’t deciding 50 percent of their cases based on sequencing.
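To spell out that arithmetic (the 30 percent baseline below is made up for illustration; the episode doesn’t give one):

```python
# Back-of-the-envelope version of the swing Moskowitz describes.
baseline = 0.30                         # hypothetical grant rate for a judge
after_two_grants  = baseline - 0.05     # your case follows two approvals
after_two_denials = baseline + 0.05     # your case follows two denials
gap = after_two_denials - after_two_grants
print(f"{after_two_grants:.0%} vs {after_two_denials:.0%}: a {gap:.0%} swing on pure luck")
```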

DUBNER: So the lesson is, if I’m seeking asylum, or any other ruling, what I really want to do is bribe someone to let me get to the judge right after he or she has rejected the previous few applicants, right? Other than that….

MOSKOWITZ: It would be worth it.

DUBNER: Well, plainly it would be really, really, really worth it, unless you get caught bribing and then obviously get rejected for asylum because of just that. So you’re telling us the data from the decision-maker’s side. What about the seeker’s side? Is there anything that can be done to offset this bias?

MOSKOWITZ: I’m not sure there’s much you can do. You’re at the mercy of the courts. I suppose if you have a particularly good lawyer, maybe there is a way to lobby. I mean, I am told the cases are randomized, I assume that’s true. But who knows. Like you said, maybe bribery is a bit extreme. But maybe there is a way…

DUBNER: Well, feigning illness at least, to break the streak.

MOSKOWITZ: Yes exactly. I mean there is all kinds of things that perhaps a good lawyer could do.  

The evidence that Moskowitz and his colleagues present is, to me at least, fairly compelling: decision-makers in these three realms – courts, banks, and baseball – make occasional poor decisions based on nothing more substantial than the order in which they faced those decisions. But what if these researchers are just wrong? What if there are other explanations?

MOSKOWITZ: No, that’s a fair question. There are certainly other possible things to consider, and we try to rule them out. The first and the most obvious thing would be that the quality or merits of cases has that similar pattern. That seems hard to believe. We believe in the randomization of cases; certainly in the loan-officer experiment, where we know the assignment was randomized because the other economists randomized it themselves, we can rule that out. So I don’t think that’s an issue, but maybe the quality of cases just has this sort of alternating order to it and these guys are actually making the right decision. We don’t think that’s true, and in baseball we can actually prove it by showing that they’re getting the wrong call.

It’s also interesting, to me at least, that what the Moskowitz research is pushing against is an instinct that a lot of people are trying to develop, which is pattern-spotting.  More and more, especially when we’re dealing with lots of data, we look perhaps harder than we should for streaks or anomalies that aren’t real. We may look for bias that isn’t necessarily bias. Our umpire friend Hunter Wendelstedt brought this up when we asked whether, as most baseball fans believe, umpires treat certain pitchers with undue respect.

WENDELSTEDT: Well, you know, here is the thing about it. You take Clayton Kershaw — the umpire is going to call more strikes when Clayton Kershaw’s out there. Why? Is it ‘cause we like him better? No, it’s because he throws more strikes. He’s a better pitcher than a rookie that’s just gotten the call-up from the New Orleans Zephyrs. It’s one of those things – Greg Maddux and John Smoltz, they’re in the Hall of Fame for a reason.

Toby Moskowitz points to one more barrier to unbiased decision-making, related to the gambler’s fallacy but slightly different. It’s another bias known as …

MOSKOWITZ: …sequential contrast effects. That sounds like a very technical term, but it’s a pretty simple idea. The idea is if I read a great book last week, then the next book I read, even if it’s very very good, I might be a little disappointed. Because my reference for what a really good book is just went up.

DUBNER: And you can see how that phenomenon would really be important in let’s say job applicants or any kind of applicant, yeah?

MOSKOWITZ: Correct. We see this all the time, that the sequence of candidates that come through for a job, I think, matters. Both from the gambler’s fallacy as well as from sequential-contrast effects.

Kelly SHUE: So I along with a couple of other researchers were interested in this idea of sequential-decision errors.

That’s Kelly Shue.

SHUE: I am an associate professor of finance at the University of Chicago’s Booth School of Business.

She’s also one of Toby Moskowitz’s co-authors on the gambler’s fallacy paper. And she’s a co-author on another paper, called “A Tough Act to Follow: Contrast Effects in Financial Markets.”

SHUE: And I was talking to some asset managers in New York and they said that when they consider earnings announcements by firms, their perception of how good the current earnings announcement was is very much skewed by what they have recently seen.

So Shue and her colleagues collected data on firms’ quarterly earnings announcements from 1984 to 2013, to see how the markets responded.

SHUE: We look at how that firm’s share price moves on the day of the earnings announcement and in a short time window before and after that announcement.
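A minimal sketch of that measurement (hypothetical file and column names, not the paper’s actual data pipeline) would compute each firm’s return over a short window around its announcement date:

```python
import pandas as pd

# Hypothetical daily price file: ticker, date, close, announce (1 on announcement days)
px = pd.read_csv("prices.csv", parse_dates=["date"]).sort_values(["ticker", "date"])

# Return from the close the day before the announcement to the close two
# trading days after it -- one simple version of a "short window" return.
grouped = px.groupby("ticker")["close"]
px["ret_window"] = grouped.shift(-2) / grouped.shift(1) - 1

announcement_returns = px.loc[px["announce"] == 1, ["ticker", "date", "ret_window"]]
print(announcement_returns.head())
```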

And what did they find?

SHUE: So what we find is that if yesterday an unrelated large firm announced a very good earnings announcement, it makes today’s earnings announcement seem kind of less good in comparison. And on the other hand, suppose yesterday’s earnings announcement was pretty disappointing then today’s news, all else equal, looks more impressive.

Before you go thinking that stock-market investors are particularly shallow, Shue notes that contrast effects like these have been widely observed in lab experiments.

SHUE: So what they’ve shown is that subjects will judge crimes to be less egregious if they have recently been exposed to narratives of more egregious crimes. College students will rate pictures of their female classmates to be less attractive if they’ve recently been exposed to videos of more attractive actresses. So we believe something fairly similar is happening in the context of earnings.

In this research as well as the gambler’s fallacy research, the timing of the consecutive decisions really matters. Toby Moskowitz again:

MOSKOWITZ: Meaning if the decisions that you’re making occur very close in time, then you tend to fall prey to the sequencing effect. So take the judge’s example, for instance. We find that if cases are approved on the same day, then the likelihood that the next case that same day is approved goes way down. If those cases were one day removed, the effect gets a lot weaker. Or in fact if there is a weekend in between the decisions, then it’s almost nonexistent. So if the judge approved a bunch of cases on Friday, that really doesn’t have much bearing on what happens Monday.

Moskowitz has tried to apply this insight to his own decision-making, when it comes to grading students’ papers.

MOSKOWITZ: If I see a sequence of good exams, that may affect the poor students who happen to be later in the queue in my pile. But one of the things I try to do, mostly just because I don’t want my head to explode, is I take frequent breaks between grading these papers and I think that breaks that sequencing, my mind sort of forgets about what I did in the past because I’ve done something else in between.

DUBNER: What do you do during your breaks?

MOSKOWITZ: Go for a walk. Check email. Get some coffee. Maybe work on something else. Or, my students don’t want to hear this, but occasionally I’ll grade an exam in front of a baseball game and I’ll stop and watch a couple of innings.

DUBNER: Obviously every realm is different. A loan officer is different from a baseball umpire is different from an asylum judge is different from a professor grading papers and so on. But what they all would seem to have in common is a standard of competence or excellence or whatnot. And so is there any way for all of us to try to avoid the bias of the gambler’s fallacy, to try to I guess connect more with an absolute measure, rather than a relative measure?

MOSKOWITZ: Well, that’s a very good question. I think it does depend on the field. Obviously if you think about asylum judges, the absolute measure, sort of your overall approval or denial rate, might be good from a judge’s perspective, but it’s certainly not great from the applicant’s perspective if you make a lot of errors on either side. The errors may balance out, but to those applicants, there are huge consequences.

Now that Moskowitz has seen empirical proof of the gambler’s fallacy, he sees it just about everywhere he looks.

MOSKOWITZ: My wife, who is a physician, claims that she thinks that happens. I would also argue test-taking. My son, who’s actually studying a little bit for the SSATs, will say things like, “You know, I’m not sure what the answer to number 4 was, but the last two answers were A, so it can’t be A, right?” And you sort of caution, “That may not be right. It depends on whether the test makers have any biases of their own.”

DUBNER: And then it becomes game theory, which becomes harder and more fun.

MOSKOWITZ: Exactly. That would actually be a more interesting test, wouldn’t it: figure out which students realize that, and let them in.

Moskowitz plays tennis, where there’s plenty of opportunity for a rethink on the sequencing of shots.  

MOSKOWITZ: If you’re serving, for instance, one of the best strategies is a randomized strategy, like a pitcher should use in baseball. And I’m not very good at being random, just like most humans. I’ll say to myself, “Well, I hit the last couple down the middle. Maybe I should go out wide on this one.” But that’s not really random. What I should do is what some of the best pitchers in baseball do. Rumor has it Greg Maddux used to do this: recognizing that he was not very good at being random, he would use a cue in the stadium that was totally random. For instance, are we in an even or an odd inning? Is the time on the clock even or odd? Some cue that gave him a rule like, “I’ll throw a fastball if the clock ends on an even number, a slider if it’s odd.” I should do the same thing: “If the score is even I’ll serve one way, if it’s odd the other,” or whether I count five blades of grass on the court as opposed to three. Something that’s totally random and has nothing to do with it allows me to supply that random strategy, which my brain is not very good at doing. Most people’s brains are not.
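As a toy version of that cue trick (my own sketch, nothing Maddux or Moskowitz actually uses), you can let an arbitrary external number, like the clock, make the call for you:

```python
import time

def pick_pitch():
    """Decide fastball vs. slider from an external cue -- here, whether the
    clock's seconds are even or odd -- instead of trusting your own
    streak-haunted sense of what feels random."""
    seconds = int(time.time()) % 60
    return "fastball" if seconds % 2 == 0 else "slider"

print(pick_pitch())
```

A sharp opponent could eventually decode a single fixed cue, so in practice you would rotate cues; the point is only that the source of variation lives outside your own pattern-seeking head.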

DUBNER: It’s an interesting paradox, that it takes a pretty smart person to recognize how not smart we are at doing something so seemingly simple as being random, because it wouldn’t seem to be so difficult, right?

MOSKOWITZ: I would say that’s fairly true in general that the smartest people I know are so smart because they know all the things they don’t know and aren’t very good at. And that’s a very tough thing to do.

Interesting. “The smartest people know all the things they aren’t very good at.” Me? I’ve never been very good at learning just when to end a podcast episode. I’m going to start working on that right … now.

*      *      *

Freakonomics Radio is produced by WNYC Studios and Dubner Productions. Today’s episode was produced by Harry Huggins. The rest of our staff includes Shelley Lewis, Jay Cowit, Merritt Jacob, Christopher Werth, Greg Rosalsky, Alison Hockenberry, Emma Morgenstern and Brian Gutierrez. If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.