The Science of Success Podcast


How a Game Theory Expert Sold One Billion Bottles of Tea & What He Learned On The Journey with Barry Nalebuff

March 30, 2017 by Lace Gilger in Decision Making

In this episode we discuss the fundamental principles of game theory, we correctly guess the answers to SAT questions - without ever knowing what the questions were! We look at how to use game theory in practical ways, and go deep on how a college professor and his student started a beverage company, sold a billion bottles of tea, and competed against Coke, Nestle, and other major players to become incredibly successful, with our guest Barry Nalebuff.

Barry is a Professor of Economics and Management at Yale School of Management. A graduate of MIT, a Rhodes Scholar, and a Junior Fellow at the Harvard Society of Fellows, Barry earned his doctorate at Oxford University. Barry is the author of several books, an expert in game theory which he applies to business strategy, and the co-founder of Honest Tea, which has been named one of America’s fastest-growing companies.

We discuss:

  • What is Game Theory?

  • What are the fundamental principles of game theory?

  • The difference between being egocentric and being allocentric

  • How do you design a system that avoids death spirals?

  • Everything in life is a game

  • Barry grills me on game theory with a fascinating example

  • We crush through some SAT questions and find the correct answers - without ever knowing the questions!

  • We use a simple game to understand Nash equilibrium and how that explains third world development challenges and corruption

  • What is the prisoner’s dilemma and how does it apply to the real world?

  • How global warming demonstrates a multi-person prisoner’s dilemma

  • The concept of “signaling” in game theory and how Michael Spence won a Nobel Prize studying it

  • A real-world example of how signaling can be used to change outcomes when getting hired

  • How to use game theory to negotiate and create the best possible outcomes

  • A concrete example of how to "divide the pie” and reach a fair and “principled” conclusion in a negotiation

  • Why it's important to figure out what the pie is before you determine how to split it

  • How a professor and his student pooled their resources, started a beverage company, sold a billion bottles of tea, and competed against Coke, Nestle, and other major players

  • The concept of “declining marginal utility” and how that shaped the founding of Honest Tea

  • We explain why a function is maximized when its derivative is zero

  • The “Babysitter Theorem” and why it was critical to Honest Tea’s success

  • How Barry and Seth used the Lean Startup approach to launch Honest Tea

  • Would it make sense for Pepsi to release a perfect replica of Coke?

  • Barry’s advice for aspiring entrepreneurs

  • Be radically different

    • Solve a challenging problem

    • Succeed without being copied

  • How Honest Tea prevented their business model from being copied and knocked off

  • And much more

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Coursera Lecture] Negotiating Online

  • [Bio] Michael Spence

  • [Book] Mission in a Bottle by Seth Goldman, Barry Nalebuff, and Sungyoon Choi

  • [Movie] A Beautiful Mind (2001)

  • [Book Site] Mission in a Bottle

  • [Bio] Barry J. Nalebuff

Episode Transcript

[00:00:06.4] ANNOUNCER: Welcome to The Science of Success with your host, Matt Bodnar.

[0:00:12.7] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performers tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion.

In this episode, we discuss the fundamental principles of game theory. We correctly guessed the answers to SAT questions without ever knowing what the questions are. We look at how to use game theory in practical ways, and we go deep on how a college professor and his student started a beverage company, sold a billion bottles of tea, and competed against Coke, Nestle, and other major players to become incredibly successful, with our guest, Barry Nalebuff.

The Science of Success continues to grow with more than 800,000 downloads, listeners in over 100 countries, hitting number one in New and Noteworthy, and more. I get listener comments and emails all the time asking me, “Matt, how do you organize and remember all this incredible information?” A lot of our listeners are curious about how I keep track of all the incredible knowledge I get from reading hundreds of books, interviewing amazing experts, listening to awesome podcasts, and more.

Because of that, we’ve created an epic resource just for you; a detailed guide called How to Organize and Remember Everything. You can get it completely free by texting the word “smarter” to the number 44222. Again, it’s a guide we created called How to Organize and Remember Everything. All you have to do to get it is to text the word “smarter” to the number 44222 or go to scienceofsuccess.co and put in your email.

In our previous episode, we discussed why dieting actually predicts weight gain over the long run, how you can build a set of healthy habits that accumulate small advantages and create a healthy lifestyle over time, how habit loops are formed and how you can leverage neuroscience to create habits that stick, the concept of mindful eating and how it can transform your relationship to the meals that you eat, and more with our guest, Darya Rose. If you want to build a healthy lifestyle, listen to that episode.

[0:02:36.9] MB: Today, we have another amazing guest on the show, Dr. Barry Nalebuff. Barry is a professor of economics and management at the Yale School of Management. He’s a graduate of MIT, a Rhodes Scholar, and a Junior Fellow at the Harvard Society of Fellows. He earned his doctorate at Oxford University. He’s the author of several books, an expert in game theory, which he applies to business strategy, and he’s the cofounder of Honest Tea, which has been named one of America’s fastest growing companies.

Barry, welcome to The Science of Success.

[0:03:06.2] BN: Thanks for having me.

[0:03:07.4] MB: We’re very excited to have you on here. For listeners who may not be familiar with you and your story, tell us a little bit about your background. 

[0:03:14.8] BN: You gave me a nice intro. For many years, I couldn’t seem to hold a job, so I taught at Harvard, Princeton, and now, Yale. I’ve been here, I think, 27 years, so it’s home. I teach in the School of Management. My subjects are negotiation, innovation, strategy, and game theory. 

[0:03:34.5] MB: Game theory is something that I’m fascinated with. I love strategy and games and it’s something that I love reading about and thinking about. Actually, the original introduction I had to you and your writing was the old school book, Thinking Strategically, that you wrote with Avinash Dixit, and I’ve really enjoyed that. I’m curious, for listeners who may not know much about game theory, how would you describe sort of what game theory is and sort of the basics of game theory. 

[0:04:01.1] BN: Sure. It’s the science of interaction. You start with a simpler problem, called decision theory, and that is you make a decision and you think about how that will interact with nature. When an engineer builds a bridge, they think about tensile strength of steel, the load factor on a bridge, but you don’t have to think about how the bridge is going to respond to your actions. 

In contrast, when a strategist or a general takes a particular action, they have to think about how the other side will respond. What is their objective? What are they trying to accomplish? Only by taking into account the other side do you have a chance of being successful. Normally, people tend to be egocentric, that is they’re focused on themselves. Game theory is all about being allocentric, understanding the perspective of others.

[0:04:50.6] MB: Very interesting. Where does that tie into kind of the idea of strategic behavior?

[0:04:56.8] BN: Your strategy is both about predicting how other people respond to what you do, but also about shaping what it is that they’re going to do. If you think about the current healthcare debate, everybody would like to have a situation where people without preexisting conditions can just get insurance whenever they want, but the challenge is that if they can buy insurance anytime, then they’ll say, “Well, I’ll just wait until I need insurance, and then I’ll get it,” and then the problem is only the sickest people will go and buy insurance. That means that the cost of the insurance is going to be incredibly high and you get into what sometimes is called the death spiral. 

[inaudible 0:05:35.3] is how do you go and design a system which gives people incentives to sign up even when they don’t currently need it. One tactic is the stick, which is we’re going to impose a tax penalty on you if you don’t sign up. The other could be a carrot, which is if you do sign up, we’ll guarantee that you can continue to sign up, you can get continuous coverage, and you can’t be denied insurance going forward so long as you’ve been continuously covered.

[0:06:05.9] MB: I think that’s a great real world example. I’m curious, for somebody who’s listening and thinking, “Game theory kind of sounds interesting, but why does it matter to my life, or why it is important?” Why do you think it’s so critical to study and learn and understand game theory?

[0:06:22.2] BN: Right. I think pretty much everything we do in life is a game. It’s not always called that. Whether it be negotiating with your children, your spouse, or for a raise at work, or understanding how competitors respond to you in a marketplace. So since you’re playing a game, you might as well play it well and understand, really, what’s going on. Game theory is everything in my view. It’s everywhere. I thought, if you don’t mind, actually, I’d give you a little example that we could have some fun with.

[0:06:51.8] MB: Yeah, that sounds great.

[0:06:53.7] BN: Here is a question I’ve taken from the SAT, and I’d like your — It’s a real question. I didn’t make this up. I’d like you to tell us what the right answer is. Here are the choices; A is merits, B is disadvantages, C is rewards, D is jargon, and E is problems.

[0:07:12.6] MB: All right. What’s the question?

[0:07:14.6] BN: Oh! No, I wasn’t going to tell you the question. You see, I think, by using game theory, you can actually figure out the right answer. Let’s imagine for a moment, you’re the person writing the test. What is your objective in this? 

I think there are a couple of objectives. One; you don’t want everybody to get the right answer. You don’t want everybody to get the wrong answer. You want to be able to spread people out. Two; you want to make sure there’s only one possible correct answer, ‘cause if there are two right answers, you’re going to have to go and re-grade the exam, and it will be a nightmare. Three; you never want somebody to get the right answer through the wrong logic. Okay?

Now, you understand the perspective of the person writing the test and your choices are merits, disadvantages, rewards, jargon, and problems. 

[0:08:02.7] MB: All right. Going back to my SAT days, and I definitely should have had another cup of coffee this morning. I’m looking at it and the one to me that seems to jump out and be the least like the other four, or five, is jargon. It seems totally disconnected to the others. 

[0:08:17.6] BN: Okay. That could be a good thing, or a bad thing, because if it’s just all by itself, then maybe it doesn’t have any good decoys. Let’s go and figure out if we can use any specific principles here; merits, disadvantages, rewards, jargon, and problems. I’ll get you started a little bit. I would say that disadvantages and problems are pretty similar to each other. 

[0:08:41.8] MB: I agree. 

[0:08:43.2] BN: And so if one is right, it’s kind of hard to imagine that the other would be wrong; somebody could make a good case that the other word would also be an appropriate choice. My view is that the two of them knock each other out. Are you with me on that?

[0:08:57.7] MB: Yeah, let’s go with that. 

[0:09:00.3] BN: Now you are left with merits, rewards, and jargon. 

[0:09:04.6] MB: I think there’s definitely a distinction between — There’s obviously a distinction between merits and rewards, but they both have to me sort of this almost like positive connotation, whereas jargon just seems completely out on an island. 

[0:09:18.4] BN: The island part is dangerous, because it could be no decoys, but the question is how close are merits and rewards? Are they sufficiently close that somebody could make a valid argument that they would both work? I think the answer is yes, actually. 

Here is the first part of the question. It says, “Each occupation has its own.” You could see how both merits and rewards work for that and it’d be pretty hard to distinguish between the two as you go on. I think those cancel each other out and you’re left with jargon. 

The whole question is actually, “Each occupation has its own blank; bankers, lawyers, computer professionals, for example, all use among themselves language which outsiders have difficulty following.” You can see that jargon is the right answer.  

[0:10:06.6] MB: Ta-da!  

[0:10:07.6] BN: Okay? Now, as you understand it, let’s try one last one, because you’ve got the principle; accurate, popular, erroneous, widespread, and ineffectual. Accurate, popular, erroneous, widespread, and ineffectual. 

[0:10:25.1] MB: All right. I’m writing these down; accurate, popular, erroneous, widespread, and ineffectual.

[0:10:30.9] BN: Correct. 

[0:10:32.0] MB: Let’s see, popular and widespread obviously kind of synonymous. 

[0:10:36.6] BN: They cancel each other out. Great. Now, we’re left with accurate, erroneous, and ineffectual.  

[0:10:41.7] MB: Accurate and erroneous are kind of opposites. I think, ineffectual —  

[0:10:45.6] BN: They are indeed antonyms. That’s correct. 

[0:10:48.1] MB: Antonyms. There we go. We’re getting our vocab words in. I think that, to me, erroneous and ineffectual do have sort of similar connotations. I’d probably — I don’t know. I don’t know —

[0:11:00.0] BN: Let’s pause for a moment. Accurate and erroneous, because they’re opposites, actually are each great decoys with the other. That is, if a person reads a sentence backwards, or misunderstands the meaning of the word and flips it, they would choose the other one by mistake, but nobody could claim that that was the correct answer. 

That suggests that accurate and erroneous are our most likely candidates here. Now, we have to figure out if there is a good decoy for one of them.

[0:11:28.3] MB: It feels to me like ineffectual could be a decoy for erroneous.

[0:11:33.0] BN: The question is, if one was correct, could you really argue the other one is not correct?

[0:11:38.5] MB: I think it’d be challenging. 

[0:11:40.1] BN: I don’t know. Actually, you could be correct. You could be accurate, but also ineffectual. My example here is totally spot on, but I don’t seem to be having the effect I want, and so I could be ineffectual in this example that I’m using even if I’m not erroneous.  

[0:11:54.9] MB: Good point.  

[0:11:55.7] BN: To me, there’s actually open water between those meanings, as opposed to popular and widespread, where I think you can really make the case that there’s not open water there. To me, ineffectual is a good decoy, but a far enough decoy away that there isn’t a second right answer alongside erroneous. In fact, the full question is about how some people think that only the poor and less educated use slang, but this idea is erroneous.

Anyway, my point in this is that it’s actually possible, by understanding what the test maker is trying to achieve, to figure out the right answer without even reading the question. Of course, it’s easier to do the problems reading the questions. If you understand what the other side is trying to achieve, then you can accomplish what it is that you’re trying to achieve. That’s the essence of game theory.

[0:12:50.2] MB: That’s a great demonstration, and it’s really fascinating. I think it does an amazing job of kind of highlighting the point that just by understanding the other party and their incentives in the way that they think you can get a tremendous amount of information.  

[0:13:05.0] BN: That’s the idea. 

[0:13:06.3] MB: I’m curious, what are some of the kind of core mental models or concepts that come out of game theory? 

[0:13:13.4] BN: One of the most important is the idea of equilibrium, and this goes back to John Nash who won a Nobel Prize, had a movie done about him, A Beautiful Mind, starring Russell Crowe. This is the question of, “How do I figure out what the other side is going to do when they are trying to figure out what it is that I’m going to do?” That’s a challenge because, essentially, it can’t be I’m responding to your actual actions, I have to respond to what I think you’re going to be doing while you’re thinking about what it is that I’m going to be doing. 

Here is another simple game that we can play. We each have to pick a number between 1 and 10. If the two of us pick the same number, we’ll have a third party, your producer, pay each of us that amount of money. If we don’t pick the same number, we both get zero. Do you understand the game so far?

[0:14:09.9] MB: Absolutely.

[0:14:11.2] BN: All right. I think seven is really a lucky number. I like seven. A lot of people, I understand, pick seven. Of course, you could pick any other number. Now, you can see that there’s a little bit of a paradox in the choice, which is if we both pick 10, we both get more money. You might be saying, “Although I’d like to pick 10, I’m a little worried that Barry is going to pick seven. Barry might have to pick seven, ‘cause he’s afraid that Matt is going to pick seven because he thinks Barry is going to pick seven.” We both end up worse off, or maybe we don’t even coordinate; I end up picking seven, and you pick 10, and we both get zero.

One of the challenges that exists with Nash equilibrium is that there can be more than one, and we may fail to coordinate on the best one. This simple example goes a long way towards explaining why we have development issues in many third-world countries. We would like to be in a situation where nobody pays bribes and nobody asks for bribes. We want to get rid of corruption, but if I believe that the official is going to want a bribe, and if I don’t pay the bribe then I won’t get my new passport or my new driver’s license, then I will have to offer the bribe, the person will end up taking it, and we end up in a situation where the economy gets stuck in this corrupt equilibrium and doesn’t advance as quickly as it might.

You can’t change this just by having one person change. If I go from 7 to 10 and the other person doesn’t flip, it doesn’t help either. You have to have this coordinated move, and that’s not so easy to do.
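The 1-to-10 game Barry describes can be sketched in a few lines of Python. This is my own illustration, not from the episode; it enumerates the payoffs and checks which number pairs are Nash equilibria, i.e. outcomes where neither player can gain by unilaterally switching:

```python
def payoff(a: int, b: int) -> tuple:
    """Payoffs for players choosing a and b (each 1..10):
    both get the common number if they match, zero otherwise."""
    return (a, a) if a == b else (0, 0)

def is_nash_equilibrium(a: int, b: int) -> bool:
    """(a, b) is a Nash equilibrium if neither player can do
    better by deviating while the other's choice stays fixed."""
    best_dev_1 = max(payoff(x, b)[0] for x in range(1, 11) if x != a)
    best_dev_2 = max(payoff(a, y)[1] for y in range(1, 11) if y != b)
    return payoff(a, b)[0] >= best_dev_1 and payoff(a, b)[1] >= best_dev_2

equilibria = [(a, b) for a in range(1, 11) for b in range(1, 11)
              if is_nash_equilibrium(a, b)]
print(equilibria)  # all ten matching pairs: (1,1), (2,2), ..., (10,10)
```

Both (7, 7) and (10, 10) come out as equilibria even though (10, 10) pays more, which is exactly the coordination problem, and the corrupt-equilibrium point, in the discussion above.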

[0:15:50.9] MB: That’s a beautiful demonstration going from a very simple game to an extremely concrete real world application. I’d be curious, could you explain another one I know is very popular and one of the cornerstones of game theory: the Prisoner’s Dilemma. For listeners who may not know what that is, or maybe have heard of it but don’t really understand it, I’d love to hear your explanation of it, and then maybe, if you can think of one, perhaps a real-world instance of the Prisoner’s Dilemma as well.

[0:16:18.4] BN: Yeah, I sometimes shy away from it, not because it isn’t a [inaudible 0:16:22.7] example, but because people end up thinking that’s all there is to game theory, the idea of the Prisoner’s Dilemma. Anyone who has seen a detective movie knows the drill. Two prisoners are interviewed in separate cells and each one is told that if they confess and they’re the first to confess, they’ll get a lighter sentence, maybe even get to turn state’s evidence. Whereas if they both confess, there’s a whole lot less value in those confessions, and so it doesn’t work out so well for them. On the other hand, if neither confesses, they may actually get a light sentence, or not even be convicted.

The problem is that if your colleague in crime doesn’t confess, it turns out that the leniency you are shown really makes it worthwhile for you to confess. Similarly, if your rival, your fellow criminal, does confess — oh my God! You surely better confess too, because otherwise they can have the book thrown at you.

Whatever happens, it’s in your interest to confess, and then when both sides confess, they don’t do so well compared to the situation where neither confesses. By the way, we use this not just for criminals, but also in antitrust enforcement and in corporate crime. If it turns out there’s been a conspiracy or an antitrust violation and one company comes forward, they often end up getting amnesty as a result. If you know that your rival, your co-conspirator, has this incentive to come forward and be a whistleblower, then you may decide you have to do that too, because otherwise you’ll be left having the book thrown at you.
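The logic of the dilemma can be made concrete with a small payoff table. The numbers below are my own illustration (the episode doesn’t give specific sentences); what matters is their ordering, which makes confessing a dominant strategy:

```python
# Years in prison for (me, partner); lower is better.
# "C" = confess, "S" = stay silent.
years = {
    ("S", "S"): (1, 1),    # neither confesses: light sentences
    ("C", "S"): (0, 10),   # I turn state's evidence, partner gets the book
    ("S", "C"): (10, 0),   # the reverse
    ("C", "C"): (6, 6),    # both confess: the confessions are worth less
}

def best_reply(partner_choice: str) -> str:
    """My sentence-minimizing choice, given the partner's choice."""
    return min(["C", "S"], key=lambda me: years[(me, partner_choice)][0])

# Confessing is best whatever the partner does...
assert best_reply("S") == "C" and best_reply("C") == "C"
# ...yet mutual confession (6, 6) is worse for both than mutual silence (1, 1).
```

This is exactly the trap Barry describes: each player’s individually rational move produces a collectively worse outcome.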

It’s used in many contexts, and sometimes we think of this as a bad thing if you’re a prisoner, but it’s a good thing if you are the law enforcement. Then, the question — Go ahead.

[0:18:10.2] MB: Continue please. 

[0:18:11.9] BN: Then, the question is how do you get out of it? It could be that, “Well, okay, I’m going to meet up with you in jail. If I do that and you confessed, the other prisoners will say, ‘Wait a second. This guy was a rat. He confessed,’ and they’re going to punish you quite severely in prison for that.” That’s a good enough deterrent.

If we’re actually coming across people again and again and we have the ability to punish them in the future for what they did with that confession, then that’s how the mob often prevents people from turning. What’s not possible in a single interaction becomes much more possible when you run into the same people again and again.

We can also think of multi-person prisoner’s dilemmas, and you can think of global warming in that light: it’s in my own interest to drive a car, to fly in an airplane, to heat my house, to use an air-conditioner. If other people are all doing that and the planet is going to heat up, well, I can’t stop it, so I might as well enjoy life now. If nobody else is going to do it, then it doesn’t really hurt for me to go ahead and expend a little bit more carbon.

In some sense, whatever anybody else does, I want to be a little bit more of a carbon user. Then, when everybody acts that way, we end up putting way too much carbon in the atmosphere and we globally suffer the consequences. Each individual has an incentive to do something that’s not good when it’s done collectively.

[0:19:50.1] MB: Is that the same as the concept of the tragedy of the commons, or is there a distinction there with the multi-person prisoner’s dilemma?

[0:19:57.5] BN: Nope. The tragedy of the commons is pretty much the multi-person prisoner’s dilemma, and it’s one reason why people believe there’s a role for government regulation. We think of the invisible hand, Adam Smith, where prices guide people to do the right things, but sometimes those prices aren’t right, because you’re not correctly charged anything when you take this action of putting carbon in the environment. If you don’t have a price mechanism, there’s no sense in which the way people play these games will necessarily be good for themselves or for society as a whole.

[0:20:34.1] MB: Another concept from game theory that I’ve heard you talk about before is the idea of signaling. Can you talk a little bit about that and kind of explain that? 

[0:20:41.1] BN: Sure. Michael Spence won a Nobel Prize for his work on signaling, and it’s a little embarrassing to me, because part of the theory of signaling is that you get a degree not for the stuff that we teach you, although we’ll teach you about signaling theory, but for what it says to the rest of the world.

Let me give you an example with my MBA students. Imagine that you are a smart woman and you want to convince your employer that you are really going to be there and make a commitment to this company. The employer is sexist and, perhaps even illegally, is thinking about discriminating against women, because they’re worried they’re going to leave, have kids, and start a family. Therefore, this person doesn’t want to make the investment in the employee that they would make if they knew this person would be staying on. 

Now, the employee says to the firm, “Okay. Look, I’m really — I’m committed. I’m the kind of person who will stay here through thick and thin.” The problem is anyone can say that and it doesn’t mean anything. How do you take an action which conveys you’re the type of person who really means that, as opposed to just saying it?

One way of doing that is going and getting an MBA. You go and you spend $120,000 in tuition. You spend two years of your life listening to professors like me and you’re able to endure that. Why would you have done that and then leave the labor force right away? You can say, “Look, I took these actions that only make sense if, in fact, I am planning to be here for the next 15 or 20 years.”

A lot of people will say that they are committed to this company, to being a professional, but you can look at the actions that I’ve taken and see that I’m not that average person; I’m really the one who is going to go and make this happen.

A nice example, I think, from Steve Levitt, Freakonomics, is a dentist who’s getting a little on in years and wants to convince patients that he or she is not about to retire, so the dentist goes and buys new furniture for the office. Like, “Look, if I was going to retire in the next six months, I would have just let that [inaudible 0:23:01.4] kinda hang out there a little bit longer. It wouldn’t make sense for me to redecorate the office and buy new furniture and new equipment if I was about to take down my shingle.” Anybody can say they’re going to be staying on, but the fact that I have made this investment suggests I’m really going to be around for a little bit longer.

[0:23:21.2] MB: One of the other concepts that you’ve talked about in the past is the idea of using principled arguments in negotiation and kind of the concept of dividing up the pie. I’d love to explore that idea briefly. 

[0:23:34.5] BN: Sure. I’ll put in a little plug. I’ve created a free online negotiation course at Coursera, coursera.org, and you can learn all of it there, but here’s the preview: a lot of people think that negotiation is about who can yell louder. “Shake my hand, do this deal, five, four, three, two, one, shake my hand now, say yes.” It’s sort of how Dwight negotiates in The Office. That’s not a principled approach.

I want to say, “What type of arguments can you make that might persuade other people about what is appropriate, what is fair?” I spent a lot of time thinking about what is the pie? Why are we having a particular negotiation? I can use a simple example, if you’d like. We have A and B, two parties who have nine to divide up. In the sense that if they can reach an agreement, there’s a pie of size nine that they can share. 

Now, in order to figure out what will happen, I also have to say what they will do if they can’t reach an agreement. Let’s say A can get one on his own, and B can get two on her own. If they reach an agreement, A and B together can get nine. If not, A can walk away with one and B can walk away with two. Now, the question is how do we divide up the nine?

What most people say is that B is, in some sense, twice as strong as A, and so B should get six and A should get three. If B can get twice as much when they don’t reach an agreement, perhaps B should get twice as much when they do. My view is we fundamentally misunderstand power and proportionality, and the right way of thinking about this is that A and B without an agreement can collectively get three; A gets one, B gets two, so collectively they get three. If they reach an agreement, they can get nine. There’s an extra six they can get by reaching an agreement. Who is more important for that agreement? A or B? My answer to that is they’re equally important. If A walks away, that six disappears. If B walks away, that six disappears.

That means A and B are equally important to that six, so you should divide it three and three, so A will get four and B will get five. That is the principle, which is figure out what the pie is. Figure out what the two of you are able to create by working together, rather than not reach an agreement, and split those gains. 
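Barry’s arithmetic can be written out as a tiny function. This is just a restatement of his example, my own sketch of the “split the pie” principle:

```python
def split_the_pie(a_alone: float, b_alone: float, together: float):
    """Split the gain from agreement equally, since both sides are
    equally essential to creating it; each party keeps its walk-away
    value plus half the pie."""
    pie = together - (a_alone + b_alone)  # extra value created by agreeing
    return a_alone + pie / 2, b_alone + pie / 2

# Barry's numbers: A gets 1 alone, B gets 2 alone, 9 together.
print(split_the_pie(1, 2, 9))  # (4.0, 5.0), not the "proportional" (3, 6)
```

The pie here is 9 - (1 + 2) = 6, and splitting that 6 equally gives A four and B five, matching the split described above.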

[0:26:21.0] MB: I love that example, and I think it’s a great way to look at it, because if you think about a 50-50 split off the bat, it’s not quite equitable, and if you think about a two-thirds, one-third split, it’s not quite equitable either. But by really looking at all of the different outcomes and what the parties can achieve on their own versus what they can create together, I think you achieve the most fair, and I guess as you would say, principled split of the proceeds.

[0:26:46.6] BN: The argument doesn’t depend on which side you’re taking, and I think that that aspect of, “I can make that argument for either side,” is a critical component of what it means to be fair and reasonable. 

[0:26:58.7] MB: I think that’s a great segue to dig into Honest Tea, which we haven’t talked about yet. Obviously, you’re an expert in game theory, but on top of that, you and one of your students actually founded one of the fastest growing companies in America, a beverage company that has been incredibly successful. I’d love to just kinda hear this story —

[0:27:18.8] BN: We just sold our billionth bottle. 

[0:27:20.2] MB: Congratulations.  

[0:27:20.6] BN: [inaudible 0:27:21.2] McDonald’s signs up there, “Billions and billions served.” 

[0:27:25.4] MB: That’s amazing. That’s really cool. It’s so fascinating. I’d love to hear the story of how a professor and a student start a beverage company and go up and compete against the likes of Coke and Nestle and other giants in that space. 

[0:27:39.7] BN: You’d think it’d be a recipe for disaster, in the sense that neither of us had much experience in starting a company, let alone a beverage company, but we had some ideas. In fact, we had, on our side, passion, we had luck, and we had economic theory. I’ll emphasize the economic theory, because that’s my job.

One of the key lessons we teach in economics is declining marginal utility. If you like ice cream, the first scoop is really good, the second is okay, and by the time you’re on the 10th scoop, it’s like, “I’m kinda full now. I’m not so interested in having a little more ice cream.”

Same thing whether it be shirts, or shoes — well, maybe not shoes. For most things, as you have more and more of them, the incremental value of the next one is less and less. I think that’s true for sugar. You add a little bit of sugar to a beverage, it takes away the bitterness. The next bit adds some flavor, and each incremental one is less and less good, but it carries the same number of calories.

It didn’t make any sense to us that all the beverages out there were either zero calories and often very sweet with diet sweeteners, or 140 calories and basically liquid candy. Why wasn’t there a normal beverage with one or two teaspoons of sugar? We figured out that we weren’t alone in wanting to have something like that. With that inspiration and insight, we thought we could just make a tea that tasted like tea and use an old-fashioned recipe, kind of fire, water, and leaves, and not much else. That was the start of Honest Tea.

[0:29:32.3] MB: I think it’s really interesting, and you talked about the concept of declining marginal utility. I don’t know if you still do it or not, but you used to actually put a curve on the bottles that demonstrated kind of the tradeoff between calories and flavor in terms of sugar content.  

[0:29:49.2] BN: I think that label may now be a historical item, although until pretty recently, it was on the Green Dragon Tea. This, again, is a case where only real wonks could get the inside line, but in calculus, we learn that a function is maximized where its derivative is zero. What that means in normal-person speak is that when you’re doing something and you’re right at the optimum, when you’re doing it as best as possible, if you make a small change, you add a little bit more, a little bit less, it has almost no impact on the result.
 
In particular, imagine that you came up with a recipe which maximized the flavor based on how much sugar was in there. That one wins the blind taste test. Now, we cut back the sugar 10%. Essentially, since the recipe was optimized, cutting back the sugar by 10% will have almost no impact in terms of what people think of the flavor, but it will cut back the calories by 10%. That is a direct linear result.
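Barry’s argument here is the first-order condition from calculus. A toy numeric sketch makes the asymmetry concrete; the quadratic flavor curve and the specific numbers below are assumptions for illustration, not Honest Tea’s actual recipe data:

```python
# Toy illustration: near a flavor optimum (where the derivative is zero),
# a small sugar cut barely changes flavor but cuts calories linearly.
# The quadratic flavor curve is an assumption, not real taste-test data.

def flavor(sugar_tsp, optimum=2.0):
    """Flavor score, maximized at `optimum` teaspoons; derivative is zero there."""
    return 100 - 25 * (sugar_tsp - optimum) ** 2

def calories(sugar_tsp):
    """Calories scale linearly with sugar (~16 kcal per teaspoon)."""
    return 16 * sugar_tsp

s_opt = 2.0           # recipe that wins the blind taste test
s_cut = 0.9 * s_opt   # same recipe with 10% less sugar

flavor_loss = (flavor(s_opt) - flavor(s_cut)) / flavor(s_opt)
calorie_cut = (calories(s_opt) - calories(s_cut)) / calories(s_opt)

print(round(flavor_loss, 3))  # ~0.01: roughly a 1% flavor loss
print(round(calorie_cut, 3))  # 0.1: a full 10% calorie cut
```

Because the flavor curve is flat at its peak, the loss from the sugar cut is second-order (about 1% in this sketch), while the calorie savings is first-order (the full 10%).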
 
What’s interesting is that the product which wins the blind taste test is not actually the best product in the market. Another way of thinking about a blind taste test: if you like, your eyes are closed and your mouth is open. If we flip that, we’d have a test where your eyes are open and your mouth is closed. What is that? That’s a test where you go and read the label. The ideal label, if you like, has zero calories and nothing artificial. The problem is that doesn’t always taste so great.

For the same reason that I wouldn’t want the product to win the eyes-open-mouth-closed test, I don’t want the product to win the eyes-closed-mouth-open test. I think the right test is something in between the two, where you read the label and you taste the product. That’s going to lead you to something which is less sweet than what wins the blind taste test, and sweeter than what wins the eyes-open-mouth-shut test. So to speak, that’s the sweet spot that Honest Tea lives in.

[0:31:56.7] MB: Another theory you’ve talked about that helped inform the start-up of Honest Tea was the idea of the babysitter theorem. Will you talk a little bit about that?

[0:32:04.5] BN: Yeah, the babysitter theorem. The basic idea here is that nobody hires a babysitter to go eat at McDonald’s. It’s going to cost you 50 bucks for the babysitter, 10 bucks for McDonald’s. Now, you should’ve said, “Wait a second. I just spent $50 to go to McDonald’s? That makes no sense at all.”

What is the larger — You might spend $50 to go out and get a fancy dinner at a white-tablecloth restaurant. The idea is if you have to spend a lot of money to get out the door, you’re going to go and do something of high quality, not low quality. To apply that result to Honest Tea, in our case, the babysitter is the bottle, the label, the cap.

If we were to fill up a bottle just with air, but put on the cap and the label, it could cost us $0.60 to get out the door. If you’re going to spend that much money on packaging, you might as well spend a little bit more on ingredients. 

The other guys out there were spending a penny or so on tea, a couple more cents on high-fructose corn syrup. We think if we spend a nickel on tea, people can actually tell the difference. It’s going to raise the price maybe from $1 to $1.10. If you’re already spending your dollar, you might as well spend a buck-10 and get something truly amazing in terms of quality. The babysitter theorem helped us get there.
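The arithmetic behind the babysitter theorem is simple; here is a back-of-envelope sketch using the rounded figures from the conversation (illustrative, not Honest Tea’s actual unit costs):

```python
# "Babysitter theorem" back-of-envelope: if packaging is the big fixed cost
# of getting out the door, upgrading the ingredients is cheap by comparison.
# All figures are the rounded numbers mentioned in the conversation.

packaging = 0.60       # bottle, cap, and label: the cost to "get out the door"
cheap_tea = 0.01       # roughly what incumbents spend on tea per bottle
better_tea = 0.05      # spending a nickel instead: a difference drinkers can taste
price_before = 1.00
price_after = 1.10

extra_ingredient_cost = better_tea - cheap_tea        # $0.04 more per bottle
extra_price_to_consumer = price_after - price_before  # $0.10 more at the shelf

# The ingredient upgrade is tiny next to the packaging you're paying anyway:
upgrade_vs_packaging = extra_ingredient_cost / packaging
print(round(upgrade_vs_packaging, 3))  # 0.067: under 7% of the packaging cost
```

Like hiring the $50 babysitter and then choosing the fancy restaurant over McDonald’s, once you are committed to $0.60 of packaging, a four-cent ingredient upgrade is a small increment for a large quality gain.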

[0:33:32.3] MB: Being a professor and a student, how did you take this idea and go from concept to purchase order and then, from a purchase order to product on shelves?

[0:33:46.5] BN: It’s a long story, which we tell in our book called Mission in a Bottle. That’s another plug [inaudible 0:33:53.7] I’m allowed. That’s where we had a huge amount of passion. Seth is brilliant. He’s tireless. He is inspiring. We started out using the Lean Startup approach, which in this case meant making tea in our kitchen, taking an old Snapple bottle and washing off the label, using rubber cement to glue on a hand-printed label, filling up the bottle with the tea that we had made, putting the cap back on ourselves, and bringing it to a buyer at Whole Foods, who fortunately liked the way the product tasted, and he ordered a truckload. Then, we had a couple of months to figure out how to make it.

If we couldn’t have sold it to the buyer at Whole Foods, who was our perfect target customer, then we would have realized that we hadn’t truly understood the market that we thought we’d understood.

[0:34:41.1] MB: For the listeners out there, we will definitely include in the show notes Mission in a Bottle. We’ll include the negotiating course on Coursera and links to everything else we’ve talked about. That stuff will all be in the website. Everybody can make sure to check that out. 

[0:34:54.2] BN: I’ll take you back again to one last bit. Imagine — We’ll stick in the beverage world. Imagine for a moment you get to be the CEO of — I don’t know, Pepsi, and you have the opportunity to get Coca-Cola’s secret formula. It’s not an ethical issue. It’s not a legal issue. What would you do with that if you had it?

[0:35:16.7] MB: That’s a good question. Do you produce it or not? I don’t know. 

[0:35:21.5] BN: Yeah, that is the question. I’ll give you one shot and then I’ll flip the cards.

[0:35:27.8] MB: All right. Fair enough. I would say — This almost happened with the New Coke fiasco back in the ‘80s, and it was a disaster for Coke, and they ended up completely reversing course. I think it’d make sense to produce it in some way or another, whether it’s under the same label or not. If nothing else, just to try to kinda knock ‘em down a peg.

[0:35:50.0] BN: Let’s say that you did that, and now there’s sort of another product out there that tastes just like Coca-Cola, separate from whether or not the taste of Coke is really what matters the most, or whether it’s really its association with America and the brand. If that happened, that’s likely to bring the price of Coca-Cola down, because they’re now going to have to compete more aggressively against this perfect substitute, this generic version of Coke that’s really more than generic, it’s a perfect replica.

When the price of Coca-Cola comes down, what is that going to do to the folks at Pepsi?  

[0:36:21.8] MB: Lower their prices.  

[0:36:24.2] BN: It’s going to probably force them to respond with lower prices. The last thing you want to do is make the world of Coca-Cola more competitive, because that is going to come back and bite you. By playing out the moves and countermoves, you can see that, actually, the best thing to do, along with probably the ethical and legal thing, is to throw away that recipe and never look at it.
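The move-and-countermove logic can be sketched as a stylized profit comparison. Every price, cost, and demand figure below is invented for illustration; only the direction of the effect matters:

```python
# Stylized sketch of the Coke-clone game (assumed numbers, not market data):
# releasing a perfect Coke substitute drags Coke's price toward cost, which
# forces Pepsi to cut its own price too, shrinking everyone's margins.

def profit(price, cost, demand):
    """Simple per-period profit: markup times units sold."""
    return (price - cost) * demand

cost = 0.30  # assumed unit cost for any cola

# Status quo: differentiated products support comfortable prices.
pepsi_status_quo = profit(price=1.00, cost=cost, demand=100)

# After launching the clone: Coke's price falls toward cost, Pepsi follows,
# and the clone's own thin-margin sales don't make up the difference.
pepsi_after_clone = (profit(price=0.70, cost=cost, demand=100)   # Pepsi, price war
                     + profit(price=0.40, cost=cost, demand=50))  # the clone

print(pepsi_status_quo > pepsi_after_clone)  # True: better to shred the recipe
```

Under these assumed numbers, Pepsi earns 70 in the status quo versus 45 after the clone, so the best response to acquiring the formula is to destroy it, exactly as Barry concludes.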

[0:36:48.9] MB: That makes a lot of sense. In many ways, it’s kind of the same concept behind cartels like OPEC that would prefer to keep prices at a certain level in order to maintain all of their margins.  

[0:37:01.0] BN: Sure. They are actually doing what would be illegal in the United States in terms of restricting output. Here, there’s a question of, “Do I want to go and make a perfect copy of what my rival is making in the market? I may not even be able to, but if I could, the answer is I probably don’t, because that forces my rival into a more competitive situation. If my rival’s prices come down, then mine will probably come down too.” That’s not colluding by not doing it. There’s no requirement that I go and force my rival into a greater competitive environment. In fact, I generally want to differentiate my products from a rival’s rather than copy them.

People’s first instinct is often, “Ho-ho! I got it. I can really screw the rival. I can beat them up. I can trash them. Let me go and do it.” It’s a little bit of a game theory insight. You realize that it’s unlikely to actually help you in the long run.

[0:37:57.8] MB: That’s a great lesson and, again, shows game theory is not something that just exists in textbooks but something that’s incredibly applicable to all kinds of different fields of our lives. I’m curious, what would be a piece of advice you’d have for somebody who’s listening who might be kind of an aspiring entrepreneur that wants to follow, in one way or another, in the footsteps of you and Seth and what you’ve done with Honest Tea?

[0:38:24.9] BN: Find something that truly makes a difference. In economics, we sometimes say that if you’re just 10% better, or 1% better, the whole world will come to you, but it turns out that’s not true. You have to be radically different and better in order for folks to care and pay attention. I think that’s a starting point. That also is a reason to have a passion. 

I think there are two aspects you need to have a successful business. One is you have to have the solution to a great problem. That’s what most people focus on. You also have to figure out why you’re going to continue to succeed even after the world knows about what you’ve done. That is, why won’t others copy you, or, having copied you, why will you still succeed after they’ve copied you? That’s often a harder challenge. The problem is that most good ideas are good ideas for somebody else. They’re not good ideas for you. Work on that as well when you’re trying to come up with your great entrepreneurial idea.

[0:39:30.8] MB: What prevented Honest Tea from being copied?  

[0:39:33.7] BN: Let me take a step back. Before doing Honest Tea, I thought about mixing orange juice and club soda. I do that myself. I think it’s a great drink. If you like, it’s an organic, all-natural soda that’s half the calories of orange juice. You could sell it, and it would cost half as much as making orange juice and sell for the same price. Better margins. It’s all good.

The problem is that if we had made that, I think we’d be test marketing for Tropicana. They could come in, perfectly copy what it is that we’d done, and I would be a bitter professor saying that others had stolen my idea. For 10 years, I didn’t do anything with this, because I was afraid that in the end it would be copied and even if we [inaudible 0:40:17.4] succeed, we wouldn’t have won anything in the long run.

The nice thing about tea is that the way in which we are making it, literally boiling water and putting in tea leaves, was not something that the big players, whether at the time it’d be Nestle, or Snapple, or Arizona, were doing. They’re using syrups and concentrates and powders. Our more artisanal way of doing things was a little hard for them to copy within what they’re set up to do.

It was also the case that they would suffer some cognitive dissonance, which is, if we’re saying what they’re making is liquid candy, then it’s hard for them to also go and make a product which isn’t so sweet, because their customers are expecting stuff that’s really sweet. They’d have to say to their customers, “Look. If you like our regular product, now you’re probably not going to like this, and so don’t drink this.” That’s not so easy for them to say. That doesn’t mean, in the end, that folks won’t copy you, but it means it will slow them down and allow us to get more of a foothold and build up a brand, which we were able to do.

[0:41:27.4] MB: What is one kind of simple actionable piece of homework that you would give to somebody listening that wants to take a first step towards implementing or learning more about what we’ve talked about today?  

[0:41:39.5] BN: Other than buying Mission in a Bottle, I’d say go and make your prototype. The best market research, I think, is: will somebody actually pay for it? I’ll give you one quick example: some students of mine wanted to make and sell organic cotton shirts. How could you figure out if there’s a market for that? You could show people pictures. You could tell the story. What you could also do is go to a custom tailor and have the person make you an organic cotton shirt. Then, you could show it to the person; they could look at it, they could hold it, they could touch it, and they could say, “Okay. Yeah, I’d buy that.” You’d say, “Great. Write me a check.”

Then, you know that it’s a real piece of demand, not just a hypothetical piece of demand. You may not have to have thousands of pieces of inventory, but having one piece of inventory makes a project look so much more real and will allow you to truly gauge demand in a much better way.

[0:42:38.4] MB: It’s a great piece of advice and something that’s so critical. It’s very easy for people to say, “Oh, yeah. That sounds like a great idea.” Unless they’re willing to put hard dollars on the line and actually support it, that’s where the rubber meets the road. 

Where can people find you and Mission in a Bottle and all of your other books online? 

[0:42:58.9] BN: Well, missioninabottle.net, [inaudible 0:43:00.9] a little preview there. Barrynalebuff.com has links to everything. The Coursera course has the free online negotiation class. That’d get you pretty well started, I’d say. Of course, Amazon pretty much has everything.

[0:43:16.3] MB: We’ll make sure to have all of those included in the show notes. Barry, thank you so much for being on the show. It’s been a fascinating conversation. I enjoyed having the tables turned and testing my SAT test taking abilities and my game theory knowledge. Thank you so much for coming on here and sharing all your wisdom. 

[0:43:33.4] BN: Thanks for being a good sport and for having me. I appreciate it. It was fun.  

[0:43:37.3] MB: Thank you so much for listening to The Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I love hearing from listeners. If you want to reach out, share your story, or just say hi, shoot me an email, my email is matt@scienceofsuccess.co. I’d love to hear from you and I read and respond to every listener email.

The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please, leave us an awesome review and subscribe on iTunes, because that helps more and more people discover The Science of Success. I get a ton of listeners asking, “Matt, how do you organize and remember all this information?” Because of that, we’ve created an amazing free guide for all of our listeners. You can get it by texting the word “smarter” to the number 44222, or by going to scienceofsuccess.co and joining our email list.

If you want to get all this incredible information, links, transcripts, everything we just talked about and much more, be sure to check out our show notes. You can get them at scienceofsuccess.co. Just hit the show notes button at the top.

Thanks again, and we’ll see you on the next episode of The Science of Success.

March 30, 2017 /Lace Gilger
Decision Making

Are Babies Racist? Is Empathy Bad for Society? And More with Dr. Paul Bloom

February 23, 2017 by Lace Gilger in Mind Expansion, Decision Making

In this episode we start with a dive into evolutionary psychology and how biases have been programmed into you by millions of years of evolution, look at why our guest condemns the concept of empathy, how the science demonstrates that empathy has no correlation with doing good in the world, how empathy creates disastrous outcomes, and more with our guest Dr. Paul Bloom.

Dr. Paul Bloom is a Professor of Psychology and Cognitive Science at Yale University and received his PhD from MIT. Paul is the co-editor of the journal Behavioral and Brain Sciences and author of several books including Just Babies: The Origins of Good and Evil, and most recently Against Empathy: The Case for Rational Compassion.

  • We dig into Paul’s research on babies and their innate sense of right and wrong

  • A surprising and extremely powerful source of bias that babies innately have

  • The in-group vs out-group and how babies slice up and divide the world

  • How dividing a group by coin flips can create serious behavioral biases towards your own group

  • Evolutionary psychology and how biases have been programmed into you by millions of years of evolution

  • The morality of evolution and how kindness evolved

  • How people, from an evolutionary point of view, think about strangers

  • The definition of empathy and how Paul defines it

  • Why Paul criticizes the concept of empathy

  • Why feeling the feelings of others is a really lousy moral guide

  • Why the science shows that empathy has no correlation with how much good people do in the world

  • What happens when soccer fans see someone shocked and how their brains respond completely differently if it’s a fan of their team vs their opponent’s team

  • How our natural empathy response is riddled with extreme bias

  • How empathy creates disastrous political outcomes

  • The "Willie Horton incident" and how the empathic response resulted in more rapes and murders

  • Why Paul says controversially that mass shootings are objectively less than a rounding error

  • Why being against empathy doesn’t mean we should turn into cold-blooded monsters

  • The distinction between empathy and compassion and why it’s so critical

  • How Buddhist philosophy led Paul to move away from empathy and towards compassion

  • Why it’s so critical to be aware of your biases before you can shift them and overcome them

  • Why we are more than just our biases and limitations

  • Paul’s vision for the human future and how an awareness of our biases is critical to building a future where rational and logical thinking can move us forward

  • And more!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Book] Against Empathy: The Case for Rational Compassion by Paul Bloom

  • [Book] Just Babies: The Origins of Good and Evil by Paul Bloom

  • [Twitter] Paul Bloom

  • [Yale Bio] Paul Bloom

  • [Article] Empathy and compassion by Tania Singer and Olga M. Klimecki

Episode Transcript

[00:00:06.4] ANNOUNCER: Welcome to The Science of Success with your host, Matt Bodnar.

[00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performance tick, with a focus on always having our discussion rooted in psychological research and scientific fact, not opinion.

In this episode, we start with a dive into evolutionary psychology and how biases have been programmed into you by millions of years of evolution. We look at why our guest condemns the concept of empathy, how science demonstrates that empathy has no correlation with doing good in the world. How empathy creates disastrous outcomes and more with our guest Dr. Paul Bloom. 

The Science of Success continues to grow with more than 780,000 downloads, listeners in over 200 countries, hitting number one in New and Noteworthy, and more. I get listener comments and emails all the time asking me, “Matt, how do you organize and remember all this incredible information?” A lot of our listeners are curious about how I keep track of all the incredible knowledge you get from reading hundreds of books, interviewing amazing experts, listening to awesome podcasts, and more.

Because of that, we created an epic resource just for you. A detailed guide called How To Organize and Remember Everything. You can get it completely free by texting the word “smarter” to the number 44222. Again, it’s a guide we created called How To Organize and Remember Everything. All you have to do to get it is to text the word “smarter” to the number 44222 or go to our website, scienceofsuccess.co and put in your email.

In our previous episode, we discussed the paradox of happiness, why pursuing it makes you less happy, and what you can do about it. We dug into the research about what really makes people happy. We broke down happiness into its essential components and discussed how to cultivate it, and much more with our guest Tal Ben Shahar. If you want to live a happier life, listen to that episode.

[0:02:20.8] MB: Today, we have another exciting guest on the show, Paul Bloom. Paul is a professor of psychology and cognitive science at Yale University and received his PhD from MIT. He is the co-editor of the journal Behavioral and Brain Sciences and the author of several books including Just Babies: The Origins of Good and Evil, and most recently, Against Empathy: The Case for Rational Compassion.

Paul, welcome to the Science of Success.

[0:02:45.1] PB: Hey, thanks for having me on.

[0:02:46.9] MB: Well we’re very excited to have you on here. For listeners who may not be familiar with you, tell us a little bit about your background and your story.

[0:02:54.3] PB: I’m Canadian, born in Montreal. For a long time I thought I’d become a clinical psychologist and treat children. My brother’s autistic, which is why I got into psychology, but I began to become increasingly entranced with broader philosophical questions and experimental research.

Now, I’m a professor at Yale University in New Haven, I study babies, I study adults, I study toddlers in between, and in between doing experimental research, I write books and articles for a popular audience.

[0:03:22.4] MB: I’d love to begin by diving into some of the research that you’ve done on babies, which I find really fascinating. Would you share kind of some of those findings?

[0:03:29.6] PB: Yeah, absolutely. This is work done in collaboration with my colleagues at Yale, particularly Karen Wynn who is my wife and collaborator. She runs this infant lab and we do all sorts of experiments on babies looking at their social understanding, their physical understanding, and recently about their moral understanding; their understanding of right and wrong. 

This might seem crazy to talk about a six-month-old having a moral understanding, but we discovered some really cool things. For instance, you can show babies a one-act play where there’s somebody trying to do something, like trying to get up a hill. Then a good guy comes and gently nudges our character up the hill. Then another guy comes, a bad guy, and shoves him down. If I were to show you the film, and you can look at it online on my webpage, you’d say, “Well, yeah, one guy’s a nice guy, the other guy’s a jerk.”

So we wanted to see what babies felt about this. You can’t ask babies, they can’t tell you, but they do all sorts of things. We found out that babies prefer to reach for the good guy over the bad guy. They prefer to give treats to the good guy versus the bad guy. They prefer to take away treats from the bad guy over the good guy. That’s just one example. We’ve done many experiments of this sort and we find that babies, long before their first birthday, have some sort of understanding of right and wrong.

Other studies find that babies have some sort of compassion. They like to help others, they like to support others. One body of my research explores the moral powers of the baby. At the same time though, the morality we have inborn with us, the product of evolution is in some ways very limited. Babies don’t have a natural compassion for strangers, they are insensitive to sort of moral insights like the wrongness of slavery or racism and sexism.

After writing my baby book, Just Babies, and after thinking about these issues, I began to struggle with the question of what makes us different from babies and what makes a person a good person? That led to a lot of my work now on empathy and the emotions.

[0:05:37.4] MB: So, do babies have a kind of initial or in-born prejudices and biases?

[0:05:43.5] PB: They do and they don’t. It’s not like a baby is born and, you know, doesn’t like black people, or doesn’t like gay people, or Asian people. Babies don’t have any specific biases but they are very quick to develop them. Very early on, for instance, babies prefer to look at people who look like those that they’re raised with.

A baby who is raised with all white people will prefer to look at white people; a baby raised with all black people will prefer to look at black people. In one study involving Ethiopians in Israel, where babies get to look at both white people and black people, those babies don’t show any preference.

It’s not just sort of looking and you can say, “Well who cares about what babies like to look at?” Later on, these preferences manifest themselves in all sorts of biases like who they prefer to interact with, who they prefer to give toys to. Some of the best work has looked at a really surprising source of bias that’s extremely powerful. More powerful than gender, more powerful than race, and it’s language.

Very early on, as young as you can test, babies prefer people who speak the same language that they do and they prefer to interact with them, they prefer to make friends with them. Even a slight accent pisses babies off and they prefer to go for somebody who doesn’t have the accent. Of course you see the same sort of biases in adults, although for adults, it’s more complicated; adults view some accents better than others.

But one reason why we believe that language is so important for the baby is that language is a wonderful cue to social group, and if somebody speaks a different language than you, or even with a different accent, it’s an excellent indicator they’re not from your community. Because babies are extremely prone to split the world up into “in group” versus “out group”, they look towards language as a way to do it.

[0:07:27.3] MB: Tell me a little bit more about the kind of in group, out group distinction and how babies draw that?

[0:07:33.7] PB: Well, the question could be asked about babies and could be asked about you and me. There’s no human who is perfectly impartial from one group to another. There’s nobody who loves their own child to exactly the same extent that they love someone else’s child. There’s no one who doesn’t feel more of a connection to their friends and their lovers and their family than to strangers. We split the world up to “in group” and “out group” and that way, we split it up into countries, we split it up into ethnicities and to clubs.

One of the findings from baby studies is that babies are extremely willing to do so. They come in predisposed to break the world into us versus them. You can demonstrate that in the most minimal ways. One experiment that’s been done with adults has recently been extended to kids. You just randomly split them up. For adults, you say, “Let’s flip a coin. Heads go in this corner, tails go in that corner.” It’s utterly random, it’s obvious it’s random. For kids, you hand out different colored gloves. It turns out that even this ridiculously small manipulation, splitting people up, has a powerful effect.

We prefer our own group even if it’s just the heads group or the tails group, the yellow gloves group or the blue gloves group. We like to give them more and we are happier punishing the other group. One of the aspects of human nature which I think has caused maybe the most trouble is present from the very get-go.

[0:09:02.9] MB: I think there’s a study that you’ve talked about in the past revolving around kind of babies and graham crackers or something like that. I’d love for you to share that research example.

[0:09:12.0] PB: This is some work done by Karen Wynn. You do a study where babies get to choose between two things they like. I forget exactly; I think they’re graham crackers versus Cheerios. Babies, you know, like one versus the other, whatever. They choose one. Then they watch someone else make a choice, and the weird thing that you wouldn’t have expected is babies are very sensitive to what the other person does.

They like when somebody chooses the same thing that they do and they get annoyed when somebody doesn’t. In some of the studies, they get so annoyed when somebody chooses something different. I choose graham crackers, you choose cheerios, they get so annoyed that they want to see that person punished.

And Karen in her work sees this as a sort of grounds for ideological conflict later on where as adults, we can get enraged when someone makes different choices from us. Now, when the stakes are very high, like going to war or abortion laws or whatever, that’s kind of understandable. But even when the stakes are ridiculously low, we freak out. This too I think is part of our initial equipment.

[0:10:22.0] MB: For listeners who may not have as good of an understanding of the concept of evolutionary psychology and how these biases get programmed into us via evolution, I’d love for you to just kind of explain that concept.

[0:10:36.0] PB: Well, just like our bodies, our brains are the products of natural selection. What this means is, the fact that we think the way we do, that we have the tastes and motivations and desires that we have, is to a large extent because our ancestors who did this reproduced more than those who didn’t.

This is pretty obvious for some things. It’s kind of a no-brainer why people like sex. People like sex because ancestors who didn’t like sex, or would rather copulate with a rock or a tree, didn’t produce offspring, while ancestors who did like sex did considerably better at producing offspring. It’s why we love our children.

If you didn’t love your children, if you ate your children, well, your children wouldn’t do too well in life. It’s why we prefer to drink water than to eat mud. A lot of our tastes and desires at a low level make perfect sense for a creature that’s evolved through survival and reproduction. This pertains to morality as well. It was once thought, before the time of Darwin, that evolution is sort of red in tooth and claw; evolution is a relentlessly selfish force, making us care only for ourselves. We know, and Darwin knew, that this is nonsense.

Evolution makes us kind, because creatures who are kind in certain special ways, like favoring their family over their friends and engaging in long-term alliances of mutual benefit, do better than animals that don’t. If you and I were in the savannah, and you cooperated with people and helped them out and took care of your family, and all I cared about was myself, well, your genes would do better than mine.

Evolution has shaped our morality as well, but this is kind of a tragic part, because from an evolutionary point of view, who gives a damn about strangers? Strangers are nothing. Strangers are at best potential threats. And so the fact that we right now recognize that we owe a moral obligation to strangers, that we can’t kill them, that we should even help them under some circumstances, suggests that we’ve used our intelligence to transcend evolution.

Of course we do this all the time. We evolved perceptual systems that allow us to look out at the world and see trees and water and so on. But through science, we understand what we’re really seeing are objects that are composed of tiny particles and fields of energy.

Similarly, we have a sort of Stone Age morality that evolved through evolution, but we're also smart enough to transcend it. We use our capacity for introspection and for generalization and logic to realize that some of our innate morality is unfair and capricious and that we could do better.

[0:13:19.5] MB: I think that really dovetails into your somewhat controversial view on the concept of empathy. Before we dive into that, I'd love to understand: how do you define the concept of empathy?

[0:13:32.3] PB: Yeah, that's a good question, because people see the title of my book, Against Empathy, and they freak out. I have a collection of emails like you wouldn't believe. I think it's because empathy has different meanings. That's one of the issues.

Some people use empathy just to mean everything good. "We should have more empathy" means we should be kind, we should be loving, we should be moral, and I have no objection to that. Other people use empathy in a narrower sense, as having an understanding of other people. I don't have an objection to that either.

Understanding other people, though, is morally neutral. You do need to understand other people to make the world a better place, but you also need to understand other people if you're going to seduce them or con them or torture them or bully them.

The sort of empathy I'm interested in is putting yourself in someone else's shoes, seeing the world as they do, feeling their pain. A lot of people have argued that this is really fundamental to morality, that empathy serves as a spotlight that zooms us in on people and makes them matter. What I argue in my book is that this is mistaken.

Empathy has all sorts of terrible effects. It makes us biased, because we empathize with those who look like us, who are attractive, and who belong to our group over others. It's innumerate, because empathy makes us value the one over the many, and it leads to capricious, arbitrary, and often cruel acts. A lot of violence is prompted by empathy for a victim. It leads to stupid policy decisions. It's because of empathy that governments and populations care more about a little girl stuck in a well than they do about a crisis like climate change.

Even in personal relationships, empathy can mess you up. An example I like to think about, because it's from my own life: my teenage son comes up to me and he's freaking out because he hasn't done his homework and it's due tomorrow, and he's very anxious. I'm not being a good father if I feel empathy for him in the sense that I feel his anxiety, share his anxiety, and get anxious myself. I'm best as a parent if I have some distance, if it's, "Dude, calm down, let's take a break, let's go for a walk." I love him and I understand him, but I don't feel what he feels.

I think it's the same for friendships, the same for romantic relationships. If I'm really depressed, I don't want my wife to see me and get depressed herself. I want her to try to cheer me up and try to make my life better. What we want from people, and what makes for a better world, isn't echoing their feelings. It's responding lovingly and intelligently to them.

[0:16:06.5] MB: So your definition of empathy is essentially sharing someone else's emotions, actually feeling their pain or whatever they're feeling, as opposed to a broader understanding that might encompass compassion and other things that the psychology literature would define as distinct.

[0:16:29.1] PB: That's exactly right. I'm using it the way a lot of people in the field use it, but I'm not the language police. I'm totally comfortable with people using empathy any way they want. Some people use empathy to fold together all sorts of things, some that are good, some that are bad.

The point of my book, the point of my argument, isn't about how to use the words. It's about how we should live our lives, and the case I make is that feeling the feelings of others, whatever you choose to call it, is a really lousy moral guide. It leads to messy policy, it leads to bad relationships, and we're so much better off when we try to understand people and care about them but don't feel their pain.

[0:17:14.7] MB: When people hear your stance on empathy, what are some of the typical reactions?

[0:17:21.3] PB: I've been making this argument for a while and I've gotten some great responses, some very intelligent responses. People will argue that maybe empathy isn't perfect, but without it we couldn't be moral; if we didn't feel others' suffering, we'd never be motivated to help them.

People argue that those without empathy are cruel people, that they're psychopaths, monsters. People argue that children start off being empathic and then develop compassion and other things from it, so it's an important start. There are many other arguments, and I think it turns out that all of them are mistaken. For instance, there's a lot of evidence that you can be kind to somebody and care about them, and that you can want to make the world better in general, without feeling empathy.

It turns out there's been a lot of research where you measure people's empathy and then see how that connects with what kind of person they are. The answer is, it doesn't. If I wanted to know whether you're going to try to rob me or kill me, or even just talk badly about me, your score on an empathy test would tell me very little; actually, pretty much nothing.

The real predictors of bad behavior in people are a kind of malicious nature and a lack of self-control. Empathy in whatever sense, feeling the pain of others or understanding others, seems to play no role at all in good behavior or bad behavior.

[0:18:49.6] MB: That's a finding that's backed up by a lot of science, right? It's not just conjecture.

[0:18:54.6] PB: Absolutely. There is an industry involved in testing people's empathy and looking at the relationship between empathy and behavior, and there's a lot of research where you put people in fMRI scanners and look at the brain responses reflecting empathy.

One of the cool findings, for instance, concerns the metaphor made most famous by Bill Clinton when he said, "I feel your pain." It turns out we literally feel other people's pain. If I were to watch you get stabbed in the hand while my brain was wired up to an fMRI machine, it would reveal that the parts of my brain lighting up would be pretty much the same parts that would light up if my own hand were being stabbed.

There's a lot of research on this, and it shows what I've been saying. The research shows that individual measures of empathy don't predict good or bad behavior, and it shows that the neural measures of empathy are tremendously biased. This brings us back to the in-group, out-group work we were talking about before.

They did a study in Europe where they tested European soccer fans. You're sitting there, your brain is being measured, and you watch somebody else being shocked. Half the people are told, "You see this guy being shocked? He's a fan of your soccer team." It turns out, when you do this, people say they feel high empathy, and their brains reflect it: the parts of the brain that correspond to empathy light up.

Then another group is told exactly the same thing, except, "See this guy? He's a fan of another soccer team." When you do that, the neural correlates of empathy shut down. You don't feel empathy, and in fact, as you watch him be shocked, you feel a bit of pleasure. The studies confirm what we knew from other sources, which is how incredibly biased empathy can be.

[0:20:39.2] MB: I'd love to dig a little more into the bias effects on empathy, things like racial bias and so on, and how empathy can negatively shape outcomes.

[0:20:51.6] PB: There's bias in a couple of ways. There's a sort of natural bias we carry with us. One study looked at people's empathic reactions to the suffering of those they found disgusting, like homeless people or drug addicts. It turns out the empathy is just silent. If someone grosses you out, you don't feel their pain at all; you don't feel anything for them.

Other studies find that attractiveness plays a real role. If there's an attractive eight-year-old girl, a pretty little eight-year-old girl, and she's in pain, you freak out, you feel great empathy. Someone less attractive, someone maybe a bit scary, no empathy at all.

Our natural empathic responses are biased, and similarly, empathy can be moved around by politicians, by rhetoricians, by people who want to make a moral point, to get you to feel empathy for this person or that person. Sometimes it's done for causes you might think of as good, like when a lot of concern and focus is directed at the drowned Syrian child.

You say, "Look, we all felt great empathy for him and his family and the suffering they must have gone through, so let's use that to motivate some good policy." But often, empathy is directed to get you to hate people. If I want to get you to support attacking some other country, or expelling some group from the United States, one excellent way to do so is to tell you about that group's victims and get you to feel empathy for them.

It's an observation as old as Adam Smith in the 1700s: when you watch somebody suffer, you feel empathy for them, and you feel commensurate rage toward those who caused that suffering. This is no secret among those who want to motivate cruelty and violence.

[0:22:30.9] MB: You've touched on a number of examples in the past of ways that empathy can negatively impact public policy. I'd love to hear the story of, I think it's Willie Horton, or some of the other examples you've shared previously, about how one story of empathy can lead us to make what ends up being a really terrible decision.

[0:22:52.4] PB: There are countless examples of this. You might see it right now in the politics we're wrestling with at this very moment: bailing out a company because you feel bad for its workers may have great short-term effects for the workers and sort of scratch your empathic itch, but have horrible long-term effects in the future.

Let's go to the Willie Horton case, from the 1980s. It came up during the presidential competition between Michael Dukakis and his Republican opponent. What came out was that when Dukakis was governor, he had a furlough program, in which prisoners are released for a little while, and under that program someone named Willie Horton was released.

Willie Horton went out and did some terrible things; he raped somebody, he assaulted somebody, and Willie Horton was a large and threatening African American, so Dukakis's opponents put pictures of Willie Horton everywhere. As soon as this incident happened, the furlough program was shut down. Dukakis was forced to apologize for it over and over again while people were stoked up by the terrible things this man had done.

Now, it turns out that this furlough program by most measures made the world a better place. That is, even counting the crimes committed by prisoners released on furlough, the fact that the furlough program existed led to less crime overall, and so a rational person would say, "Well, let's do the numbers. Apparently the furlough program is doing good." But that's not how we think. That's not how the mind works. The mind is swayed by these sympathetic cases.

Our empathy is triggered, and so we end up taking actions, like shutting down the furlough program, that in the end cause more harm than good. Another example, just to get you thinking, is a hypothetical: imagine there is a vaccine program and a little girl gets very sick. We'd probably shut down the program, even if a dozen people are saved by it each year, because you can empathize with the suffering of the little girl who gets sick, and with her family and everything.

But you can't empathize with the suffering of people who would have gotten sick but didn't. Empathy works in the here and now. It feeds off real cases of suffering and ignores other considerations. Or take a third example, the one I begin my book with, which is school shootings, mass shootings. I begin my book with the story of Sandy Hook Elementary School in Newtown, this horrific mass murder of 20 children, and I point out that it drew an enormous amount of focus and concern.

Many people would view it as the biggest policy problem we have, but it also turns out that when it comes to murders, to homicides in the United States, mass shootings make up about 0.1% of them. What that means is that if you could snap your fingers and make it so that there would never be a mass shooting in the United States again, nobody would notice. It would be indistinguishable from random noise. These are cases where a good, wise, compassionate policy maker says:

"I'm going to ignore the pull of my emotions. In particular, I'm going to ignore my racist biases, I'm going to ignore the things that really cause my tears to flow, and ask myself the hard question of how to make the world a better place." I think these are cases where empathy leads us astray.

Then there are individual cases, cases of charitable giving. A lot of people give to charity, and I used to be one of them and still am to some extent, giving to things for sentimental reasons, for the cuteness of the picture, for personal connections. This is a lousy way to do it. When we give to charity, we shouldn't be trying to give ourselves a warm glow or a happy buzz. We should be trying to make the world a better place, and so I'd like to see a shift away from empathy-based decisions toward decisions based on reason.

[0:26:43.0] MB: And, you know, it's funny, the example you give at the beginning of the book about mass shootings, I think it was 500 deaths in the last 10 years, I don't remember the exact stats, but it made me think of another instance. I was watching the news the other day and they were arguing about terrorism, and they threw out the stat of how many people have died from terrorism in the United States in the last 10 or 15 years, and it was 150 people.

I mean, it was a staggeringly low number when you think about the fact that it's such a huge focal point. That example and the Willie Horton example struck me. Of course, when I picked up the book, I think I had the reaction everybody has, which was, "Why is this guy against empathy?" But the more I came to understand the concept of how one vivid story can really mislead us into making what are objectively worse decisions for our society, the more fascinating it became.

[0:27:43.2] PB: I find these stories very moving in how they illustrate the ways we can go wrong, and it's not that we should blame empathy for everything. There are all sorts of other things going on here. In the Willie Horton case, certainly racism played a huge role, and I think even if empathy were stripped from our heads, powerful stories would always move us. But the argument I make in my book is that empathy is especially vulnerable to these biases. Empathy always searches for the one.

It always zooms us in on the one person. It ignores the many, it ignores hypotheticals, it ignores statistics, and so it misguides us about what's important, about what matters, and it leads to lousy policy. This brings us back to our earlier discussion of definitions of empathy. The solution isn't that we should become cold-blooded monsters. The solution is that we should still feel for people, feel real kindness and concern and compassion for people, but we should try to rid ourselves of the habit of zooming in on individuals.

Towards the end of my book, I discuss the distinction between empathy and compassion: between feeling the pain of others, which is empathy, versus just wanting to help them, which is compassion. I even talk about some fascinating work on meditation and meditative practices that illustrates the distinction. They have people do empathy training, they have people do compassion training, and they find all sorts of differences.

They also showed it's possible to make yourself somewhat less empathic but also kinder, which I think would be an indispensable skill for all of us, but particularly for people like doctors and nurses and first responders and police and firefighters, people who deal with emotional and difficult situations. The best of them can shut down their empathic responses while still caring for other people.

[0:29:38.2] MB: I'd love to dig into that a little bit more, the distinction between empathy and compassion. We actually had a previous episode where we went deep on the concept of compassion and distinguished it from empathy. In that episode, we touched a little on the idea that the main negative thing about empathy is empathy burnout: how you can become overwhelmed trying to bear the cross of feeling the emotions and suffering of others, and how if you instead focus on how to help them, you can be more proactive. But I would love to hear a little more about your take on the distinction between those two things.

[0:30:11.9] PB: My take is exactly that take. I got into it by reading a bit of Buddhist philosophy. There's a lot of Buddhist philosophy that asks the question of how you are to be a good person, and Buddhist philosophers distinguish between what they call sentimental compassion and great compassion. Sentimental compassion is what we've been talking about as empathy: it's feeling other people's pain, feeling other people's suffering.

The Buddhist scholars say, "Don't do this. It might give you a short-term buzz, but in the long run, it's bad for you. It will burn you out, it will exhaust you." The term burnout, I think, is from the '70s, but people worried about this hundreds of years ago. The alternative is great compassion, which I'm just calling compassion, which is caring about people, loving them, but not feeling their pain, and the cool thing is that this great compassion seems to be pleasant, invigorating, energizing.

It makes you a better person, but it also makes you a happier person, and a lot of contemporary meditative practice, what's called loving-kindness meditation, uses these techniques to motivate people to be better people. One argument is that they work so well because the meditative practice dampens your empathic responses. A lot of what I've been talking about is theology and philosophy and so on, but there's real evidence for this.

There's some wonderful work done by the neuroscientist Tania Singer in collaboration with the biologist and Buddhist monk Matthieu Ricard, where they put people in scanners and have them meditate in different ways, exercising their empathy or exercising their compassion, and they find all sorts of different responses. What they find is that you're invariably much better off feeling compassion.

[0:31:56.2] MB: I'm curious, you touched on this earlier, and I'm starting to think about how somebody listening can start to implement this in their life. What is the concept of a warm glow altruist?

[0:32:08.9] PB: I'm not sure where the phrase came from, but it was discussed by the philosopher Peter Singer, who talks about how some people give to charity: what they do is they have some money and they spread it around to all different charities. They give a little bit to Oxfam and a bit to Save the Whales and a bit to their local arts community and a bit to their high school football team, and they don't give that much to any one of them.

They spread it around, and this is, either consciously or unconsciously, a wonderful tactic for feeling good about yourself. With each of the different charities you give to, you get a little dopamine blast of feel-good. But Singer points out that if you want to feel good, you've come across a great technique; if you want to make the world a better place, if you really want to help people, do it differently. If you really want to help people, figure out where your money and your resources could do the most good and put them there.

Ignore pictures of adorable babies. What you should do is go online and see what people say about these charities. Does the charity test its outcomes? Is it effective? Try to figure out how to make the world a better place, and this applies even beyond money. I have a friend, a wealthy Yale professor, who would go work in a homeless shelter, and there's nothing wrong with that. That makes the world a better place. But the problem was she was doing this instead of giving money.

The truth is, with her salary she could have given a lot more money and done a lot more good than with her time at the homeless shelter, which could have been done by anybody. I know, I've talked to people, that sounds really cold. It sounds cold and unromantic, and what about the warm feelings of connection and so on? My response is, it depends on what you want. If you want to feel good about yourself, like a special person, a real helper, to get a real connection and make yourself a man of the people and all that stuff, well, there are all sorts of things you can do. Be a warm glow giver.

But if you really want to help people, do something different. So it depends on your goals. My feeling, and I am an endless optimist about human nature, is that most people really care about other people and want to make the world a better place, and if you remind them, if you prompt them, if you get them to recognize that their emotional pulls are a poor guide to their behavior, they will work hard at doing better. I know I have.

[0:34:35.3] MB: I think that, to me, was the crux of the argument and helped me really understand it: what you just said, that your emotional pulls often mislead you, and that if we zoom out from the spotlight of getting really caught up in the emotions and the vividness of the story, we can make what are objectively more rational, more statistically relevant and important interventions, as opposed to getting caught up in an emotional whirlwind.

[0:35:10.7] PB: That's a perfect summary of my argument, and you know, some people could be skeptical. You asked about responses to my ideas, and one response I often get is, "Well, maybe you're right, but what are we going to do about it? We're always going to be captured by our emotions and our gut feelings."

But again, I'm more optimistic, and I give an analogy to racism, which is that we're naturally racist. There are a thousand studies showing we're biased to favor our own, even in cases where we really don't want to be and don't think we are. But does that mean we have to throw up our hands and say we're stuck with it? Not at all.

There are all sorts of ways we can circumvent and avoid our racism. We can engage in practices that diminish it. We can set up technical means within our society, like blind reviewing or quota systems. They are very different ideas, but what they share is that they take the decision out of our hands. They avoid our biases. If you want to be a good person, you should be aware of your biases, both your moral biases and your reasoning biases, and then think hard about how to override them.

[0:36:17.4] MB: I think that's a great point as well: in order to move beyond these biases, we first have to cultivate an awareness of them, and in many ways, the dialogue around this can cut off the conversation before we really get to the point of acknowledging and accepting that the biases exist.

[0:36:39.4] PB: That's right. To some extent, I think the great contribution of psychology to modern times has been making us aware of our biases and limitations. Where some psychologists go wrong, I think, is in jumping to the conclusion that we are nothing more than our biases and limitations. I think instead there's the duality we've been talking about: we are biased, we are limited, we are swayed by irrational things, but we're also smart enough to know it.

We can use our intelligence, our self-control, and our desire to make a real difference to try to override the more emotional parts of ourselves. And we're just talking here about making decisions, moral decisions and moral actions. I have nothing against empathy in general. Empathy is a wonderful source of pleasure and intimacy; it's part of sex, it's part of sports, it's part of reading a novel or watching a movie. It's just that as a moral guide, it's the sort of thing we should really distrust.

[0:37:38.9] MB: You know, for a man who is against empathy, I think you have a very uplifting view of the direction of the human future, and I think that's a great way to think about it. I think you're totally right that many psychologists have gone almost too far to the other extreme in saying, "We can't overcome any of these biases." But I really like your uplifting perspective that we have to be aware that these biases are real, while we also have the logic and the reason and the ability to move beyond them and build a better future.

[0:38:13.0] PB: Yeah, you can see it. You can see it in the intellectual history, not just of psychology but of how people talk in newspapers and blogs and online, in how we think about ourselves. There was a time, the Enlightenment, the Age of Reason, when we thought of ourselves as, for the most part, perfectly rational beings.

Then it swung, and where we are now is that many of my colleagues will basically say, "People are idiots. We're just incredibly limited, we're just so foolish in so many ways." One of the goals of my book is to try to push that pendulum back a bit: to acknowledge all of these limitations, but also to have an optimistic view that puts a lot of focus on our reason.

After all, we wouldn't be having this conversation about our biases, we wouldn't know there were biases, unless we had this other more powerful, more rational capacity.

[0:39:07.6] MB: So for somebody who is listening and wants to concretely implement the concepts we’ve been talking about in their lives, what’s one simple piece of homework that you would give them as a starting place? 

[0:39:21.2] PB: Well, one thing, which we've touched on a few times here, is meditative practice, which is something I am working on myself. But I think there is a more general answer, one that applies to all of our biases: when you are very calm and not caught up in anything, look at your life and your decisions and try to contemplate the extent to which you're being swayed by irrational biases.

Then if you think you are, if you think, for instance, that some of your actions are short-sighted or too empathic or racist or something like that, and you don't like it, you can work to combat it, and you can work to combat it in clever ways. I have a friend who gives the simplest example: he wants to give to charity, but he knows that when he's asked to give to charity he says, "Well, I have other personal ways I could use the money. I could go out for a drink or whatever."

He feels bad about this. He doesn't feel that this is the right way to live, but he can't fight it. So at one point he said, "Look, here's what I should do," and he set up automatic deductions from his paycheck. It's very easy to do. Now, he could still change his mind, he could shut it down, but he no longer has to decide whether or not to help each time. He changed what the baseline is.

It's sort of the moral equivalent, if you're on a diet, of not keeping giant bags of M&M's in your house, or, if you're trying to give up smoking, of not going to a bar where everybody is smoking. We can be smart enough to recognize, "I am going to fall into this trap," and then think and plan ahead so that the trap can be circumvented. That, in very general terms, is how I think we can defeat those aspects of ourselves that we believe should be defeated.

[0:41:12.2] MB: For listeners who want to learn more, where can people find you and your books online? 

[0:41:17.9] PB: I have an academic website, which you can find by just typing in Paul Bloom Yale. But I'm mostly on Twitter these days; I'm just one word, paulbloom@yale, and I endlessly tweet about these issues, about academic gossip, about politics, and some excellent bad jokes. So that's where I recommend people go.

[0:41:37.9] MB: Well, Paul, thank you so much for sharing these insights. This has been a fascinating conversation. On the surface, it seems very controversial to be opposed to empathy, but peeling back the hood a little, there's a lot of merit to this framework and your understanding of reality. I think the acknowledgement that we have biases, combined with the rational optimism that we can work through them and build a better future, is really inspiring. So thank you so much for being on the show and sharing this wisdom.

[0:42:09.3] PB: Thank you so much for having me on. This has been a wonderful conversation. 

[0:42:12.8] MB: Thank you so much for listening to the Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I love hearing from listeners. If you want to reach out, share your story, or just say hi, shoot me an email. My email is matt@scienceofsuccess.co. I'd love to hear from you, and I read and respond to every listener email.

The greatest compliment you can give us is a referral to a friend, either live or online. If you've enjoyed this episode, please leave us an awesome review and subscribe on iTunes, because that helps more and more people discover the Science of Success. I get a ton of listeners asking, "Matt, how do you organize and remember all this information?" Because of that, we created an amazing free guide for all of our listeners.

You can get it by texting the word "smarter" to the number 44222, or by going to scienceofsuccess.co and joining our email list. If you want all this incredible information, links, transcripts, everything we just talked about and much more, be sure to check out our show notes at scienceofsuccess.co. Just hit the show notes button at the top.

Thanks again and we’ll see you on the next episode of the Science of Success.


February 23, 2017 /Lace Gilger
Mind Expansion, Decision Making
57 - The Hard Truth About Psychology, Learning New Skills, & Making Mistakes with Dr. Art Markman & Dr. Bob Duke-IG2-01.jpg

The Hard Truth About Psychology, Learning New Skills, & Making Mistakes with Dr. Art Markman & Dr. Bob Duke

January 12, 2017 by Lace Gilger in Best Of, Decision Making, Mind Expansion

In this episode we discuss whether time speeds up as we get older, why your life story only makes sense looking in reverse, whether or not brain games actually work, the importance of proactive learning instead of passive learning, why psychology confirms all your worst fears about studying and getting smarter – and much more with a special TWO GUEST interview featuring Dr. Art Markman & Dr. Bob Duke!

Dr. Art Markman is a Professor of Psychology and Marketing at the University of Texas and Founding Director of the Program in the Human Dimensions of Organizations.

Dr. Bob Duke is a Professor and Head of Music and Human Learning at The University of Texas at Austin. He also directs the psychology of learning program at the Colburn Conservatory of Music in Los Angeles.
Together they co-host the NPR radio show Two Guys on Your Head and recently co-authored the book Brain Briefs.

We discuss:

  • Does time speed up as you get older?

  • Why your brain pays less and less attention to things that don’t change

  • How you underestimate the power of new experiences to have a positive impact on you

  • Brains are efficient, and efficient is another word for lazy

  • Why your brain wants to keep doing what it did last time

  • How Dyson vacuums were created (and what sawmills have to do with it)

  • The importance of learning things that seem like they “don’t matter” right now

  • The downside of a linear and close-minded path of achievement

  • Why “everyone they know who is successful knows A LOT about A LOT of things” and you can’t know ahead of time what key information will make you successful

  • Why you shouldn’t edit your life story in the forward direction (and what that means)

  • Is your memory doomed to fail?

  • Why one of the worst things you can do for your memory is to worry about your memory!

  • Do brain games actually work?

  • How do you engage the mind in a way that develops thinking?

  • The difference between reading and writing and how they impact your brain

  • The importance of proactive learning instead of passive learning

  • What the data says about regret and how to deal with it

  • How learning is effortful when it actually works, and why without effort, there is very little learning

  • Is it true that we only use 10% of our brains?

  • Your brain is 3% of your body weight, but uses 25% of your daily energy supply

  • Does listening to Mozart make you smarter?

  • Why we can’t get something for nothing (and why you should stop looking for “get smart quick schemes”)

  • Why psychology confirms all your worst fears about studying and getting smarter

  • How curiosity is vital to your thinking ability

  • Why it’s OK to get stuff wrong, as long as you repair your error

  • Why every bit of skilled performance that you see has a deep reservoir of hard work hidden behind it

  • The critical importance of perception and self awareness in growing and improving

  • Why you are worst at judging your performance when you are bad (isn’t this one true!)

  • Why “expert performers” are really good at identifying all of their flaws

  • How to cultivate self awareness of your flaws in a way that’s non-threatening to you and your ego

  • Mistakes are not the problem, but denying them is

  • The critical importance of sleep

  • How sleep clears toxins out of your brain, helps you form better memories, learn more, etc

  • Think about what has brought you joy, what brings you joy, and schedule those things into your life regularly

If you want to master your mind - listen to this episode! 

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Book] Brain Briefs by Art Markman and Bob Duke PhD

  • [Podcast] Two Guys on Your Head

  • [Book] Smart Thinking by Art Markman

  • [Book] Smart Change by Art Markman

  • [Book] Mindset: The New Psychology of Success by Carol S. Dweck

EPISODE TRANSCRIPT


[00:00:06.4] ANNOUNCER: Welcome to The Science of Success with your host, Matt Bodnar.

[00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study and I am going to take you on a journey into the human mind and what makes peak performance tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion.

In this episode, we discuss whether time speeds up as we get older. Why your life story only makes sense looking in reverse. Whether or not brain games actually work. The importance of proactive learning instead of passive learning. Why psychology confirms all your worst fears about studying and getting smarter, and much more with a special two guest interview featuring Dr. Art Markman and Dr. Bob Duke.

The Science of Success continues to grow with more than 700,000 downloads, listeners in over a hundred countries, hitting number one in New & Noteworthy, and more. A lot of our listeners are curious about how to organize and remember all this information. I get tons of listener emails and comments asking me how I keep track of all the incredible knowledge I get from reading hundreds of books, interviewing amazing experts, listening to podcasts and much more.

Because of that, we created an awesome resource for you and you can get it completely free by texting the word “smarter” to the number 44222. It’s a guide we created called How to Organize and Remember Everything. Again, to get it, just text the word “smarter” to the number 44222 or go to scienceofsuccess.co and put in your email.

In our previous episode, we discussed the daily practice that works to develop self-love, how fear is often the signpost for what we most need to do next, the lessons from a 550 mile pilgrimage through Spain, how seeking too much knowledge can often be counterproductive and much more with our guest Kamal Ravikant. If you want to be inspired starting out this new year, listen to that episode.

[0:02:07.2] MB: Today, on The Science of Success, we have a special episode. Two guests at once. We have Dr. Bob Duke, who is a professor and the head of Music and Human Learning at the University of Texas at Austin. He also directs the psychology of learning program at the Colburn Conservatory of Music in Los Angeles.

We also have Dr. Art Markman, who is a professor of psychology and marketing at the University of Texas and the founding director of the program in the human dimensions of organizations. Together, they cohost the NPR radio show, Two Guys On Your Head and recently coauthored the book Brain Briefs. Gentlemen, welcome to The Science of Success.

[0:02:39.2] AM: Thanks a lot for having us.

[0:02:40.3] MB: Well we’re very excited to have both of you guys on here. For our guests who may not be familiar, can you each kind of introduce yourselves and say hi and tell us a little bit about yourself?

[0:02:49.1] AM: Sure, I’ll go first. Yeah, I’m Art Markman, I am a professor of psychology. I study the way people think, so I’m interested in reasoning and decision making and motivation, and for me, in addition to writing lots of papers that get read by 30 of my closest colleagues, it occurred to me not so long ago that almost everybody I know has a mind, but almost nobody knows how that mind works.

I try to spend a lot of my time, in addition to doing research, bringing insights from the field of cognitive science outward to other people in the hope that they might use that information to live their lives differently and probably better.

[0:03:25.3] BD: I’m Bob Duke and as you said, Matt, I’m a professor of music and human learning here at the University of Texas. Throughout my career, I’ve been studying learning and memory, not only in the context of music making but in other contexts as well. It’s always been an interest of mine because I work with a lot of people who are preparing to be teachers: what are the mechanisms by which people develop skills, form memories, and refine those skills over time?

Art and I had had several informal interactions over the years before we actually got started doing the radio show, and it’s now, I guess, going on four years, right, Art? It’s been a wonderful collaboration and a great deal of fun to be a part of.

[0:04:03.4] MB: Well, you guys have so many fascinating topics that you’ve written about and talked about. I’d love to start out with the way the book, Brain Briefs, is kind of structured: you have all these amazing questions and you kind of go into answering a bunch of them. I’d love to go through a few of these questions that I found really interesting, get your take on them, and share some of those insights with our audience. One of the first that I found really fascinating was, does time speed up as we get older?

[0:04:30.5] AM: The older you get, the more that you begin to worry about that. But since Bob’s the older one, I’ll let him share his experience on this first.

[0:04:37.6] BD: Well, the short answer is, yes. Of course, what we mean by that is that it doesn’t actually speed up, but certainly our perceptions of the passage of time change as we age, and there are a couple of explanations for that that I’ll let Art tell you about. But one of the things that’s sort of interesting about that is when you look back into your past, right?

Our perceptions of what we recall, what we remember change over time for reasons that have to do not only with an aging brain but also with just the proportion of experiences that we’ve accumulated over the course of many years of a lifetime. 

[0:05:09.6] AM: Obviously one thing that makes time feel like it’s sped up is that the older you get, the more experiences you’ve already had relative to what you’re going through right now. A year of your life when you’re six years old is an enormous proportion of your life, whereas a year of your life when you’re, say, 50 is a much smaller proportion compared to what you’ve experienced. But in addition to that, as you get older, your life tends to become more routine. You tend to rely on things that you’ve done before and as a result, you don’t lay down lots of new landmarks in your life the way you do when you’re younger.

When you’re younger you have your first time on a bicycle, your first time going to school, your first time getting in a fight on a schoolyard, or whatever it is. When you get older, you tend to do the same stuff over and over again and then when you look back on it, it’s hard to separate out all of the events, which does have the happy fact that if you continue to create lots of new experiences for yourself, like say by starting to do a radio show or something like that, then you have the opportunity to slow time down a little bit.

[0:06:13.3] BD: Yeah and I think one of the things that’s embedded in what Art’s talking about is how much our brains in their efficiencies pay less and less attention to things that don’t change. One of the ways that that routine issue that Art was talking about affects what happens to our memories is that our brain recognizes that there’s no real reason to keep reforming this memory because it’s just like the memory that’s already in there. 

I think all of us have probably experienced driving to work or driving home from the office and, you know, having many things on our mind and getting home and not remembering the trip. Well, that’s an example of how our minds can be other places when things become highly routinized.

[0:06:54.2] AM: Which, by the way, isn’t a terrible thing since the last thing you’d want to do is to clutter your mind with all the details of your daily commute. But it does make the time seem a little bit shorter when you look back on it.

[0:07:04.6] MB: I find it so fascinating, the idea that it’s sort of a proportion of your life, right? Like you said, if you’re a six year old, one year is a massive portion of your life, whereas the older you get, a year is sort of incrementally less and less of your total life experience.

[0:07:19.0] BD: Thanks for the reminder.

[0:07:23.7] MB: You know, one of the things that you said I found really fascinating is the idea of landmarks, and how our memories are formed by unique new experiences. I once heard an example of a dinner party and someone was saying, “How can you make a dinner party more memorable?” And they said, “Instead of having everybody sit in the same room and listen to the same music for four hours, change the room you’re in and change the vibe, change the music every hour.” So instead of having kind of one memory that your brain lumps together, you suddenly have four distinct memories that feel longer even though it’s the same amount of time.

[0:07:53.3] AM: Yeah, that sort of thing is great and, by extension, I think people should be a little bit mindful of trying on some new experiences, trying out some new things in order to create those landmarks in your daily life, so that it’s not just remembering the dinner party, it’s also remembering October.

[0:08:13.7] MB: That touches on something, this is not a question from Brain Briefs but something I know you’ve talked about, which is kind of the importance of openness to new experiences. I’d love to hear a little bit about that and why it’s so relevant.

[0:08:24.0] BD: Yeah. Well, you know, I mean. In most of our lives, this is a good thing to follow up on, what you just asked about the passage of time. Our brains make memories when there are things to pay attention to that we need to pay attention to. The more predictable our lives are from day to day, the less our brains need to pay attention because we know what’s going to happen and it pretty much happens the way we expected it to.

There’s not much to really think about or to lay down memories for. When you create new experiences for yourself, and Art mentioned this a couple of minutes ago about aging. When you create new experiences as you age, you’re creating more memories that make your life seem more full and more interesting and more engaging.

I think often, we underestimate how much new experiences actually can do for us for our mood or sense of wellbeing and everything, but we have to acknowledge the fact that many people are not so open to new experiences. They like routines and they like to know what’s coming up. In everybody’s life, the challenge is to find a balance, a personal balance for you about how much newness, how many new experiences do you want in a given span of time, and how much do you want to rely on the predictable things that you know are going to happen every day? 

I think if anybody examines their own life, I mean, certainly for me, there are routines that I have in my day that I like very much, the fact that those are routines. But having the job that I have and the job that Art has, we get to experience a lot of new things in any given week and that also makes our lives seem that much more energized and vital.

[0:09:55.1] AM: The thing is, you have to remember that, as Bob likes to say, brains are efficient, and he usually follows that up by pointing out that efficient is another word for lazy, which means that brains really want to keep doing what they did last time. So one of the reasons why there’s such a strong driver to keep doing the comfortable and familiar thing is because it actually feels good in the moment to do that.

You know it’s going to happen, you know how it works, and so you settle into this routine and as a result, you’re often a little bit hesitant to engage in some new thing because it seems like an awful lot of work, and so we often don’t do those things. We actually do talk a little bit in the book about openness in the first chapter because, you know, Bob and I, as he said, are privileged to be in careers where we have the opportunity to do all sorts of new and interesting things.

Nonetheless, when our producer Rebecca McInroy asked us, “Hey, would you guys like a show on the radio?”, which is something we had never really considered before, we sort of stared at each other at first. I think our initial reaction was, “What? That seems like a lot of work.” But then our openness to experience kicked in and we thought, “Yeah, sure, why not?” We ended up doing this brand new thing that neither of us had ever envisioned for ourselves and it’s turned out to be a wonderful part of our lives.

I think that that first hesitant reaction is one we often give in to. But by not giving in to that and trying that new thing, we create all sorts of opportunities that we didn’t envision in advance.

[0:11:26.0] MB: In a previous talk that you guys have given, I think you shared an example of Dyson vacuums.

[0:11:32.1] AM: Yeah.

[0:11:32.8] MB: I’d like to hear that story.

[0:11:33.1] AM: Sure. So James Dyson was an interesting guy, and one of the things about him that was so interesting was that he just learned a lot of stuff about a lot of stuff without regard for why it might be valuable later. One of the things he learned about was sawmills, and most of us don’t have much experience with sawmills.

My personal experience is usually from cartoons, right? Saw blade, log, body on the log. A real sawmill has no bodies on the log in general, but definitely logs and saw blades and a lot of sawdust. What he learned about them was that the way they get rid of all that sawdust is by sucking it out of the air and then using a giant contraption called an industrial cyclone to pull the sawdust out of the air.

Now, he learned about this without any real sense of “wow, this is going to be important to me later”. Until one day he was contemplating how to make vacuum cleaners work more effectively, and in particular, how to keep the bag of a vacuum from filling up and getting its pores clogged in ways that lessen its efficiency, and he realized that you could take the industrial cyclone that a sawmill uses, build a small home version of it, and put it into a vacuum cleaner, and that that would actually eliminate the need for a bag in a vacuum.

I think what’s most important about that is we live in an era, educationally, in which we are told what to learn in our education system and then we’re told, “Learn this stuff in particular because it’s going to be on the exam,” which leads to my least favorite question as a professor, which is when students come up to me and say, “Will this be on the exam?”

After years of struggling with that question, it occurred to me that the proper answer to students is when they say, “Will this be on the exam?” I say, “Yes but it might not be my exam,” because you never know when that piece of information you learn is going to turn out to be valuable.

[0:13:24.4] BD: That really speaks to, I think, the way many people think about planning out their lives and what’s going to happen. I think there’s become an unfortunate trend, certainly among achievement-oriented people in American culture, that the thing to do is to plan out this linear trend: “I’m going to get this degree and I’m going to do this internship and then I’m going to go to graduate school and then I’m going to get this job.”

All of those plans are built around the idea that “I know now exactly what I’m going to need to do and need to learn and need to be able to do 10 years from now”. That is a fiction, right? Everyone we know, and I do mean everyone, who is really successful at what they do knows a lot, as Art said, about a lot of things that, when they learned them, really, there was no indication that that would be one of the central things that would allow them to be successful.

So people, whether they’re college students or even younger students or young adults who are just starting out in their lives, think, “Well, what kind of things do I need to know to be able to be successful in this thing?” Well, there certainly is a package of stuff that’s important for you to be able to function. But beyond that, the people who really excel, the people who have all the features that employers and admirers claim to want — they’re creative, they’re insightful, they’re good problem solvers — didn’t get there through a linear path of activities and learning experiences.

They got there through some circuitous path, going through some things that seemed to be pointless at the time, other things that didn’t seem to be particularly interesting, other things that were fascinating but maybe weren’t going to be useful and then ended up being useful. I think the openness to experience idea really is about that issue, about exploring things that you might be curious about, that might be interesting to you, that might be enlightening in some way, even without the guarantee that in the long run it’s going to be useful.

[0:15:17.0] AM: Just to follow up on Bob’s point for a second. One of the things that’s really important is, I think a lot of people tend to edit their life story in the forward direction. Meaning, they have this idea of what their life is going to be like and then they seek experiences that are consistent with that idea of where their life is going and they avoid experiences that don’t seem to fit the narrative that they’re creating.

The problem is that when you look at the life stories of successful people, that life story generally only makes sense when you look back on it. In the forward direction, it’s pretty chaotic. They tried all sorts of things, some of which worked out, some of which didn’t, some of which turned out to be important, some of which didn’t, and in the moment, it was often very difficult to determine what the pivotal pieces of learning were, what the pivotal experiences were. Yet they were just open to trying those things, knowing that some number of those were going to turn out to be valuable in the future.

[0:16:10.8] MB: I think that’s such a powerful insight and something that I think you guys did such a good job explaining and really unpacking for the listeners. In the vein of something you touched on a little bit earlier, the idea of the brain’s kind of efficiency or laziness, another question that you asked in the book is, “Is our memory doomed to fail?” And I’m really curious what you think about that.

[0:16:30.8] AM: Bob, do you remember when we wrote about that?

[0:16:32.3] BD: I can’t remember a thing. I don’t know. I mean, the short answer to this, and this is how you turn little ideas into a book, you have a short answer and then you talk about it for the next six pages. But the short answer is, well, our memories are doomed to decline in terms of the retrievability of things in our memory.

My favorite thing Art says, and I see we’re both saying each other’s lines on this podcast, is that, you know what? By the time you reach your mid-20’s, your brain starts the long and slow decline. That’s the bad news. The good news is that the decline is long and slow, even though there are certain diseases and injuries and other kinds of things that lead to rapid declines in memory and cognitive function.

For a typical human being who is relatively healthy, that decline is so slow that it’s mostly imperceptible, even though, as we get older, because we’re attuned to the idea that our memories are likely to fail, we are on heightened alert to notice every instance when we can’t find our keys or can’t remember somebody’s name or whatever it happens to be, when in fact those are things that have probably been a part of our lives for many years. It’s just that as we’re getting older, they seem to loom larger in our perception.

[0:17:46.4] AM: Yeah, the fact is, we’ve been forgetting things our entire lives and we don’t start worrying about that forgetting until we get older, because we believe that that is now a sign of an impending cognitive apocalypse. I always like to point out, I have three kids and when they were younger, they would constantly forget stuff. They’d forget to do homework, they’d forget to take out the trash, they would forget all sorts of stuff, and I like to say that at no point did any one of them ever say, “Wow, I just had a senior-in-high-school moment.”

Then you get older, you turned 50 or whatever age it turns out to be for you and you forget something and now you think well it’s over. It turns out that one of the worst things you can do for your memory as you get older is to worry about your memory. What the studies show is that older adults who are worried that their memory is getting worse perform worse on memory tests than people who are getting older and don’t worry about their memory getting worse.

You can even induce that in a study, you can induce that worry about your memory and see that effect. What this means is, relax. The fact is yeah, look, studies show that if you want to know where somebody’s cognitive peak is, that long, slow cognitive decline means that in your 20’s, you process information fastest and you remember new things the quickest. In terms of what makes you really smart, because that has to do with what you know, you’ve accumulated lots of knowledge over the course of your life. 

So the people who are actually acting most intelligently, tend to be people in their 60’s and 70’s because they have a huge base of experience and knowledge that they can draw from. Yeah, there might be a couple of things here and there that they have forgotten but that huge store of knowledge actually gives them an advantage over younger people. In many ways, younger people need to be faster because they don’t know as much.

[0:19:33.1] MB: The processing power itself kind of slows down a little bit but the benefits of the accumulated wisdom and knowledge, essentially outweigh that slowdown for a number of years?

[0:19:43.0] BD: Particularly for people who remain mentally active, right? We know very clearly that the more new things you continue to learn throughout your life and the more new things you experience, the longer the deficits in memory that begin to accrue are held at bay. They don’t become noticeable to you, because the way we retrieve memories from our memory store is by way of all of the things that each memory is connected to, right?

So the more interconnections you have among the things in your head, the easier it is to retrieve them. If you’re experiencing new things, one of the things that that’s prompting your brain to do is to create new connections among things that may be related in ways that when you learn them 10 years ago, you didn’t really recognize that relationship and now you do.

As Art was saying, the advantage of older adults, and, being one, I’m happy to claim this advantage, is that not only do I have a lot of stuff in my memory, but that stuff is organized in a way that lets me access it in ways that are very advantageous. We talk often about why you would have people memorize a lot of things when you’ve got an encyclopedia, a map of the earth, in your pocket, in your phone. You can retrieve all kinds of information from the phone.

But the issue with that is, you can only work with so many things at a time in your so-called working memory, the processing part of your memory. The more time it takes you to get the stuff you’re going to stick in your working memory, the slower you are. If you’ve already memorized some things and you’re pulling out information that’s already in your memory, I’m sure it’s clear how much more efficient that would be than having to start typing on a keyboard or on a phone to go and find something out.

[0:21:24.1] AM: The other thing is, the brain has so many great ways of accessing that information based on the similarity between the situation you’re in right now and stuff that you’ve learned before. Whereas if you’re trying to find that information on the computer, you have to find the right question to ask. Had Google existed in the late 1970’s when Dyson was thinking about trying to remove the bag from the vacuum, if he had been able to Google “how do you get rid of the bag in a vacuum cleaner”, he would have gotten a whole bunch of websites and probably educational videos about how to change the bag in your vacuum.

But at no point would any of those sites have said, “Oh and by the way, consider replacing that bag with an industrial cyclone.” You got to have that knowledge in your head if you’re going to do really interesting stuff.

[0:22:10.9] MB: One of the things, I’m a huge fan of Charlie Munger and we talk about him a lot on the podcast, and he talks about the idea of mental models and organizing your memories and your knowledge in a kind of coherent latticework that is easily accessible. I think that’s such a great point.

[0:22:27.2] AM: Yeah.

[0:22:27.5] BD: Yeah.

[0:22:28.3] MB: On the idea of remaining mentally active, one of the questions that you guys touched on, and something I’m really curious about, is: do brain games work?

[0:22:37.0] AM: Shortest chapter in the book.

[0:22:40.2] BD: Well, if “work” means do they help you learn to play brain games, absolutely they work. Whether they do anything beyond that, there’s not a lot of evidence that that’s the case.

[0:22:52.7] AM: It turns out that brain games tend to focus on very specific tasks, and they were well intentioned at first, right? I think the idea was that we know, for example, that this concept that Bob was talking about, working memory, the amount of stuff you can hold in mind, is related to performance on all kinds of tests of intelligence and things like that.

There was a real interesting question of, if we could expand your working memory capacity, would that in fact make you smarter? But it turns out that there isn’t really a compelling way of changing the brain’s architecture in a way that increases that working memory capacity in a way that creates general intelligence.

As Bob was saying, what you learn when you play these brain games is how to play the game. But you may as well, if you’re going to practice something, you may as well practice something that you may actually encounter again later outside of the context of sitting on your phone or your computer.

[0:23:49.5] BD: Yeah. You know, for anybody who enjoys brain games just for the fun of the game, well then great. They should play whatever things they want to download. I’m an Angry Birds fan but nobody claimed that that was a brain game, right? If you think about what really engages the mind in a way that develops thinking, it’s not just responding to other things, but it’s creating new things on your own. 

People who read have a different experience than people who write, because writing requires a different set of activities in your brain than reading, watching a good video, or whatever it happens to be, which are mostly receptive kinds of responses. We know that brains are trying to figure out what they need to do. If you’re engaged in something where you’re receiving input from somewhere else, and it really doesn’t matter what you do, this stream of input keeps coming, well then, there’s not really a lot for your brain to be engaged in.

But if you happen to generate something on your own, it engages not only the parts of your brain that have to control whatever motor activity has to do the stuff, but it also requires you to draw from different parts of your memory that might not even have been connected before, because of the nature of the task you’re trying to accomplish.

I’ll let Art talk about this too, but one thing that springs to mind is that Art as an adult had always wanted to play the saxophone, and rather than waiting until his family was surrounding him on his deathbed, whispering, “I wish I had played, I always wanted to play the saxophone,” he actually went out and learned to play the saxophone.

I’ll let him talk about that experience a little bit.

[0:25:24.5] AM: Yeah, I think that’s absolutely right, you know? As Bob points out, it’s really important to engage in activity. In fact, B.F. Skinner, who is one of the granddaddies of behavioral psychology, kind of gets a bad rap in modern times because there were limitations to behaviorism. But one of his fundamental insights was that in order for the brain to learn something, you don’t just expose yourself to information, you also engage in activity.

Activity was a fundamental part of the learning process that he was working on, and I think that’s something that’s actually gotten lost a little bit. As Bob was saying, I think it is really important for us to continue to do that throughout our lives, and so when I was in my mid-30’s and was thinking about stuff I would have always liked to do, I had read some research on regret, actually. The research on regret shows that if you ask a bunch of college sophomores what they regret, it’s almost exclusively dumb stuff they did, like getting drunk at a party. But if you ask older adults, people in their 70’s, 80’s and 90’s, what they regret, it’s almost exclusively stuff they didn’t do.

One of the reasons that data point is so important is that we all have a remarkable mental capacity for time travel, where we can project ourselves to the end of our lives and then look back and ask, “Is there something I would regret not having done?” For me, one of those things was I had never learned to play the saxophone, and so in my mid-30’s, I got up one day and said, “All right,” went out and found a teacher, bought a saxophone, and set the fairly realistic goal that in 10 years I wouldn’t suck. That’s worked out okay. I’m in Austin and I’m in a band, which is almost obligatory if you live in Texas.

[0:27:03.5] MB: As a corollary to kind of thinking about brain games, and by the way, actually before I say this, I love the point that you guys made about the critical importance of active learning and not just sitting there passively, whether it’s watching YouTube or reading or whatever it might be, but really engaging your brain in the learning process.

I’m curious, writing as you guys touched on is obviously kind of one potential way to do that. But for somebody that’s maybe outside of school that’s graduated, that’s in the working world, what are some ways that we can kind of actively learn and really engage with information instead of just being passive consumers of it?

[0:27:38.0] BD: I think, if you’re in a community that’s large enough, there are various clubs where people who share a given interest can go and engage together in something. It doesn’t necessarily have to be an intellectual-only task or even a musical task. There are many community choirs that people can sing in, if music is what you’re into and what you’d like to do. Some people take up a new sport. If they never played handball, they learn to play handball, or they learn some other skill that requires some effort, and one of the things that Art and I talk about a lot is that learning is effortful if it works. 

If you don’t feel like you are putting much effort into something, you’re probably not learning as much as you might think you are, or as much as you are intending to. I think if you are engaging in something that makes you happy, like for Art playing the saxophone, well then the effort is well-spent because you feel like, “My God, an hour ago I couldn’t do this and I’ve been practicing for an hour and now I can do this. That’s a pretty cool thing, and it’s enjoyable because I like music and I like playing the saxophone,” and when you contrast that to a brain game, you say, “God, my score an hour ago was X and my score now is X plus whatever value. Okay, and what?”

[0:28:55.6] AM: “I’m going to call mom!” 

[0:28:58.9] BD: Yeah, right. 

[0:28:59.8] AM: I think that is absolutely right, and the fact is that technology provides all sorts of opportunities for people to be more active in the way that they learn. So 25 years ago, if you wanted to practice your writing you might keep a journal, but for many people just keeping a journal or writing something that you kept to yourself wouldn’t necessarily feel that rewarding. Now you can go on the internet and have a Google Blogger site set up in eight minutes. 

And then you can start writing and putting it out there for people to see, and so there are all of these opportunities to engage with material that you think is important and interesting, to write about it, and while you may have the opportunity to educate or influence others with that, you are also solidifying your own knowledge by engaging with it in that active way. So I think there are just more avenues for doing that that don’t require just sitting and playing little games. 

[0:29:54.9] MB: So changing directions a little bit, I’m curious, one of the other topics that you guys talk about is the idea that we “only use 10% of our brains”. I’d love to hear your insights on that. 

[0:30:06.4] AM: Yeah, well that is one of the great myths that’s out there, and as a cognitive psychologist, it’s probably the question I get asked most frequently in some form or another, and so one of the things we wanted to do is understand where that sentiment came from, because of course we actually use all of our brain all of the time. It’s an extraordinarily energy-hungry organ. It’s about 3% of the human body weight, but it uses 20 to 25% of someone’s daily energy supply. And that’s really the amount of energy that’s required just to keep the lights on. 

The physiological processes that are required to keep the brain active are very expensive from an energy standpoint, which is why most beings on the planet don’t have large brains relative to their body size. So where does this myth come from? It may come from one of two places. One is that early neuroscientists, when they were exploring the brain, found that only a small fraction of the cells in the brain are neurons, 

the ones that actively carry signals, and most of the rest are support staff, glial cells and other things that support what the brain is doing. So you could argue that only about 10% of the cells in the brain are the ones actively engaged in the thinking process, and a lot of the rest are cells working behind the scenes. But another issue has to do with brain capacity.

One of the amazing things about the human brain is that we’re continually able to learn stuff and the brain doesn’t get full. There isn’t some day on which you try to learn some new thing and your brain says, “Sorry, can’t do that, can’t learn anything else,” and so a number of writers, from William James on forward, have made the point that we may very well only use a small fraction of our capacity for thinking, and so that 10% number may reflect that also. 

[0:31:55.8] MB: Another question that I thought was interesting out of the book is, “Does listening to Mozart make us smarter?”

[0:32:02.8] BD: So wouldn’t that be lovely if it did? I’d be so smart, I listen to Mozart all the time. Like many things in the sciences, and Art and I talk about this in many different contexts, somebody publishes an article that is caught by the media and portrayed in a way that’s not quite as circumspect as it should be, and then it just takes off. In 1997, I think it was, this article came out, almost 20 years ago now, where these psychologists in California had people listen to Mozart and then take a spatial reasoning test, which is one dimension of IQ. The people who listened to Mozart got higher scores than the people who didn’t, and it sort of became the Mozart Effect. 

Now the term “Mozart Effect” is copyrighted, and people publish things that they sell for babies and all this kind of stuff, but actually, when you look critically at the data, there’s no evidence that listening to Mozart really does anything that doing a lot of other things wouldn’t do. There was one study that I don’t think was ever published, but this guy put it up online. He had people stare at a moving computer screensaver and their scores went up as much as they did listening to Mozart. So a lot of it has to do with… 

[0:33:12.7] AM: The flying toaster effect. 

[0:33:14.1] BD: Yeah, right. Exactly. So a lot of this has to do with arousal and attention, and what we know, basically, is that if you’re going to stimulate somebody such that they might perform better on some cognitive task, it depends on the person. For people who don’t like Mozart, if you make them listen to Mozart they’re not going to perform better. They’ll probably perform worse. So what people are actually responding to are ways to heighten arousal and heighten attention. 

You would understand how that would be evolutionarily a smart thing for brains to do, right? When you’re aroused in some way, you’re a little more attentive, you’re thinking a little faster. I mean, all those things that allow us to navigate the world are in play here, but like many things that sound too good to be true, this is too good to be true. 

[0:33:57.9] AM: And I want to follow up on one thing, because if you juxtapose playing brain games and listening to Mozart you also get this other piece, which is that a lot of times, we want to find ways of getting something for nothing, right? We all know from school that in order to get a good grade on a test, you have to read the textbook and answer some questions and study, and start studying early. We know that, but what we keep hoping is that there’s an easier way. That if we could only put the book under the pillow, or let it play while we’re asleep, or listen to Mozart, or play this fun video game, then that would obviate the need to do the hard work that’s required to learn stuff. 

And what I tell any student that I teach in a cognitive psychology class is that psychology confirms all of your worst fears about studying. You have to do the work, and while it may, at the front end, seem unappealing to have to take that big book down and slog through it, that is in fact what you have to do in order to learn stuff. You have to actually do the work and face the knowledge. There really isn’t a shortcut, but man, wouldn’t it be great if there were? And that’s, I think, what a lot of people respond to when they see effects like that. 

[0:35:10.4] MB: And we’ve had previous psychologists on the show who have talked about the exact same phenomenon, which is that maybe instead of “get rich quick” schemes, people are constantly looking for “get smart quick” schemes, and the reality is the way to become smarter, the way to become a better decision maker, is to just put in the work, and it’s a long journey. It’s a challenging journey, but at the end of the day it’s one that’s really worthwhile. 

[0:35:31.9] BD: I think, Matt, what leads people to be attracted by the idea of brain games, or whatever other thing that offers some promise of making you smarter or more creative or whatever, is that when you tell somebody, “We have to put in the work,” a lot of people are asking, “What the hell does that mean? Work at what? What do I do?” And I think when you look at people who are generally adept at dealing with the circumstances that they confront in their lives, those people tend to be generally curious people, right? 

They wonder about things. They say, “Well, why is that like that, and why does that thing take so much more time than this other thing does?” Or whatever it happens to be that they are considering at the moment, and that kind of curiosity is enlivening in terms of your memory, in terms of your perception, in terms of your general thinking ability. Because you’re asking a lot of questions, and what brains are willing to expend the effort to do is solve a problem, and so by creating little problems for yourself, even just asking the question, “Well, why is that?”, you’ve now got a problem to solve, and that ongoing problem solving is beneficial to your thinking over time. 

[0:36:37.1] AM: But this actually raises another point that we talk about in the book a little bit but it seems relevant here, which is we have a very strange relationship with errors and failure. We don’t like to not know stuff. We don’t like to not know how to do stuff and if you think about our education system, one of the things that it teaches us is mistake minimization. The way you get good grades in school is by getting stuff right. Not by getting something wrong and then repairing your error, which is actually what makes you smart in the long run. 

And so this is a real problem, because what it means is that a lot of people are a little bit afraid of really digging into some new thing, because they don’t like that feeling of being in this nether region in which they are aware that there’s this thing they don’t know anything about and they don’t know it yet. And I think one of the things you have to do, if you’re going to really broaden that base of experience and do the work you need to do to be smarter, is to be willing to tolerate the knowledge that, “Hey, here’s something I know I don’t know, and I’m going to work for a long period of time to repair that gap.”

[0:37:43.1] MB: And I am a tremendous fan of Carol Dweck and the book Mindset, and the whole distinction between the fixed and the growth mindset. I think it’s so important to accept and embrace your mistakes and to try to move your ego out of the way whenever you’re thinking about your own mistakes. 

[0:37:57.7] BD: I absolutely agree with you, Matt. I’m also a Carol Dweck fan, but the thing is, schools don’t make that easy, right? Because I know of very few instances where not getting things right provides you with opportunities to correct what you’ve done and actually get credit for the correction, you know what I mean? Usually what schools cultivate, as Art was saying a minute ago, is get it right when you get asked, or when the paper comes due, or whatever it happens to be. 

I think Art and I have the privilege of working at a major research university and so we get paid our exorbitant salaries to be confused most of the time. I mean we are trying to solve problems that no one has solved before and answer questions that nobody has answered before and it’s confusing and we get a lot of stuff wrong. But without the opportunity to try and fail and then retry and maybe retry many times after that, it’s impossible to make any intellectual progress.

[0:38:51.8] AM: Carol Dweck is great. Carol and I were colleagues together for a while at Columbia before she went off to Stanford and I came down here to Texas and I completely agree that that mindset of being willing to try things that may fail is so important, particularly because when we evaluate the skilled performance of other people, we discount all of the work that they’ve done. So when people hear your podcast or when they read a book that they really enjoy, they are seeing a final product of something. 

They are not seeing all of the work that went into creating that. They are not seeing all of the attempts that didn’t go as well. They’re not reading the first drafts of the prose. The way we wrote this book, in general, is I like to fill blank pages and Bob likes to edit, so it was a match made in heaven. One of the things that means is that Bob got to read a tremendous amount of half-baked prose that ultimately became what came out in the book, but nobody else gets to see that, and I think it’s important for people to realize that almost every bit of skilled performance that you see required a tremendous amount of work and effort and revision and practice to get there, and that is the critical insight underlying the mindset work that Carol Dweck does. 

[0:40:14.9] MB: So I’d love to segue into something that you talked about in the very beginning, Bob, that relates to this, which is that you said your expertise is helping people develop skills and thinking about how they form memories and how they refine their skills over time. I’d love to dig into that a little bit and some of the major lessons you’ve learned about how we can become more skilled, how we can really focus in on refining our skills over time. 

[0:40:38.1] BD: Yeah, one of the things that is central to this whole idea of becoming more skillful is you have to become more perceptive about what you’re doing. A lot of people who are practicing a skill, whatever the skill happens to be, but who aren’t noticing the somewhat smaller features of what they’re doing, really have no opportunity to improve, and anybody who watches somebody teach a really good lesson, or take a really good lesson, sees that what really excellent teachers do is help people know what to pay attention to. 

And that’s a big part of the teaching, telling them what to attend to, right? Because when we develop skills, it’s not because someone told us to do something and now we do it. I mean, would that it were that easy, right? But the part of our brain where skill memories are activated and play out is not something you can instruct verbally or consciously, saying, “Okay, do this now.” You have to just do it, and as we were talking about a few minutes ago, in doing it you’re going to make some errors and you’re going to have to make adjustments that are even below the level of conscious control. 

I mean, Art plays the saxophone, and the saxophone is one of the most inherently out-of-tune instruments of the wind family, in terms of the way it’s built. I mean, it is terribly out of tune. So if a saxophonist is going to play a scale in tune, and all the notes are going to be in tune, the saxophonist has to make all kinds of adjustments to the tension in their mouth and the placement of their tongue and the speed of the air, and there’s no way to tell somebody, “Now this is where your tongue comes up a little bit, and this is where you squeeze a little bit with your embouchure.” 

There’s no way to do that. What you do is you listen to the sounds that you are making, and somehow your body figures out through trial and error what kinds of things you need to do to play the scale in tune. But that’s not going to happen if somebody doesn’t hear what an in-tune scale sounds like and recognize the discrepancies between the scale they’re playing now and the in-tune scale. So that’s a real challenge. 

I think you see this if you are a golf fan. I am not a golfer, but I bet that if you really love golf and you watch pros, or you watch these videos that help you become better, one of the things you notice when you watch a great teacher, whether it’s a pitching coach or a golf pro or whatever it happens to be, and you ask, “What are they talking about the most?” is that they are getting the students to notice more about what they’re doing. Because if you don’t really clearly know what the goal is you’re trying to accomplish, and recognize the discrepancies between what you’re doing now and what you’re trying to do, well then the likelihood of improving at what you’re doing is really, really low. 

[0:43:06.5] AM: And what we know from a lot of studies is that the lower your level of performance in an area, the worse you are at judging your own performance. So the least good performers are the ones who most overestimate how good they are at whatever it was they just did, and one of the things, and Bob talks about this a lot, one of the things that expert performers are really good at is identifying all of the flaws in what they just did so that they can improve on them. I think that self-monitoring ability is so crucial for improving your skills, because you can’t fix an area you are not aware of. 

[0:43:44.8] BD: Yeah, exactly. 

[0:43:46.8] MB: That phrase, that line, is so important: “You can’t fix an area you are not aware of.” I think many times a lot of it comes from this kind of framework of mistake minimization that people are taught in school and elsewhere. There is almost a subconscious incentive to bury your mistakes, to hide from your mistakes, to pretend like, “Oh, I didn’t make any mistakes.” What are some ways that people can cultivate that self-awareness of their flaws in a way that is non-threatening to them? 

[0:44:10.9] BD: One of the most important things to do is to hang out with other people who acknowledge their flaws, and you see this in industries. My favorite example, and I talk about this a lot, is the FAA. You would think that if ever an industry wanted to hide its flaws it would be the aviation industry, because if you scared people into thinking that aviation was unsafe, then people wouldn’t stick themselves in a metal tube and allow themselves to be hurled through the air at hundreds of miles an hour. 

In fact, if you are a member of the aviation industry and you make an error, if you report that error through the system the FAA has developed within 24 hours and your error was not the result of breaking the law like coming to work drunk, then that error can’t influence your status with the company you work for. You can’t be fired, you can’t be reprimanded for that error and the reason for that is because the FAA actually takes all of those mistakes and catalogs them and uses that to figure out what changes in procedures, what changes in maintenance schedules are needed to keep aviation safe, which is why airplane flight is as safe as it is. 

The reason that this works is because the entire industry has decided that single mistakes are not the problem. The cascade of errors that leads to catastrophic failure is the problem and I think that by extension, whenever you spend time with a community of people who are willing to acknowledge their mistakes, it makes you much more comfortable in doing that yourself and I think that that’s just absolutely crucial for allowing yourself to continue to improve in all of the things you do.

[0:45:52.6] MB: I’d love to segue into a different topic just for a moment. You’ve talked about the importance of sleep. I’d love to hear your thoughts about why it’s critical to sleep and why sometimes doing things like pulling all-nighters is often not the most effective strategy. 

[0:46:07.4] AM: So we live in a chronically under-slept society, in which people think that sleep is something they’ll do when they’re dead. It turns out that you spend about a third of your life asleep, which means that it must play some important function, and it really does. The brain is actually extraordinarily active while you’re sleeping, and it’s doing several different things. 

One of the things the brain is doing during sleep is actually clearing out toxins that build up over the course of the day, partly just from the byproducts of using energy, and partly from other toxins that may come in through other activities people engage in. But on top of that, the brain is actively helping you to remember and to forget while you are asleep. So one of the stages of sleep actually helps with your skill learning. If you’re learning to play a musical instrument and you practice a scale over and over, you get a little better while you’re practicing, and then you get even better when you sleep. It actually smooths out the motor performance. 

In addition to that, there are other stages of sleep that influence what’s called memory consolidation; that is, sleep actually helps to burn in some of the most important memories. So if you study for a test before you go to sleep, then after you wake up you have better memory than if you study for that test and then stay awake for the same amount of time. So sleep ends up having a big influence there as well, and not only does it help you to remember, it also helps you to forget some of the less desirable things. 

So details of your day that were somewhat mundane tend to be lost while you are asleep, and the emotional impact, particularly the negative emotional impact, of things that happen to you will fade as you sleep, and that’s important. Because we all have things happen to us where somebody gets really angry at somebody else for something they did, and in the moment they’re really angry, but over time, and in part because of sleep, you begin to disengage your memory for the event from the emotional content of that event, which is part of what enables you to get on with your life and to do other things with those people who may have done something to bother you. 

[0:48:22.9] MB: What is one piece of homework that you would give to people who are listening to this episode?

[0:48:27.6] AM: Bob, you got some homework for people? 

[0:48:29.1] BD: I do, and you know, I think I would spend a few minutes thinking about what are the things that I have experienced in the past that bring me joy, and I would schedule those into my week. I think a lot of people do a lot of drudgery thinking, “Well, I’ll get this over with, and then a week from now, a month from now, this summer,” or whenever they are thinking about it, “I’ll schedule in a little happiness here,” and I think it’s important to schedule happiness into every day. 

That’s easier for some people than others, because some people’s lives are easier than others. They have more privileges, they have more opportunities for choice, those kinds of things. But I think irrespective of your life circumstances, to be able to put yourself in situations where you think, even if it’s for five minutes, “I will have a conversation with a friend that I haven’t spoken to in a while, or I’m going to take a walk,” or whatever it is that brings you some feeling of happiness and joy, that should be a part of every day.

[0:49:27.9] AM: Yeah and I’m going to add one thing to that, which is I think that as another piece of homework, find somebody you haven’t talked to in a while and ask them to talk in some amount of detail about what they’re doing and why they’re doing it and learn from the people around you. Learning doesn’t have to be drudgery. It doesn’t have to involve sitting in front of a big book and struggling through it. 

We learn a tremendous amount, because we’re such a social species, from the people around us, and taking the time to really sit down and have a great conversation with somebody and understand the way they think about things can be a really valuable learning experience and at the same time a joyous one, and I think having more of those conversations is a great thing to do. 

[0:50:10.6] MB: Where can people find the two of you and the book online?

[0:50:14.9] AM: I’m the designated self-promotion person in this duo. So the podcast we do, the radio show, is called Two Guys On Your Head. It can be found wherever podcasts are found, so iTunes, Stitcher. You can go to twoguysonyourhead.org. If you’re in the Austin, Texas area, of course, we’re on KUT Radio in Austin, and you can also find our book Brain Briefs pretty much wherever books are sold, except that our publisher is a division of Barnes & Noble, so it’s not available as a Kindle book. The hardcover is available on Amazon, but they refused to make a Kindle version, so the seven Nook readers have access to it. 

[0:50:53.5] MB: Well Art, Bob, I just want to say thank you so much for being on the show. I’ve really enjoyed this conversation and I know the listeners are going to get a ton out of all the incredible insights that both of you shared. 

[0:51:04.5] BD: Well thanks Matt, it’s been a real pleasure. Thanks for inviting us on. 

[0:51:07.3] AM: Yeah, this was great. Thank you. 

[0:51:09.4] MB: Thank you so much for listening to The Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I love hearing from listeners. If you want to reach out, share your story or just say hi, shoot me an email. My email is matt@scienceofsuccess.co. I’d love to hear from you and I read and respond to every single listener email. 

The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please leave us an awesome review and subscribe on iTunes. That helps more and more people discover The Science of Success. I get a ton of listeners asking, “Matt, how do you organize and remember all this information?” Because of that, we’ve created an amazing free guide for all of our listeners. You can get it by texting the word “smarter” to the number 44222 or by going to scienceofsuccess.co and joining our email list. 

If you want to get all the incredible information, links, transcripts, everything we talked about in the show and much more, be sure to check out our show notes page. You can get it on our website, scienceofsuccess.co. Just hit the show notes button at the top. We have show notes for this episode and all of our previous episodes. 

Thanks again and we’ll see you on the next episode of The Science of Success.



January 12, 2017 /Lace Gilger
Best Of, Decision Making, Mind Expansion

Making Better Decisions, The Sophomore Jinx, & The Illusion of Objectivity with Dr. Richard Nisbett

December 15, 2016 by Lace Gilger in Decision Making

In this episode we discuss the errors people make in their reasoning and how to correct them, we explain a number of statistical principles to help sharpen your thinking and make you a better decision maker, why every $1 spent on a “scared straight” program creates $400 of cost for the criminal justice system, the illusion of objectivity, why you should NOT rely on your intuition and much more with Dr. Richard Nisbett. 

Dr. Richard Nisbett is a professor of psychology at the University of Michigan. He has been awarded the Distinguished Scientific Contribution Award of the American Psychology Association, the William James Fellow Award for Distinguished Scientific Achievements, and the Donald T. Campbell Award for Distinguished Research in Social Psychology, among others. He is the author of the recent book Mindware, as well as The Geography of Thought, Think Differently, and Intelligence and How To Get It.

  • The errors people make in their reasoning and how to correct them

  • How to apply the lessons of statistics to making better decisions

  • Is your intelligence fixed and unchangeable?

  • How the industrial revolution massively transformed the way people think

  • We discuss the skills, not on an IQ test, that you must have to be able to function effectively in today’s age

  • Why job interviews are totally useless and have almost no correlation to job performance

  • How misunderstanding the law of large numbers can lead you to make huge mistakes

  • Why does the rookie of the year almost always have a worse performance the following year?

  • Understanding regression to the mean and how it creates extremely counterintuitive conclusions

  • Why Performance = Skill + Luck

  • Why deterministic thinking can drastically mislead you in finding the root cause of a phenomenon

  • We explain a number of statistical principles to help sharpen your thinking and make you a better decision maker

  • The concept of "base rates" and how they can transform how you think about reality

  • We walk through a number of concrete examples of how misunderstanding statistics can cause people to make terrible decisions

  • If you’re like most people, then like most people, you think you’re not like most people (but you are)

  • Why every $1 spent on a “scared straight” program creates $400 of criminal justice and incarceration costs

  • Why the “head start” program is a massive failure and what we could have done about it

  • How you can use the experimental method to make data driven experiments in your life

  • The illusion of objectivity - Why you should NOT rely on your intuition

  • How we massively distort our perception of reality and why our perceptual apparatus can easily mislead us

  • How many of the structures we use to understand the world are highly error prone

  • Why we are amazing at pattern detection but horrible at "covariation detection”

  • Why the traditional Rorschach test is bogus and doesn't actually produce any results

  • Why you likely are “horrendously miscalibrated” in your assessments of people’s personalities

If you want to make better decisions - listen to this episode! 

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Book] Bell Curve: Intelligence and Class Structure in American Life by Richard J. Herrnstein and Charles Murray

  • [Scholarly Article] Objectivity in the Eye of the Beholder by Emily Pronin, Lee Ross, and Thomas Gilovich

  • [Book] The Signal and the Noise: Why So Many Predictions Fail--but Some Don't by Nate Silver

  • [Book] How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg

  • [Book] Thought and Knowledge: An Introduction to Critical Thinking (Volume 2) by Diane F. Halpern

  • [Book] Thinking, Fast and Slow by Daniel Kahneman

Charlie Munger Resources:

  • [Book] Poor Charlie's Almanack by Peter D. Kaufman, Ed Wexler, Warren E. Buffett, and Charles T. Munger

  • [SOS Episode] How To Stop Living Your Life On Autopilot, Take Control, and Build a Toolbox of Mental Models to Understand Reality with Farnam Street’s Shane Parrish

  • [SOS Episode] The Psychology Behind Making Better Decisions with Global Financial Strategist Michael J. Mauboussin

  • [Farnam Street Blog] Creating a Latticework of Mental Models: An Introduction

  • [Safal Niveshak article] Mental Models

  • [Lattice Work article] Charlie Munger on Elementary Wisdom and Mental Models by Brian Hertzog

EPISODE TRANSCRIPT

[00:00:06.4] ANNOUNCER: Welcome to the Science of Success with your host, Matt Bodnar.


[00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee, and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performers tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion.


In this episode, we discuss the errors people make in their reasoning and how to correct them. We explain a number of statistical principles to help you sharpen your thinking and make you a better decision maker. We look at why every $1 spent on a Scared Straight program creates $400 in additional costs to the criminal justice system. We talk about the illusion of objectivity and why you should not rely on your intuition, and much more, with Dr. Richard Nisbett. 


The Science of Success continues to grow with more than 675,000 downloads, listeners in over a hundred countries hitting number one in New and Noteworthy and more. A lot of our listeners are curious how I organize and remember all this information. I get tons of emails and comments asking me how to keep track of all the incredible knowledge I get from reading hundreds of books, interviewing amazing experts, listening to a ton of podcasts, and much more. 


Because of that, we’ve created an amazing free resource for you. You can get it completely free by texting the word “smarter” to the number 44222. It’s a guide we created called How to Organize and Remember Everything. Listeners are loving it. We’re getting emails all the time from people telling us how it has changed their lives, how it has helped them stay more organized and keep track of all of the stuff that they’re learning. Again, you can get it completely free by texting the word “smarter” to the number 44222 or by going to scienceofsuccess.co and putting in your email. 


In our previous episode, we discussed the radical mismatch between your intuitive sense of risk and the actual risks you face. We looked at why most experts and forecasters are less accurate than dart-throwing monkeys. We talked about how to simply and easily reduce your risk for the most major dangers in your life. We explored the results from the Good Judgment Project, a study of more than 20,000 forecasts, and we talked about superforecasters: what they are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts, and much more, with Dan Gardner. 


If you’re thinking about planning for next year and you want to be able to predict the future better, listen to that episode. 


[00:02:46.4] MB: Today, we have another fascinating guest on the show, Richard Nisbett. Richard is a professor of psychology at the University of Michigan. He’s been awarded the Distinguished Scientific Contribution Award of the American Psychological Association, the William James Fellow Award for distinguished scientific achievements, and the Donald T. Campbell Award for distinguished research in social psychology, among others. He’s the author of the recent book Mindware, as well as The Geography of Thought: How Asians and Westerners Think Differently, and Intelligence and How to Get It. 


Richard, welcome to the Science of Success.


[0:03:16.0] RN: Thanks, glad to be here.


[0:03:17.3] MB: Well we’re very excited to have you on here today. So for listeners who may not be familiar, tell us a little bit about yourself?


[0:03:22.9] RN: Well, the thrust of my career has been studying reasoning, and fairly early on I got interested in studying the errors that people make in reasoning. After I had been doing that for a while, I began to think, “Well, can I correct these errors?” And at the time, we’re talking now about the ’70s and early ’80s, psychologists were quite convinced that there really wasn’t much you could do to change the way people think: reasoning is done at a very concrete level, and you can’t just insert abstract rules and expect that to affect reasoning.


So I bought that, and I don’t know exactly why I decided to test it anyway, but I did. I started to see if I could make people more rational, make better judgments and decisions, by teaching them rules like the law of large numbers, the concept of regression, how to think probabilistically, microeconomic concepts like cost-benefit analysis, and so on. I found, first of all, that people do learn in college, and this is counter to prevailing theory. They do learn some general rules that improve the way they think, although it’s very spotty. Different majors are learning different things. 


So then I decided to see whether I could, myself, teach them these rules in a brief period of time, and what I found was that I can teach these kinds of rules in 15 to 20 minutes and they stick with people for at least a few months after that. I know that because I call them in the guise of a survey researcher asking their opinions about various things, and if they use the rule they should, that will come out in the answer. Sure enough, people do, to a very significant extent, retain those kinds of rules. So that gave rise to the book that I wrote, which is brief, breezy descriptions of rules and the concrete problems they can be applied to.


[0:05:29.1] MB: I’d love to talk a little bit more about Mindware. Tell me what inspired you to write the book?


[0:05:34.1] RN: Well, it was this discovery that people are learning something about probabilistic concepts, statistical concepts, experimental methodology concepts, micro economic concepts, some philosophy and science concepts, logic, et cetera. They’re getting some of that in school, they’re not getting nearly as much as they could easily if professors just spent a little more time.


My joke about statistics courses is that they’re taught so as to prevent, if at all possible, the escape of statistical concepts into everyday life. If professors of statistics just gave a few concrete examples, I now know that would make a huge difference. They probably would say, “Oh no, you know, we don’t have that kind of time. This material has to be gotten through.” That’s not the way to think about it, because the concrete examples from everyday life actually feed back into an abstract understanding of the principles. So they actually could get their teaching done quicker if they gave more ordinary examples. My book is trying to do that: what your statistics teacher didn’t do for you, and, if you haven’t had statistics, some very powerful concepts that will save you a lot of grief in life.


[0:06:54.9] MB: I’m curious, are you familiar at all with Charlie Munger and his concept of the idea of mental models and sort of the notion of arraying mental models on a latticework of understanding in terms of kind of building a much richer toolset to understand reality?


[0:07:10.1] RN: I’m not, that sounds like something I should know about.


[0:07:12.6] MB: Definitely recommend checking him out. After the show, I’ll shoot you a few links, he’s amazing and we’ll throw some things in the show notes as well. But he is one of my favorite thinkers about kind of a very similar concept, which he calls mental models. Which is basically the idea of, in order to accurately understand reality, we have to master the fundamental principles of all the major disciplines that govern reality, everything from the physical sciences to statistics, mathematics, economics, especially psychology and kind of build a robust framework that incorporates all of those into truly understanding reality.


[0:07:44.2] RN: That sounds like a great idea.


[0:07:45.3] MB: It sounds like in many ways, that’s kind of the same path that you embarked down in terms of taking a lot of these concepts that get easily misunderstood and making them so that people can really grasp them in a simple and understandable way.


[0:07:57.0] RN: Exactly.


[0:07:59.0] MB: I’m curious, one of the things that I’ve heard you talk about in the past is how both the sort of industrial revolution and then the kind of information revolution changed the way that we need to think. I’d love to hear you kind of explain that concept.


[0:08:11.8] RN: I’d be delighted. I’d say 15 or 20 years ago, there was a book written by Charles Murray and Richard Herrnstein, very famous at the time, called The Bell Curve. It’s all about intelligence, and it basically says intelligence is fixed at birth. It’s primarily a heritable thing; there’s not much you can do in the way of the environment to improve intelligence, or IQ. Oh, and incidentally, some ethnic groups have lower IQs for genetic reasons than others.


Every single thing I just said is wrong, and a book I wrote called Intelligence and How to Get It shows how wrong all of that is. But I would also say that intelligence is broader than what IQ tests test. What I began to be aware of, doing historical studies and studies with people who have had no formal education and no experience with the modern world, is that the industrial revolution absolutely changed the way people think. I mean, profoundly. 


Prior to the late 18th century, people were not really capable of thinking in abstractions; they were not capable of applying logical rules to thought; they were not capable of counterfactual reasoning: “This is not the way the world is, we both know, but suppose the world were that way, what would follow from that?” That was impossible for them, we know, because people with so little education today are unable to do those things. 


So the industrial revolution taught people the three R’s, reading and writing and arithmetic, and then, for free, we got all of these abstract reasoning skills. We continue to improve in those kinds of skills. Over the last 70 years, IQ has increased by more than a standard deviation. Where the average on a test was 100 seventy years ago, the average on that same test would turn out to be 115, 116, 118 today.


That’s the difference between somebody we would expect to graduate high school and maybe have a year or two of junior college, versus someone we would expect to surely finish a four-year college and possibly go on to postdoctoral work. That’s the kind of difference we get as a function of additional cultural changes, improvements in education, and so on. Even a lot of activities contribute that are undertaken not for instructional reasons but for fun.


I Love Lucy was a great TV show, but it didn’t place many demands on your intellect. The TV shows I watch now, I can’t keep up with them. Who is that guy? What is he trying to achieve? That kind of entertainment is much more sophisticated today, and of course we have computer games. We also know that some of those are improving intellectual skills.


Okay, so that’s the history of IQ and some kinds of intelligence that are related to IQ. But we live in a new era, the information age. The IQ skills are still highly relevant, but there are a lot of skills not represented on IQ tests at all that you have to have to be smart enough to function in our age. I’ll give you one example. 


If I ask people to tell me what they think would be likely to happen if you looked at the boy births versus girl births per day in two hospitals in a given town, one with about 15 births per day and one with about 45 births per day, and then you ask, “At which of these two hospitals do you think there would be more days in the year when 60% or more of the babies born were boys?” Now, half the people will tell you it makes no difference, and of the remainder, about half will say it would be the larger hospital that would have more such days, and about half say it would be the smaller hospital.


In actual fact, if you think about 15, well, at 60% that would be nine boys versus six girls. That’s the kind of thing that can happen frequently. If you had one more girl birth instead of a boy, then it would be eight to seven, and you can’t do any better than that in coming close to 50/50, which we assume is the population value for the percent of boys born. With 45, however, it’s really very unlikely. You’re now looking at a 60% split, which you would see only three or four times a year.


I mean, it’s because the larger your sample, the closer you come to the population value; if you have a very small sample, you can be way off. Suppose there were three births. You’re going to automatically be hugely off the population value. As long as you’re sampling randomly, which basically is the way to think about births, the more cases you have, the more it’s going to resemble the population value. 
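For readers who want to check the hospital arithmetic, here is a short exact calculation. This is an editorial illustration, not something from the episode; it treats each birth as an independent 50/50 draw:

```python
from math import comb

def prob_at_least_60pct_boys(n, p=0.5):
    """Chance that, on a day with n births, 60% or more are boys,
    treating each birth as an independent Bernoulli(p) draw."""
    cutoff = (3 * n + 4) // 5  # smallest integer k with k/n >= 0.6
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(cutoff, n + 1))

p_small = prob_at_least_60pct_boys(15)  # the ~15-births-per-day hospital
p_large = prob_at_least_60pct_boys(45)  # the ~45-births-per-day hospital
print(p_small, p_large)  # the small hospital hits 60%+ boys far more often
```

With 15 births, nine or more boys happens on roughly three days in ten; with 45 births, reaching the same 60% threshold is several times rarer, which is the law-of-large-numbers point.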


So that’s a useful kind of thing to know for that kind of numerical example and there’s lots that happens in the world that you’ll think about differently if you know it. But I apply the law of large numbers to the following kind of problem. I say to people, I have a friend who is an executive and he told me that the other day he interviewed someone who had great recommendations from his previous companies and he had a great record of performance.


But in the interview, the guy seemed kind of lackluster; he didn’t have any very trenchant things to say about my friend’s business, so he decided not to pursue the guy any further. If I say this, people say, “Well, yeah, okay, fine. What’s interesting about that? Happens all the time.” Well, it does happen all the time, but it’s a huge mistake, because it turns out that the interview correlates with subsequent performance in college, in graduate school, in medical school, in officer training school, and in every business and profession that’s ever been examined to the tune of 0.1.


That’s an extremely low correlation. It’s enough to take you from picking the better of two candidates at 50/50, a coin toss, up to about 53%. It’s a trivial gain, and what’s horrendous about that is that typically the folder has a huge amount of information: a grade point average, previous performance, letters from people who have known the person for hundreds or thousands of hours. That’s a huge amount of evidence, coming much closer to the population value on average than you would ever get from an interview. 


So it’s actually a mistake to interview at all, because the gain is trivial unless you could confine the interview to its appropriate place in the decision about whether to hire the person, which is essentially no more than a tie breaker. But we’re not capable of doing that. I’m not capable of doing that. When I interview people, I have this same illusion, you know, “I really learned a lot about this person’s intellect and personality,” and it’s baloney, I haven’t. 


But to make matters even worse for the interview, it isn’t even a sample of the population you’re after. It isn’t a sample of job performance or school performance. It’s a sample of interview performance, and those are not at all the same thing, we know empirically. Some people, you know, extroverts are great at interviews and introverts are not so good. But you typically are hiring for skills other than a personality trait. So that would be a typical kind of way that I teach in the book.
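The jump from a 0.1 correlation to roughly 53% can be checked with a standard result for a jointly normal predictor and outcome: the chance of correctly ranking two candidates is 1/2 + arcsin(r)/π. The formula is my gloss, not something stated in the episode, but it reproduces the figure:

```python
from math import asin, pi

def prob_pick_better(r):
    """Chance of correctly ranking two candidates using a predictor that
    correlates r with true performance (assumes bivariate normality)."""
    return 0.5 + asin(r) / pi

print(prob_pick_better(0.1))  # about 0.53: barely better than a coin toss
print(prob_pick_better(0.5))  # even r = 0.5 only gets you to about 2/3
```

The arcsine makes the payoff from weak predictors nearly linear and tiny, which is why a 0.1 interview buys you almost nothing over flipping a coin.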


I mean, here is a principle stated in some highly informal way, and here are some concrete examples it can be applied to. For many of the things taught in the course, I know that this kind of instruction is powerful and affects the way people think. There are things in the book where I don’t know that, but I have a pretty good idea that the principles are sufficiently similar psychologically that everything in the book, I do believe, can have a big impact on the way people think, and that’s the kind of thing that’s necessary for the information age. 


300 years ago we didn’t have that kind of information; we didn’t have the folder that we do now. But people need to be able to collect information, analyze information, analyze arguments based on information, persuade other people based on information, and know how to generate reliable information from assessments or from interventions of various kinds. So you’re not information-age smart if you don’t know the kinds of things that I’m talking about.


[0:17:11.9] MB: You know, one of my favorite examples of misunderstanding sample size is one that I believe Kahneman uses about kidney cancer rates. He starts out with this vignette about how rural counties have the lowest incidences of kidney cancer, and then he asks people to explain, “Okay, why is that the case?” And they think to themselves, “Oh, you know, maybe it’s the fresh air, there’s not as much pollution, they’re spending more time outside, et cetera.” 


Then he goes, “Okay, rural counties also have the highest rates of kidney cancer,” right? Different rural counties. The highest and lowest rates are both in rural counties, and people come up with all these explanations in just the same way, when in reality both facts are just statistical artifacts: rural counties have small sample sizes, and so they have bigger outliers in their cancer rates.


[0:18:00.9] RN: That’s a great example. That one was new to me; I had not known about it before I read Danny’s book. But let me give you another example of something like that. You tell people a fact: “As you may or may not know, the rookie of the year in baseball, that is, the best new player, is rarely the best player the next year. This is sometimes called the sophomore jinx. How would you explain this phenomenon?” 


People who have never had statistics will always go the causal route, the deterministic route. They will say, “Well, you know, maybe the pitchers make the necessary adjustments, or maybe the guy gets too cocky and he slacks off.” But actually, the principle of statistical regression tells us that it’s almost inevitable that the person who’s best in a given year is not going to be best the next year. Think about how that person got to be the best baseball player the first year.


Well, certainly by virtue of having a lot of talent, much more talent than the average person, but everything else went right too. He got just the right coaching; the first three or four games he played, he did extremely well, which built his confidence; he got engaged to the girl of his dreams. The next year, the great dice thrower in the sky gave him an elbow injury, so he was out for quite a while, and, sorry to say, his fiancée jilted him. The point being that around any observation we make, we’re looking at something that’s been generated by what a measurement theorist would call true score, God’s own understanding of what the facts of the matter are, plus error. There’s always error, for absolutely everything. 


Now, for some things it’s vanishingly small, but there is always error associated with every observation, and that kind of error means that when you roll the dice again for this good baseball player, you’re probably not going to get all aces. Everything’s not going to come up so great for this guy, because a lot of the performance you’re observing is error. Another example would be, I tell people, I have a friend, she’s a foodie, but she’s discovered that when she goes back to a restaurant where she’s had a really excellent meal, subsequent meals are rarely as good. Why is that? 


People will give you nothing but deterministic answers for that. They’ll say, “Oh well, maybe the chef changed, or maybe her expectations got so high that nothing could satisfy them.” This is again a case of regression. I mean, extreme values are relatively rare. If you think of the bell curve, for things way out there on the tail of the bell, there are not many of them out there. So another way to think about it, to massage people’s intuitions about why you should expect not to get such a great meal at a restaurant where you had a superb one before, is this: do you think there are more restaurants in the world where you would get an excellent meal every time, or more restaurants where you would get an excellent meal only some of the time?


Most people’s intuition there is it’s the second type. There are probably more restaurants where you would get an excellent meal just some of the time. Well if that’s the case, it has to be the case that if she has an excellent meal the first time, it’s not likely to be an excellent meal the next time because she’s probably sampled one of those restaurants where you can only get an excellent meal some of the time.


So the regression principle is crucial for understanding all kinds of things around us all the time. Extreme scores are rare. Expect extreme scores to regress to the mean. Think of the mean as some kind of magnet, dragging events from extreme and rare circumstances back to some central tendency, which is less extreme.


[0:21:54.7] MB: On the subject of regression to the mean, one of my favorite kind of mental models for understanding that is from a book called The Success Equation by Michael Mauboussin and he talks about envisioning that you have sort of two jars, one called luck and one called skill, which I think you would essentially call sort of true score and error. And any outcome you draw from the skill jar which is roughly a fixed quantity, and then you draw from the luck jar which is a random number, essentially and you add them together and that’s the result that you get. So any great streak is always a combination of essentially sort of tremendous skill with tremendous luck stacked on top of it.


[0:22:28.4] RN: Exactly, yeah. Great way of putting it.
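Mauboussin’s two-jar model is easy to sketch as a simulation. The numbers below (100 players, equal-variance skill and luck) are illustrative assumptions, not figures from the episode:

```python
import random

random.seed(42)

N_PLAYERS, SEASONS = 100, 2000
declines = 0

for _ in range(SEASONS):
    skill = [random.gauss(0, 1) for _ in range(N_PLAYERS)]   # fixed "skill jar" draws
    year1 = [s + random.gauss(0, 1) for s in skill]          # skill + this year's luck
    year2 = [s + random.gauss(0, 1) for s in skill]          # same skill, fresh luck
    rookie = max(range(N_PLAYERS), key=year1.__getitem__)    # year-one top performer
    if year2[rookie] < year1[rookie]:
        declines += 1

print(declines / SEASONS)  # the year-one leader usually does worse the next year
```

Nobody adjusts, nobody gets cocky; the leader declines most of the time simply because being on top usually required a lucky draw that does not repeat.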


[0:22:30.9] MB: I’d love to, actually before we do this, for listeners to kind of help them just understand this concept a little bit better, when you talk about sort of deterministic thinking or deterministic answers, can you kind of explain that concept and why it’s not always the appropriate way to think about things?


[0:22:45.5] RN: It’s never wrong to model a situation, to think about what’s going on causally with it. But for problems like the restaurant problem or the rookie of the year problem, people who are familiar with statistics won’t give a cause; they won’t go down the causal-analysis route. For example, a single statistics course is enough to get people to say, for the rookie of the year problem, “Well, maybe it was partly by chance that he did so well.”


That’s right as far as it goes. People who have had two or three statistics courses will say, “Well look, that’s an extreme score, extreme scores are rare, there’s going to be regression back to the mean.” They just never go down the causal route. 


But if you don’t have the concept of statistical regression, what are you going to do? You don’t have anything other than causal notions to draw on. A lot of statistical principles are ways of thinking about the world that don’t get you involved in the effortful business of causal analysis at all, because you realize, “Look, this thing has to be true statistically. End of story.” Not that there aren’t causal things going on, of course there are, but you wouldn’t be thinking about those things if you were aware of the regression principle.


[0:24:08.0] MB: One of the other statistical concepts that you talk about that I’m a big fan of and I think is under-utilized for explaining and understanding reality are base rates. I’d love to kind of hear your thoughts about that and maybe explain that in a way that listeners can really simply grasp it?


[0:24:23.5] RN: Right, well we often think about events using only the individuating information about that event, rather than thinking about the event as a type of event for which we may have base rate information that would tell us how to think about that particular case. That’s not a very clear way of putting it. So let me give a concrete example of the importance of using base rate and the kinds of things that can operate as base rate, should be thought of in that way.


If I ask undergraduates, again ones who have had no statistics, I tell them, “I want to tell you about somebody. His name is David L. He’s a high school senior, and he’s going to college next year, to one of two colleges which are close to his home. One is a state university where he has lots of friends, and those friends like that school very much on both intellectual grounds and personal grounds. The other one is a private college where he also has several friends, and they’re not really crazy about it. They don’t think they’re getting such a great education there, and they don’t have that many friends. 


But David L. goes to visit each of those schools for a day, and he just doesn’t have a good feeling about the state university. I mean, a couple of professors he wanted to talk to give him the brush off, and some students that he meets just don’t seem to be very interesting. But at the private college, a couple of professors actually take a personal interest in him, and he meets some really interesting kids. So which place do you think David L. should go to?”


You will never find an unwashed freshman who will tell you anything other than, “He’s got to go where his heart tells him to go; he’s not choosing for his friends, he’s choosing for himself.” But there are two things wrong with that. One is sample size. I mean, think about it: you go to a place for a day, that’s a small sample. Just by the luck of the draw you get a professor who is rushed and doesn’t have time to talk to you or isn’t interested in you; by the luck of the draw someplace else, you get a professor who is more willing to. There’s just a lot of randomness to any information you’re going to get in such a small sample.


So if you understand the law of large numbers, you’re not going to make that judgment for David L. The other thing that’s important is to understand the base rate, because you can think of his friends’ views of these places, his friends’ experiences, as providing a base rate for the experiences to be expected at each of these schools, and again the law of large numbers plays into understanding why you ought to be paying deep attention to the base rate.


They’ve got hundreds or thousands of hours of collective experience at these places, and so you should use that base rate to decide what to do. People show resistance to that. They’ll say, “Well, you know, you’re asking me to do what other people are doing, but I have my own unique preferences and skills and so on, and I don’t know that I should just slavishly follow what other people are doing.”


The social psychologist Dan Gilbert has a great expression. He says, “If you’re like most people, then like most people, you think you’re not like most people, but you are.” The base rates for human beings apply to you for most things. I’ll give an example. I just saw the musical Hamilton. I have yet to hear of anybody who didn’t absolutely love that musical. So I say, “I feel with great confidence that you’re going to like that musical, whoever the heck you are.”


They’ll say, “Well, I don’t like musicals.” Don’t tell me that; I don’t care whether you like musicals or not. I don’t particularly like musicals and I loved it. They’ll say, “You know, it’s hip hop music, I’m not crazy about hip hop music.” Well, I’m certainly not crazy about hip hop music, but I loved that thing. So you just have to pay attention to other people’s experiences, other people’s views, as generating a base rate for what to expect of your own experience, and don’t try to collect little pieces of individuating information about this particular case, like who is starring in the movie. Think about what other people’s base rate of opinion is about that thing.


[0:28:38.2] MB: So essentially, many people get caught up in the trap of thinking only about their own unique situation and trying to gather as much data as they can, when oftentimes, if you would just zoom out and look at everyone who has ever been in this situation, what the predominant outcomes were and at what frequency, you can make a much better decision.


[0:28:57.3] RN: Yeah, very well put.
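The intuition behind the David L. example can be sketched with a small simulation. The hour counts and noise level below are illustrative assumptions of mine, not figures from the episode:

```python
import random

random.seed(7)

def base_rate_wins(trials=2000, friend_hours=500, visit_hours=8, noise=3.0):
    """Fraction of the time a large pooled sample (the friends' experience)
    lands closer to a school's true quality than one day's impressions."""
    wins = 0
    for _ in range(trials):
        quality = random.gauss(0, 1)  # the school's true quality
        friends = sum(random.gauss(quality, noise) for _ in range(friend_hours)) / friend_hours
        visit = sum(random.gauss(quality, noise) for _ in range(visit_hours)) / visit_hours
        if abs(friends - quality) < abs(visit - quality):
            wins += 1
    return wins / trials

print(base_rate_wins())  # well above 0.5: the big sample is the better guide
```

Even though each friend’s hour is just as noisy as the visitor’s, averaging hundreds of hours shrinks the error far below what one day of impressions can achieve, so the base rate wins the large majority of the time.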


[0:28:57.8] MB: As an aside, Hamilton is awesome. I haven’t seen it, but I do love the soundtrack. Anyway, changing gears. We’ve talked a lot about the statistical concepts that you lay out in the book that can help people make better decisions. I’d love to dig into some of the other ideas, from the scientific method, about how we can apply scientific thinking to be better decision makers.


[0:29:19.4] RN: Great. So you’d like me to give some examples of how we can make use of the experimental method?


[0:29:25.2] MB: Exactly.


[0:29:26.7] RN: Well, first of all, let me say that where it’s most important is in public policy matters. After 9/11, 9,000 grief counselors descended on Manhattan to work with people, and they did what seems very reasonable to me. They met with people in small groups, they asked people to tell about their experiences and their emotional reactions, and then they would assure people that their reactions are very common, there’s nothing strange or unusual about them, and in the not too distant future they’re going to be a lot better off.


Sounds like a great idea. Except that it isn’t. It actually makes people worse, and there are things that social psychologists have discovered to do for grieving people that make them better. So here’s this massive investment that society has put into something that is not doing any good; it’s costly, and it’s doing some harm.


Another example: 20 or so years ago, a bunch of prisoners in New Jersey decided that maybe they could scare kids away from doing things that would put them in prison. So they brought junior high kids to the prison and told them how horrible it is: the food is terrible, it’s incredibly boring, you get beat up all the time, there are sexual attacks, and so on. Again, that sounds like a great idea to me. You have a kid who is at risk for delinquency; that might make him think twice about it. But in fact it actually makes kids more likely to become delinquents. 


Now, don’t ask me why. I don’t have an explanation, and I don’t have to have an explanation; I just know what the data are. Good experimental studies have now been done that expose some kids to what are called Scared Straight programs and don’t expose others, and on average exposure seems to increase the likelihood of delinquency by about 13%. One estimate, from a meta-analysis of a number of studies, comes to the conclusion that for every dollar spent on Scared Straight, you incur $400 of cost in terms of crimes committed and paying for incarceration.


Well, let’s take something really big. The Head Start program has been with us for about 50 years. We’ve spent $200 billion on it to this point, and we don’t know whether it does any good or not. A few million dollars of experiments would have told us what kinds of early childhood programs are effective, if society were in a more experimenting mood. We do know that some forms of childcare are effective; they tend to be more ambitious and better carried out than most Head Start situations are. But people just assume that it’s got to be a good idea: you take a bunch of kids in, you show them some intellectual tasks, you get them to cooperate with each other. Probably some version of that’s correct, but we have no idea how close to that ideal the typical Head Start experience comes.


So at a societal level, we need vastly more experiments than we’re getting. In all of these cases, the answers seem obvious to people. They’re obvious to me too, but it’s a great burden being a social psychologist, because unlike everybody else, I’m constantly getting my opinions about human behavior contradicted. I never do an experiment unless I know, or have a pretty good idea of, what’s going to happen. Why would I do an experiment otherwise? I’m not just looking at things randomly. I think this is the way the world is, if I do this, this is what will happen, and half the time I’m wrong.


So social psychologists are constantly having their noses rubbed in the fact that their guesses about human behavior, the way we model human behavior, are often way off, and the only substitute for that is to just do the experiment. Then at the individual level, there are all kinds of opportunities for experiments that would be informative. Am I better off if I have coffee in the morning or not? Does coffee make me more efficient, or does it make me more jittery and unpleasant? The only way I’m going to know the answer to that question is by doing a randomized controlled experiment.


You come down in the morning and you flip a coin to decide whether you’re going to have coffee or not. Otherwise you’re drinking coffee in a haphazard way: oh, I’m drinking it this morning because my husband made it for me, or I didn’t have it this morning because I was in a rush. So there’s a huge amount of noise that you’re exposing yourself to, and you can get pure signal if you just do the randomized experiment. Same thing for yoga: are you better off with yoga or not? Meditation or not? Flip the coin and meditate today or not. Or meditate for a month and then a month not. Or yoga for six months and then six months not, and see what the empirical answers are.
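
The coin-flip protocol Nisbett describes can be sketched as a tiny simulation. This is a hypothetical illustration, not from the book: the productivity scale, noise level, and effect size are all invented for the example.

```python
import random
import statistics

random.seed(42)

def run_n_of_1_trial(days=60, true_coffee_effect=1.0):
    """Simulate a randomized n-of-1 trial: each day a coin flip decides
    coffee or not, and we record a noisy daily productivity score."""
    coffee_days, no_coffee_days = [], []
    for _ in range(days):
        baseline = random.gauss(5.0, 1.5)   # daily noise: sleep, mood, workload
        if random.random() < 0.5:           # the coin flip
            coffee_days.append(baseline + true_coffee_effect)
        else:
            no_coffee_days.append(baseline)
    # The difference of group means estimates the coffee effect.
    return statistics.mean(coffee_days) - statistics.mean(no_coffee_days)

print(f"Estimated coffee effect: {run_n_of_1_trial():+.2f}")
```

With only 60 days the estimate is noisy, which is exactly the law-of-large-numbers point: run the trial longer and the estimate converges on the true effect, while haphazard (non-randomized) coffee drinking never lets the noise cancel out.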


Social psychologists have an expression that they use with each other all the time, and I think it should be at everybody’s disposal much more than it is, and that’s “it’s an empirical question.” Instead of “I tell you my model of the world and you tell me your model of the world” and we talk about it, in the end it’s an empirical question. Let’s look it up, or if we can’t look it up, let’s do the experiment, or if we can’t do the experiment, let’s admit that dueling models are not necessarily going to get us any closer to the truth. When you can do an experiment easily, it’s foolish to just assume that your plausible model entitles you to an opinion about some matter.


[0:35:11.5] MB: I find it so interesting that our intuitions can often be terribly misleading. In many cases, people who haven’t studied psychology or statistics or any of these methodologies for understanding how the world works and how the human mind works just lean on intuition: “I feel like this is the case, so that seems like what’s true.” And oftentimes they can be completely wrong.


[0:35:38.3] RN: Right. My friend, the social psychologist Lee Ross, has a very important concept that I would say is at the core of anything I would want to say about information-age reasoning, and that is that we have an illusion of objectivity. As I experience the world, I think I’m registering what’s out there, and I’m not, not for anything, not even for visual things. What’s being recorded on my retina is not the information I’m using to make a judgment about, for example, distance, depth perception, or estimations of size, and that’s easy to show.


I mean, perceptual psychologists make a living by showing how easy it is to create illusions and make us render a wrong judgment about some illustration or some physical setup in the world. That’s because our perceptual apparatus is not set up to render what the world is in some actual sense. It’s set up to give us what’s useful, so the visual processing centers wildly distort the picture of some object in the service of size constancy. That is, we add a dose of perceptual analysis that allows us to see an object receding into the distance as being the same-size object, even though the way it strikes our retina is very different.


Our perceptual apparatus is a very complicated, layered set of mental operations designed to give us some correct view of the world, but those same processes can create illusions in some circumstances. The [inaudible] psychologists’ tools that we use to understand reality are things like schemas, that is, representations of common situations, stereotypes, heuristics, rules of thumb for reasoning, and so on. These highly error-prone structures and processes are what we’re using to understand the world. We’re not registering the world, we’re interpreting it, and we’re interpreting it moreover by structures and processes that we have no awareness of.


So I think it’s helpful in all kinds of ways to recognize that we do have an illusion of objectivity, or what philosophers call naïve realism. If you understand that, it’s useful for humility. I probably shouldn’t be nearly as sure of my understanding of the world as I am most of the time, because I’m using processes which can often lead me astray.


[0:38:36.8] MB: Changing gears a little bit, I’d love to talk about the fundamental attribution error and some of the work you’ve done on how situations, versus personalities, can impact people’s behavior.


[0:38:48.8] RN: Right. Well, there’s a story that goes back to 1968 with the publication of a book by Walter Mischel. He’s the marshmallow guy that everybody knows about. The book was about the power of assessments of personality traits to predict behavior, and his generalization was that if you’re trying to predict behavior in one situation by virtue of knowing about behavior in some other situation which could be described by the same trait, your correlation is going to run about 0.1. That is, it’s a trivial gain in accuracy for knowing how honest someone is going to be, or how conscientious, or how extroverted.


You can do better than that if you have a very good personality instrument, a questionnaire, or reputation. A base rate, in other words, comes from knowing a lot about many past experiences and applying that base rate to the particular circumstance you’re looking at. Those correlations can go as high as about 0.3. Still not too impressive. That doesn’t mean that people don’t have personalities or that personalities don’t affect their behavior. They do, but you have to have a heck of a lot of information, and you’re predicting from a heck of a lot of information. It takes lots and lots of observations to predict a battery of other observations.
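
The jump from single-situation correlations of about 0.1 to battery-to-battery correlations of about 0.8 is what aggregation predicts. One standard way to see it is the Spearman-Brown formula from psychometrics, which estimates the correlation obtained by averaging n observations; the formula is standard, but the numbers below are only illustrative of the transcript’s figures, not taken from Mischel’s data.

```python
def spearman_brown(r_single, n):
    """Predicted correlation after aggregating n observations,
    each of which correlates r_single with the criterion on its own."""
    return n * r_single / (1 + (n - 1) * r_single)

# A single situation predicts another at about r = 0.1, as Mischel found.
# Aggregate enough observations and the predicted correlation climbs:
for n in (1, 5, 36, 100):
    print(f"n = {n:3d}  predicted r = {spearman_brown(0.1, n):.2f}")
# With 36 aggregated observations, r = 0.1 per situation becomes r = 0.80.
```

This is why lots of observations predicting a battery of other observations can reach 0.8 even when any one situation barely predicts any other.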


There you can get up to predictability of 0.8, 0.85. Now, why is that? Why is the predictability from one situation to another so poor? Well, it has to do with error of various kinds. Why did Joe give money to the United Fund? I say, “Well, he’s a generous guy.” Well, actually, his department chairman was going to know whether he gave money to the United Fund or not, so he gave it. Why did Bill not give money? Well, because it happens that he has an opinion about one particular program of the United Fund that he’s very much opposed to. Not that he’s ungenerous or uncharitable.


So situations are normally responsible for behavior to a much greater extent than we recognize, and personality traits, or other dispositions like skills or attitudes or needs, are often contributing very little. The situation’s driving the bus, for most behavior, most of the time. So this was a bombshell, actually, because he was able to show that nobody’s clinical assessments or personality trait assessments were very accurate in predicting behavior.


Some things, and this wasn’t his original contribution but it all went into his book, that clinicians thought were predictive were absolutely useless. The Draw-a-Person test predicts nothing, you know? Clinicians were thinking to themselves, “Well, this person draws a person with funny eyes; that guy could be paranoid. Or draws somebody with a big head; well, I may have worries about his intelligence. Or somebody draws a person with sexual organs; that person maybe has some sexual adjustment issues.”


All of which undergraduates with no clinical training at all will see in data even though it’s not there. You build the data set so that none of these things are true, but that’s what they’ll see: “Oh, funny eyes, paranoia. I see.” We’re just not that good at covariation detection. Actually, we’re shockingly bad at most kinds of covariation detection, which is strange given how very good we are at pattern detection.


If there’s a pattern out there in the world, we can’t not see it. But if there’s a correlation of a given kind, for most kinds of things, important things even, that we really would like to have an accurate idea of, it’s just very hard to detect. Our judgments of covariation are primarily determined by what we expect to see. A psychologist, I can’t recall his name offhand, showed what we call “preparedness”: we’re prepared to see some kinds of associations, and we’re counter-prepared for others. The same thing is true for the Rorschach test.


The Rorschach test was given to hundreds of thousands of people, costing untold millions of dollars for these assessments. What is it that people see in these ink blots, and what does that predict? For decades, no one ever bothered to do the experiment, or the systematic observation, to ask, “How well do these Rorschach signs do in predicting behavior?” And it turns out the Rorschach is virtually useless. There are one or two little things it can predict, but it’s virtually useless.


So we see a behavior in one situation and we sort of take it for granted that we’ve learned something about a person’s personality traits, and it’s easy to show, and there are dozens of demonstrations and experiments showing, that we are way overconfident in our judgments about personality from looking at just one or two or three situations in which we’ve observed behavior.


There’s a law of large numbers issue here too. One behavior is not a very large sample, but we don’t realize that; there are few arenas where we’re aware of the uncertainty of any single observation. Interestingly, sports is an exception. People are really well calibrated on how much you can predict, say, a basketball score in one particular game from scores in another game, or how well you can predict a spelling test performance by elementary school kids by virtue of knowing another spelling test performance.


For the abilities we’ve looked at, those correlations tend to run about 0.5. From one serious, good observation, one game or one test, they tend to run at about 0.5. So they’re informative, but they’re certainly not the whole story. People with any knowledge of sports understand this perfectly well; it’s captured beautifully by the idea that on any given Sunday, any team in the NFL can defeat any other team in the NFL. That’s how much of a role luck, or error, plays in any given sports outcome.
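
That 0.5 figure corresponds to outcomes that are roughly half skill, half luck. A quick simulation can make this concrete; the numbers are hypothetical and chosen only to illustrate the point, not drawn from any sports data. If each game’s performance is a stable skill component plus independent game-to-game noise of equal variance, any two games correlate at about 0.5:

```python
import random
import statistics

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Each performance = stable skill + independent game-to-game luck.
# With equal variance for skill and luck, two games share only the
# skill component, so their correlation is ~0.5.
skills = [random.gauss(0, 1) for _ in range(20000)]
game1 = [s + random.gauss(0, 1) for s in skills]
game2 = [s + random.gauss(0, 1) for s in skills]
print(f"game-to-game correlation: {correlation(game1, game2):.2f}")
```

One game is informative about skill, but the other half of the variance is noise, which is exactly why any given Sunday can go either way.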


Despite the fact that people are quite good at understanding how well you can predict an event from another event, or a set of events from another large set of events, that doesn’t carry over at all to our understanding of personality-trait-related behavior. You can show that people are horrendously miscalibrated about how much information they think they’ve gotten from observing a person in one particular situation.


[0:45:50.8] MB: So obviously we’ve talked about the book, and for people who want to dig in and really understand these kinds of mental models or frameworks much more deeply, that’s a great place to start. What are some other resources you’d recommend listeners check out if they want to dig into some of these topics?


[0:46:05.7] RN: Well, I think Nate Silver’s The Signal and the Noise. It’s about statistical concepts; it’s a beautiful information-age book. It tells you how you need to think about information that you haven’t collected yourself, that somebody else has collected: how to make use of it and how to avoid making errors in interpreting it. There’s another lovely book by a mathematician called How Not to Be Wrong, and incidentally, he deals with the law of large numbers at length in his book, just like I do in mine.


It’s very similar. I was kind of surprised that a mathematician would be thinking about so many everyday-life situations in terms of the law of large numbers and have so many beautiful concrete examples of how we have to think, given that all of our observations have errors surrounding them. I was surprised because I don’t see statisticians doing that sort of thing.


For somebody who really wants to get serious about inferential rules in a very systematic way, with formal definitions, I would recommend a book by Diane Halpern called Thought and Knowledge. It marches us through the material; it’s similar to my book in a way, although she spends time on things that I don’t spend much time on. She talks a fair amount about some logical principles and some logical schemas, where I think that formal stuff is not actually something that people can make that much use of.


But some people would like to know about it anyway, because some people are in jobs which sometimes require certain kinds of logical formulations, and it can be interesting, it can be fun to look at that stuff. She covers a lot of territory in that book, which is basically a critical thinking text; that’s what it’s intended for. There’s a lot of good stuff in there. And of course there is Danny Kahneman’s book, which is a near relative of mine. The title there, of course, is Thinking, Fast and Slow.


[0:48:04.8] MB: Yup, great book. Huge fan of that book and of Daniel Kahneman. So, where can people find you and your book online?


[0:48:12.3] RN: Well, it’s on Amazon, and it’s in various versions: print, Kindle, and Audible.


[0:48:19.4] MB: Great. Richard, thank you so much for being on the show. It’s been a fascinating conversation; we really explored a lot of these concepts that can seem daunting at first but are really critical components of building a deep understanding of how the world works, how your mind works, and how we can make better decisions. So, thank you so much for being a guest on the Science of Success. We’ve really enjoyed having you on here.


[0:48:42.7] RN: Thank you 


[00:48:42.2] MB: Thank you so much for listening to the Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I would love to hear from listeners. If you want to reach out, share your story, or just say hi, shoot me an email. My email address is matt@scienceofsuccess.co. I would love to hear from you, and I read and respond to every listener email.


The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please, leave us an awesome review and subscribe on iTunes. That helps more and more people discover the Science of Success. I get a ton of listeners asking, “Matt, how do you organize and remember all this information?” Because of that we created an amazing free guide for all of our listeners. It’s called How to Organize and Remember Everything. You can get it for free by texting the word “smarter” to the number 44222 or by going to scienceofsuccess.co and joining our email list. 


If you want to get all the incredible information, links, transcripts, everything we talked about in this episode and much more, be sure to check out our show notes. Go to scienceofsuccess.co, hit the show notes button at the top. We’re going to have everything that we talked about on this episode. If there was a previous episode that you loved, you can get the show notes for every episode that we’ve done. Just go to scienceofsuccess.co, hit the show notes button at the top, and you can find everything.


Thanks again, and we’ll see you on the next episode of the Science of Success. 
December 15, 2016 /Lace Gilger
Decision Making

How You Can Predict The Future Better Than World-Famous Experts - The Art & Science of Risk with Dan Gardner

December 08, 2016 by Lace Gilger in Decision Making

In this episode we discuss the radical mismatch between your intuitive sense of risk and the actual risks you face. We look at why most experts and forecasters are less accurate than dart throwing monkeys. We talk about how to simply and easily dramatically reduce your risk of most major dangers in your life. We explore the results from the “good judgment project” study of more than 20,000 forecasts. We talk about what superforecasters are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts and MUCH more with Dan Gardner.

Dan Gardner is a New York Times best-selling author and a senior fellow at the University of Ottawa’s Graduate School of Public and International Affairs. His latest book is Superforecasting: The Art and Science of Prediction, which he co-authored with Philip Tetlock. Superforecasting was chosen as one of the best books of 2015 by The Economist, Bloomberg, and Amazon. Dan is also the author of Future Babble and Risk: The Science and Politics of Fear and previously worked as a policy advisor to the Premier of Ontario and a journalist with the Ottawa Citizen.

  • How and why people make flawed judgments about risk

  • The radical mismatch between our intuitive sense of risk and the actual risks we face

  • Why we are the safest, healthiest, wealthiest people to live on planet earth (and we don't realize it)

  • Why we focus on vivid, dramatic risks, and ignore the real dangers in our lives

  • How to simply and easily dramatically reduce your risk of most major dangers in your life

  • The power of “meta cognition,” what it is, and why it’s so important

  • Lessons you can learn from the mega successful investor George Soros

  • Why most forecasters are less accurate than monkeys throwing darts

  • The difference between foxes and hedgehogs (and why you never want to be one of them)

  • The inverse correlation between fame and prediction accuracy

  • What cancer diagnosis shows about how averse people are to uncertainty

  • The universal principles of good judgement

  • The importance of intellectual humility and intellectual curiosity

  • Why certainty is an illusion and nothing is ever certain

  • Why everything is a question of degrees of maybe (probabilistic thinking)

  • The results from the “good judgement project” study of more than 20,000 forecasts

  • What superforecasters are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts

  • The differences between these “superforecasters” and regular forecasters

  • The importance of being “actively open minded"

  • Why you should unpack smaller questions & look at things like base rates

  • How to use “fermi estimates” to solve tough and challenging problems

  • Why the growth mindset has a huge positive impact on the ability to forecast

Need to do some planning for next year? Listen to this episode!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [SOS episode] Fixed Versus Growth Mindsets

  • [Book] Mindset: The New Psychology of Success by Carol S. Dweck

  • [Book] Superforecasting: The Art and Science of Prediction by Dan Gardner and Philip E. Tetlock

  • [Book] Thinking, Fast and Slow by Daniel Kahneman

EPISODE TRANSCRIPT

[00:00:06.4] ANNOUNCER: Welcome to the Science of Success with your host, Matt Bodnar


[00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee, and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performance tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion.


In this episode, we discuss the radical mismatch between your intuitive sense of risk and the actual risks you face. We look at why most experts and forecasters are less accurate than dart-throwing monkeys. We talk about how to simply and dramatically reduce the risk of most of the major dangers in your life. We explore the results from the Good Judgment Project, which is a study of more than 20,000 forecasts. We talk about what super forecasters are, how they beat prediction markets, how they beat intelligence analysts with classified information and software algorithms to make the best possible forecasts, and much more with Dan Gardner.


The Science of Success continues to grow, with more than 650,000 downloads, listeners in over a hundred countries, hitting number one New & Noteworthy, and more. A lot of our listeners are curious how to organize and remember all this information. I get listener emails all the time asking me. “Matt, how do you keep track of everything? How do you keep track of these interviews, podcasts, books that you read, studies that you read, all this incredible information?” I’ve developed a system from reading hundreds of books, from doing all this research, from interviewing these incredible experts, and I put it all in a free pdf that you can get.


All you have to do is text the word smarter to the number 44222. It’s a free guide we created called How to Organize and Remember Everything. Listeners are loving this guide. I get emails every day from people talking about how this has helped them transform their lives and keep themselves more organized. You can get it completely for free, all you have to do is to text the word smarter to the number 44222 or go to scienceofsuccess.co and put in your email.


In our previous episode, we discussed why the happiness movement has done us a disservice and sometimes actually makes things worse. How perfectionism creates an illusion of control and distorts your reality. How to become aware of the critical inner voice at the root of your pain and unhealthy habits, and the incredible power of self-compassion, and much more with Megan Bruneau. If you’re struggling with difficult emotions, if you want to become happier, if you have a battle with perfectionism, listen to that episode.


[0:02:48.7] MB: Today, we have another fascinating guest on the show, Dan Gardner. Dan is a New York Times bestselling author, and a senior fellow at the University of Ottawa’s Graduate School of Public and International Affairs. His latest book is Superforecasting: The Art and Science of Prediction, which he coauthored with Philip Tetlock. Superforecasting was chosen as one of the best books of 2015 by The Economist, Bloomberg, and Amazon. Dan’s also the author of Future Babble, and Risk: The Science and Politics of Fear.


He also previously worked as a policy adviser to the Premier of Ontario, and as a journalist for the Ottawa Citizen. Dan, welcome to the Science of Success.


[0:03:23.7] DG: Hello.


[0:03:24.6] MB: Well, we’re very excited to have you on here today. For listeners who might not be familiar with you, tell us a little bit about you and your background?


[0:03:32.1] DG: Yeah, sure. I’ve had a bit of an eclectic background. Initially, after law school, I went and worked in politics, and then I got into journalism and did a whole bunch of work in journalism. Then I happened to catch a lecture one year by a pioneer in the field of risk perception psychology, Paul Slovic, and that lecture really opened my eyes. It made me connect a lot of dots. I started to think about psychology, I started to study psychology heavily, and that’s been the course of my career ever since.


It’s really been an interesting experience, because when you change your understanding of how people think, how they perceive, how they decide, you change your understanding of people generally, and it was a real watershed in my life.


[0:04:21.4] MB: I’m really curious, what is risk perception psychology?


[0:04:24.5] DG: Basically, it’s a field of psychology that goes back to the 1970s, when, as you may know, there was large and growing controversy about the safety of nuclear power. The nuclear engineers would say, “Look at our data, it’s okay, it’s safe, don’t worry about it,” and the public was worried regardless. It didn’t matter how many numbers they were shown; they got more and more worried. That was the point at which psychologists got involved to ask, “Well, how do people make these judgments about risk, if they’re not making them on the basis of available data? How are they making these judgments? Why are they so much more worried than the nuclear engineers say they should be?”


The bottom line on that is that risk perception is in large part intuitive, it’s felt. If you feel that something is a threat, you’ll take it seriously. If you don’t feel that, you won’t. Generally speaking, that applies to any risk. Sometimes that works, sometimes our intuitive understanding of risk, or intuitive sense of risk, is very accurate and will keep us out of danger, and sometimes it is horribly inaccurate. It will not help us whatsoever. 


A simple example is after 9/11. Of course, we all saw the jet fly into the tower. We saw what happened afterward, and all sorts of folks became terrified of flying, thinking that they would be the next victims of deadly hijackings. They still had to get around, so what did they do? Well, they started driving instead, because that didn’t feel like a threat. Well, guess what? Driving is in fact considerably riskier than flying.


As a result of this mass shift from flying to driving, by some estimates, as many as 1,500 people died who would not otherwise have died. That’s a great example of how our intuitive perception of risk can steer us in fact into greater danger.


[0:06:23.5] MB: That’s something that I find really fascinating, and especially I feel like people who constantly watch the news or get caught up in stories about terrorism, or mass shootings, or whatever it might be, kind of miss the point that, I think as you’ve said in the past, today we’re actually some of the healthiest and safest people to ever live on planet Earth.


[0:06:42.1] DG: Yeah, I mean that’s just an indisputable fact. We are some of the healthiest and safest people, and wealthiest too, if you want to throw that one in, to ever live, and yet we sure don’t talk or act like it. That’s really pretty unfortunate. Number one, we’re not appreciating the bounty that has befallen us, but it also means that in large part we’re missing the real risks very often when we think about what we should worry about and what we shouldn’t worry about.


You’re quite right, we worry about the big, dramatic, vivid risks like terrorist attacks, even though any quick glance at the statistics will tell you that as an individual, are you likely to be killed in a terrorist attack? Almost certainly not. But simultaneously, we ignore the real risks. Sitting on the couch, watching television, eating junk food doesn’t feel like a threat, but if you do it day after day, month after month, year after year? Yeah, it is a real threat, and that’s why there’s some pretty undramatic advice that I always give people.


I always say, basically: if you eat a reasonable diet, don’t smoke, obey all traffic rules, and get some exercise, you have dramatically reduced your risk from all the major killers in modern life. That’s not a terribly exciting message. It’s not exactly great for grabbing headlines.


[0:08:07.0] MB: You know, it’s funny. Oftentimes, the best advice is the most simple and obvious.


[0:08:12.2] DG: Yeah, this is one of those areas where that is absolutely true, but the problem, of course, again goes back to how we judge risks. As I say, sitting on your couch, watching television, eating junk food does not feel like a threat, because of our risk perception psychology. Where does that come from? It comes from the environment in which the brain evolved, a world completely unlike the world in which we live, and so there is this radical mismatch between our intuitive sense of risk and the world in which we live.


The things that we should be worried about, like not getting enough exercise, eating too much salt, smoking, don’t feel like threats. Meantime, the things that do feel like major threats, the terrorist attack you see on television or whatever, aren’t so much. That’s why it’s so absolutely critical that people think carefully about risk judgments and ask themselves hard questions. Does this really make sense? Is there really evidence to support this? Don’t let your gut drive the decision.


[0:09:24.6] MB: When thinking about some of these major risks, for somebody who is listening now, instead of following their gut instinct, what you’re recommending is to think a little bit more deeply about it.


[0:09:33.9] DG: Absolutely. Introspection is absolutely essential, and this is actually a point which I think comes out of psychology in general, comes out of decision making in general. When you ask who are the people who make good judgments and what do they have in common? I would suggest to you that there is at least a couple of points that are universal, and at the top of that list is introspection. 


People who have good judgment tend to think a lot about their thinking. Psychologists call that metacognition. They think about their thinking. They tend to be the sorts of people that say, “Okay, this is what I think. Here’s my conclusion, but does it really make sense? Is it really supported by evidence? Am I looking at the evidence in an unbiased fashion? Have I overlooked other possible explanations?” As I say, when you look at people with good judgment, you find that they have that introspection in spades.


My favorite illustration of that is George Soros. George Soros — of course, today he is controversial because of politics, but just forget that. Remember that George Soros, from the 1950s to the 1980s, was an incredibly successful investor, particularly during the 1970s. That was impressive because, of course, that was a terrible time to be an investor, and yet he was very successful during that time. The interesting thing is, when George Soros was asked, “George, why are you so good?” — and when you’ve made billions and billions of dollars, you’re perfectly entitled to say it’s because I’m smarter than all you people.


He never said anything at all like that. His answer was always the same thing. He always said, “I am absolutely aware that I am going to make mistakes, and so I’m constantly looking at my own thinking to try to find the mistakes that I know must be there, and as a result, I catch and correct more of my mistakes than does the other guy.” It’s that sort of a very intellectually humble message which he says is the source of his success and frankly, I think you can, as I say, I think you can find that sort of deep introspection in every single person who has demonstrable good judgment.


[0:11:40.1] MB: On the topic of good judgment, I think that’s a good segue into kind of the whole discussion about forecasting. Let’s start out — I’d love to hear the story or kind of the analogy of monkeys throwing darts. Tell me about that?


[0:11:54.1] DG: Yeah, that unfortunate punchline comes from the research of my coauthor, Philip Tetlock. He’s a very eminent psychologist — formerly at the University of California, Berkeley, now at the University of Pennsylvania’s Wharton School. Phil, back in the 1980s, became interested in expert political judgment. You have very smart people who are observing world affairs, and they say, “Okay, I think I understand it, and I think I know what’s going to happen next.”


They make the forecast. Phil decided, “Well, are they any good?” When you look at the available evidence, what you quickly realize is that while lots of people have lots of opinions about expert forecasts, that’s all they are — opinions. The forecasts had never been properly scientifically tested. So Phil said to himself, well, how should they be tested? How can we do this? He developed a methodology for testing the accuracy of expert forecasts, and then he launched what was at the time one of the biggest research programs on expert political forecasting ever undertaken.


He had over 280 experts — economists, political scientists, journalists, intelligence analysts. He had those folks make a huge number of predictions about geopolitical events over many different timeframes, and then he waited for time to pass so that he could judge the accuracy of the forecasts. Then he brought together all the data, and crunched all the data, and boiled it all down, and there are vast numbers of findings that came out of this enormous research, which was published in a book called Expert Political Judgment in 2005.
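As background on how accuracy gets judged in research like this: Tetlock’s studies scored probability forecasts with the Brier score, the squared gap between the stated probability and the 0-or-1 outcome. The metric isn’t named in this conversation, so treat the following as an editorial sketch rather than part of the interview:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    0.0 is perfect; 0.25 is what a constant 50% guess scores; 1.0 is worst."""
    return (forecast - outcome) ** 2

# A hedgehog who declares an event certain (1.0) and is wrong takes the
# maximum penalty; a fox who said 65% and is right takes a modest one.
print(brier_score(1.0, 0))   # 1.0
print(brier_score(0.65, 1))  # 0.1225

# Forecasters are compared on their average score over many questions.
forecasts = [(0.7, 1), (0.3, 0), (0.9, 1), (0.5, 0)]
average = sum(brier_score(p, o) for p, o in forecasts) / len(forecasts)
print(round(average, 2))  # 0.11
```

The scoring rule is what makes overconfidence costly: saying “certain” and being wrong is penalized far more than a hedged 65% that turns out wrong.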


One conclusion that came out of this research was that the average expert was about as accurate as random guessing, or if you want to be pejorative, the average expert was about as accurate as a dart-throwing chimpanzee. Some people really latched on to that conclusion, they really enjoyed that. These are the sorts of people who like to sneer at so called experts. There are other people who like to say that it’s impossible to predict the future, and they always cite this as being evidence of that — demonstrably fallacious conclusion.


This is one of those instances where statisticians like to warn people that averages are often useful and insightful, but sometimes they obscure things, and this is one of those classic illustrations where the average actually obscured the reality. The really interesting finding from Phil’s research was not that the average expert was about as accurate as a dart-throwing chimpanzee. It was that there were two statistically distinguishable groups of experts.


One group did much worse than the dart-throwing chimpanzee, which is pretty incredible when you think about it. The other group had real predictive insight. They did better than random guessing. It was still modest predictive insight; they made lots of errors, but they clearly had real foresight. The really interesting question from Phil’s original research was what distinguishes the two types of experts? What makes one type of expert a disaster, and what makes the other type of experts somebody with real foresight?


He looked at all sorts of factors that you would think might be relevant: did they have PhDs, did they have access to classified information, whether they were left wing or right wing, optimistic or pessimistic — and he showed that none of these factors made a difference. Ultimately, what made the difference was the style of thinking. The two types of forecasters had two very different styles of thinking.


To sum this up, Phil used a metaphor which has been used in many different contexts: foxes and hedgehogs, because there’s a scrap of ancient Greek poetry that says, “The fox knows many things, but the hedgehog knows one big thing.” One type of expert’s style of thinking is to have one big idea — that’s the hedgehog. The hedgehog has one big idea, and here that means they have one analytical tool. They have one lens, one way of looking at reality, and they think that that is sort of the secret decoder ring of the universe.


So they use it over and over again to tell them what is going on, to make forecasts. That sort of expert likes to keep their analysis simple; they don’t like to clutter it up with a whole bunch of different perspectives and information. They like to push the analysis until it delivers a nice clear answer, and of course, if you push the analysis until it delivers a clear answer, more often than not you’re going to be very confident in your conclusion. You’re going to be more likely to say that something is certain or that something is impossible.


The other type of expert is the fox, and as the ancient Greek poet says, the fox knows many things. What that means in this context is that the fox doesn’t have one big analytical idea; the fox will use multiple analytical ideas. In one case the fox may use one idea, and in another case, the fox uses a different idea. Foxes are also very comfortable going out and consulting other views: here I have my analysis, I come to a conclusion, but you have an analysis, and I want to hear your analysis.


If you’ve got a different way of thinking, a different analysis, a different method, then I definitely want to hear that. They want to hear from multiple information sources. They want to hear different perspectives, and they draw those perspectives together and try to make sense of all these separate sources of information and different perspectives.


Now, if you do that, you will necessarily end up with an analysis that is not so elegant as the hedgehog’s analysis. It will be complex and it will be uncertain, right? You’ll probably end up with more situations where you have, say, seven factors that point in one direction and five factors that point in another direction, and then you’ll say, well, you know, on balance, I think it’s maybe 65% likely it will happen.


They’ll be more likely to say that sort of thing than to say it’s certain to happen or it’s impossible, right? They end up being much less confident than the hedgehogs. Well, the conclusion of Phil’s research was that the hedgehogs were disastrous when it came to making accurate forecasts. As I said, they were less accurate than the dart-throwing chimpanzee.


The foxes had the style of thinking that was more likely to produce an accurate forecast, but here’s the punchline. The real punchline from Phil’s research is that he also showed there was an inverse correlation between fame and accuracy. Meaning, the more famous the expert was, the less accurate his forecasting was — which sounds absolutely perverse when you think about it, because of course you would think that the media would flock to the accurate forecaster and ignore the inaccurate forecaster.


In fact, it makes perfect sense, because remember that the hedgehog tells you a simple, clear story that comes to a definite conclusion — it will happen or it won’t happen, a confident conclusion — whereas the fox expert says, “Well, there are some factors pointing in one direction, and other factors pointing in another direction. There’s a lot of uncertainty here, but I think it’s more likely than not that it will happen.”


If you know anything about the psychology of uncertainty, we really just don’t like uncertainty, right? When you go to an expert and you get that fox-like answer that says, well, balance of probabilities, that’s psychologically unsatisfying, whereas the hedgehog is giving you what you psychologically crave, which is a nice, simple, clear story with a strong, clear conclusion. And as a result, we find that the media goes to exactly the type of expert who is most likely to be wrong.


That’s a really important and really unfortunate finding, and I wish it were as famous as Phil’s finding about the average prediction being as accurate as the dart-throwing chimpanzee, because it is just so much more important. Unfortunately, there it is. That was the culmination of Phil’s first enormous research program.


[0:20:05.7] MB: I think it’s such an important finding that the smartest people, “the most accurate forecasters”, as you call them, the foxes, are often kind of the most humble and the least confident and certain about what’s actually going to happen.


[0:20:18.3] DG: Yup. This is, again — you were asking about sort of the universals of good judgment. One of the universals is a quality that I call intellectual humility. I emphasize intellectual humility because it’s not just humility. This isn’t somebody wringing his or her hands and saying, “I’m not worthy, I’m no good.” By intellectual humility, I mean it’s almost like a worldview in which you say, look, reality is immense, complex, fundamentally uncertain in many ways.


For us to understand even a little bit of it, let alone to predict what’s going to come next is a constant struggle. What’s more, we’re fallible people and people make mistakes, so I just know that I’m going to have to work really hard and I’m still going to make mistakes, but I can in fact slowly try to comprehend a little bit and try to do a little bit better. That attitude is absolutely fundamental for a couple of reasons.


Number one, it says you’re going to have to work really hard at this, right? Comprehending reality, let alone forecasting, is not easy. Expect to work hard if you want to do it well and accurately. Number two, it encourages introspection. You remember I mentioned earlier that introspection is universal among people with good judgment. Well, if you’re intellectually humble and you know you’re going to make mistakes, you’re going to be constantly thinking about your thinking so that you can try and find those errors, okay?


So introspection flows naturally out of intellectual humility. The third element that flows out of intellectual humility is this: if you have this idea that the universe is vast and complex and we can never be sure, then you know that certainty is an illusion. You should not be chasing certainty, because human beings just can’t manage that. What does that mean? That means don’t think of making a forecast in terms of it will happen or it won’t happen. Don’t think in terms of it’s 100% or 0%.


Think in terms of 1% to 99%. It’s all a question of degrees of maybe, right? The finer the grain with which you can distinguish between degrees of maybe, the better. What I’ve just described is something called probabilistic thinking. It too is very fundamental to people with good judgment, and unfortunately, it’s very unnatural. It’s not how people normally think. In fact, how people normally think is what we sometimes call the three-setting mental dial. You know, you ask yourself, is this thing going to happen? And you say, it will happen, or it won’t happen, or — if you really force me to acknowledge uncertainty, because I really don’t like uncertainty — I will say maybe. That’s the third setting of my mental dial.


There are only those three crude settings, whereas probabilistic thinking says no: throw out those two settings, it will happen and it won’t happen; it’s all degrees of maybe. As I say, this is not natural. This is not how people ordinarily think, but people can learn to do it, and they can make it a habit. Scientists think as probabilistic thinkers — good scientists do, anyway — and so do the super forecasters that we discovered in Phil’s second research program, people with demonstrably excellent forecasting skill. They are real probabilistic thinkers.


It is a habit with them. I mean, I spoke with one super forecaster, and, just in a casual conversation, I said, “Do you read? Do you read much?” He said, “I read lots,” and I said, “Well, do you read fiction or nonfiction?” He said, “I read both.” I said, “Well, what proportion of the two would you say that you read?” He said, “It’s about 70/30.” Then he caught himself and thought carefully, and he said, “No, it’s closer to 65/35,” right? This is in a casual conversation. Normal people just don’t think with that degree of fine-grained maybeness.


People who learn to think in probabilistic terms, they can make it habitual, and they can think that carefully. By the way, the data is very clear that that is in fact one of the reasons why these super forecasters are super.


[0:24:38.5] MB: Before we dig into that, because I do want to talk about how we can kind of train ourselves to think more probabilistically, and how we can learn from some of these super forecasters. Touching back on the idea of why people dislike uncertainty so much. Can you share kind of the anecdote about cancer diagnosis?


[0:24:55.8] DG: Sure. Look, when I say that people dislike uncertainty — believe me, I get it, okay? I dislike uncertainty. I would prefer to have hard facts: it is or it isn’t. But I don’t think people quite appreciate just how profoundly aversive uncertainty really is — psychologically aversive, it really is. Let me illustrate, in fact, with two illustrations. One is a scientific study that was conducted in Holland where they asked volunteers to experience electric shocks. Some of the volunteers were told, “You are about to receive 20 strong electric shocks in a sequence,” and then they were wired up to be monitored for the physiological evidence of fear, which is elevated heart rate, elevated respiration rate, and perspiration, of course.


Then other volunteers were told, “You will receive 17 mild electric shocks mixed randomly with three strong electric shocks,” and they too were monitored for the evidence of fear. Now, objectively, the first group obviously received much more painful shocks, but guess who experienced more fear? It was the second group. Why? Because they never could know whether the next shock would be strong or mild. That uncertainty caused much more fear than the pain itself.


That sort of aversion to uncertainty is very powerful stuff, and you will see it in doctors’ offices. In fact, any doctor will tell you a version of the story I’m about to tell. The patient comes in, the doctor has reason to suspect that the patient has cancer, tells the patient this, and says, “But we can’t be sure. We have to do more tests, and then we’ll see.” They do the tests, and then the patient waits. And any person who has ever been through that will tell you that the waiting is hell. Then one day, you go back to the doctor’s office, you sit down, and sometimes, unfortunately, the doctor has to say, “I’m afraid to tell you that the tests confirm that you have cancer.”


Almost universally, what patients report feeling at that moment is relief. They feel better and they almost always say the same thing: “At least I know.” That’s how powerful uncertainty is, that the possibility of a bad thing happening can be a greater psychological burden on us than is the certainty that the bad thing is happening.


If that’s the case, if uncertainty is so horrible to us and we just want to get rid of it, it’s really no surprise then that we will turn to sources that promise to get rid of uncertainty, even when it’s not rational to do so.


[0:27:49.2] MB: Now let’s dig in to kind of the idea of super forecasting, and let’s start with what is a super forecaster?


[0:27:57.2] DG: Yeah, it’s a bit of a grandiose term, I have to admit. It actually has humble origins. A number of years ago, at the Office of the Director of National Intelligence in the United States — that’s the office that oversees all 16 American intelligence agencies, including the CIA — a number of officials decided that they had to get more serious about analyzing the forecasting that the intelligence community does.


I don’t know if you’re aware, but the intelligence community actually spends a lot of its time not just spying, but also analyzing information to try and figure out what’s going to happen next. If Russia is saber-rattling, they’re going to make a forecast: will Russia try to seize the Crimea? They’ll try to make forecasts on all sorts of geopolitical events, including economic events, like what’s going to happen to the Chinese economy in the fourth quarter, that sort of thing.


The officials within the ODNI decided that they had to get better at this. One of the ways that they decided they would get better at this is to sponsor what became called a forecasting tournament. What that meant was very simple. It sounds like a game, but it’s not a game. It’s an enormous research program, and what they did was they went to leading researchers in forecasting and they said, “You set up a team to make forecasts, and we’ll ask questions, and they’ll be the real-world questions that we have to answer all the time. We’ll ask them in real time, as they arise. If an insurrection breaks out in Syria, we’ll ask something about how that will proceed. You have to forecast it, and then we’ll let time pass, and then we will judge whether your forecasts are accurate or not. We’ll do this for lots and lots of questions, and you guys, you researchers, you can use any methods you want, and then at the end of this process, we will be able to analyze the accuracy of all these forecasts.


We will see which methods work, which methods don’t, and then try to learn how we can improve what we’re doing.” Very sensible stuff, you would think. As I said, they went out to leading researchers, and ultimately they ended up with five university-based research teams in this forecasting tournament. One of the research teams was led by my coauthor Philip Tetlock, and that team was called the Good Judgment Project.


To give you an idea of the scale of this undertaking, the Good Judgment Project — which, as I say, was only one of five teams — involved volunteers. They went out and recruited them, through blogs and whatnot, and said, basically, do you want to spend a little free time making geopolitical forecasts? Then sign up here.


They got huge numbers of volunteers. At any one time there were 2,800 to 3,000 people involved with the Good Judgment Project. Over the course of the four-year tournament, there were more than 20,000 people involved. That gives you an idea of the scale of this. As for the bottom-line results, there were many results that came out of this because, as you can imagine, the data are voluminous.


The bottom-line results: number one, the Good Judgment Project won hands down. Number two, the Good Judgment Project discovered that there was a small percentage — between 1% and 2% — of the volunteer forecasters who were truly excellent forecasters. They were consistently good, and I say consistently good because that’s very important to bear in mind. Anybody can get lucky once, or twice, or three times, but if you’re consistently good, you can be pretty sure that you’re looking at skill, not luck.


To give you an idea of how good they were: at the start of the tournament, the ODNI set performance benchmarks which all the researchers thought were way too ambitious — nobody could beat this. The super forecasters blew past the performance benchmarks. They beat prediction markets, which economists would say shouldn’t be possible. They even beat intelligence analysts who had access to classified information.


Which is particularly amazing because remember, these are ordinary folks. These super forecasters, when they went to make their forecast, basically they had to use just whatever information they could dig up with Google. Yet they were able to beat even people who had access to all that juicy classified information. This is really impressive stuff and then the question is, well why are they so good?


We can quickly dispatch a number of things that you might think would explain this. Number one, you might think that they’re using some kind of arcane math, right? They’re using big data or algorithms, some craziness that ordinary folks can’t understand. No, they didn’t. In fact, they are very numerate people, by the way. I should emphasize that point.


They are well above average in numeracy. To the extent that they used math in making their judgments, it was like high school math; it was nothing particularly dramatic. Another thing that you might say would make the difference: well, maybe they’re just geniuses, right? They’re just so off-the-charts intelligent that they’re just super. No, that’s not the case either. They were given IQ tests, and again, they scored well above average.


These are not just randomly selected folks off the street, but they’re not Mensa-level geniuses either; they’re not so incredibly intelligent that ordinary folks can’t relate to them. The conclusion you can draw from this is clear: basically, it’s less what they have than how they use it. The third element that you might think matters is specialist knowledge, right? You might think, well, okay, these are experts in the fields that they’re trying to forecast, and no, I can tell you categorically they were not experts in the field.


They’re very informed people, right? These are people who agreed to make geopolitical forecasts in their spare time. It’s no surprise that they’re smart, they follow the news, they follow international news, they’re interested in this stuff, they’re very informed, but they’re not specialists. We know this for the very simple reason that they were asked all sorts of different questions, in all sorts of different fields, and nobody’s an expert in every field.


So, they’re not any of those things. Then the question is, well, what elevates them, what makes them different? I wish there were one or two simple answers, a couple of clear, crisp bullet points that answer everything, but that’s not the case. As is so often the case, the reality is complex. There’s quite a list of things that make them different. Number one, they’re intellectually curious.


I think that’s very important, it’s no surprise. These are people who like to learn, they’re constantly picking up bits and pieces of information, and no surprise, when you spend a lot of time picking up this sort of information, eventually you will have quite a number of dots in your intellectual arsenal for you to connect.


Two, these are people who score very high in what psychologists call need for cognition, which simply means that they like to think. They really enjoy thinking. They’re the kinds of people who do puzzles for fun, and the harder the puzzle is, the more fun it is — which is very important, because when you look at how they actually make their forecasts, it’s a lot of hard mental effort, and enjoying hard mental effort sure helps.


Three, they’re actively open-minded. This is another term from psychology. Open-minded is pretty obvious: that means, okay, I’ve got my perspective, but I want to hear your perspective. I want to hear somebody else’s perspective. I want to hear different ways of thinking about this problem. Then they gather all these different perspectives together and try to synthesize them into their own view.


Now, that’s the open-minded part, but of course, there’s an old saying about open-mindedness: don’t be so open-minded that your brain falls out. Well, with these folks, that’s the active part — the active open-mindedness — and these folks were very active in their open-mindedness. Meaning that as they’re listening to and gathering all these other perspectives, they’re thinking critically about them. They’re asking, does that really make sense? Is that actually supported by the evidence? Is that logical?


They’re doing that constantly as they draw these perspectives together and synthesize them into their own view, which again, I would emphasize, sounds like a heck of a lot of work. It is. Fortunately, as I said, they like hard thinking. Fundamentally also, they’re intellectually humble. I mentioned intellectual humility earlier. That is absolutely true here, and all the things that flow from that are true. You know, they’re hard mental workers, they’re deeply introspective people, they’re constantly looking at their thinking, trying to find the mistakes, trying to correct them and improve. And they’re probabilistic thinkers — that also flows from intellectual humility.


Another element I would also add is simply this: if you ask how they actually approach a problem, how they actually make a judgment, one of the critical differences between a super forecaster and most ordinary folks is that they don’t simply vaguely mull over information, stroking their chin until somehow an answer emerges and they don’t know how. That’s a terrible way to make a forecast, by the way.


What they do is methodically unpack the question. They take a big question and unpack it into a whole series of smaller questions, then they unpack those into a series of still smaller questions, and they methodically examine them, each one, step by step by step. Again, this is a very laborious method — a lot of hard mental work goes into it — but it’s demonstrably effective. There’s a famous physicist named Enrico Fermi, one of the fathers of the atomic bomb, who became famous for his ability to estimate things accurately.


He actually taught this method. Fermi estimates basically involve unpacking questions so that you methodically tackle them one after another. People who work in physics or engineering will be familiar with this; Fermi estimates are actually taught in those departments. In fact, to engineers, this is almost second nature, this idea of unpacking the problem and methodically tackling it that way.


This is a bit speculative, but it’s probably not a coincidence that a disproportionate number of the super forecasters have engineering backgrounds — software engineers, computer programmers, whatever. People with engineering backgrounds sort of get this.
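The unpacking method described above can be made concrete with the classic Fermi exercise — estimating the number of piano tuners in Chicago. This worked example is not from the interview, and every input is a deliberately rough assumption; that roughness is the point of the method:

```python
# Fermi estimate: how many piano tuners are in Chicago?
# Every number below is a rough, stated assumption -- the method's
# power comes from unpacking the question, not from precise inputs.
population        = 3_000_000        # people in Chicago, roughly
people_per_house  = 4                # average household size
households        = population / people_per_house
piano_share       = 1 / 5            # say 1 in 5 households owns a piano
pianos            = households * piano_share
tunings_per_year  = pianos * 1       # each piano tuned about once a year
tunings_per_tuner = 4 * 5 * 50       # 4 tunings/day, 5 days/week, 50 weeks/year
tuners            = tunings_per_year / tunings_per_tuner

print(round(tuners))  # 150
```

Each sub-question is easy to estimate within a factor of two or three, and the errors tend to partially cancel, so the final answer usually lands within the right order of magnitude.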


[0:38:35.2] MB: That was fascinating, and I think one of the most important things you said is that it’s not easy. It takes a lot of hard work to make effective decisions or in this particular context, effective forecasts. One of the things that I always say is that there’s no kind of get rich quick strategy to becoming a better thinker. It takes a lot of time, energy, reading, and introspection to really build kind of a robust thought process to improve your own ability to think and make better decisions.


[0:39:05.0] DG: That’s absolutely correct. It also touches on a further factor, which I didn’t mention, which is certainly one of the most important. Which is that these are people who have what psychologists call the growth mindset, which is that they believe that if they think hard, and they work hard, and they practice their forecasting skill, and they look at the results of their forecasts, and they think about how they got them right or how they got them wrong, and then they try again, that they will improve their forecasting skill. Just as you would improve any skill that you practice carefully with good feedback over time.


You might say, but isn’t that perfectly obvious? Doesn’t everybody understand that in order for you to improve a skill, you have to practice it, and the more you practice, the better it will get? Unfortunately, that’s just not true. There’s a psychologist named Carol Dweck who has done an enormous amount of research on this, and she talks about two mindsets. One is the growth mindset that I just described, but the other mindset is the fixed mindset, which is basically the idea that we’re all born with abilities and talents and skills, and that’s all we’ve got.


If I try something and I fail, I’m not going to try it again, because I have demonstrated the limits of my abilities and it would be foolish of me to waste time trying to improve those abilities. That’s why it’s very critical — and we see this clearly in the super forecasters — that they have a very strong growth mindset, and more importantly, they put it into action. They were making their forecasts, they were doing post-mortems, trying to figure out what went right, what went wrong, and why. They were trying to improve on the next round, and they did; there was demonstrable improvement.


It’s very clear that underlying all of this is you have to have some belief in the ability to grow or you won’t engage in the hard work that’s necessary to grow.


[0:41:10.2] MB: Long-time listeners of the show will know that on here, we’re huge fans of Carol Dweck and the book Mindset, and we actually have a whole episode on kind of the difference between the growth mindset and the fixed mindset.


[0:41:22.2] DG: Great.


[0:41:22.1] MB: Breaking out all those things. I’ll include links to both of those things in the show notes for people to kind of be able to dig down and really understand those concepts who may not have heard the previous episodes we have about that kind of stuff. Yeah, I totally agree, I’m a huge fan of the growth mindset and I think it’s critically important.


[0:41:39.8] DG: Yeah, there’s no question that in Phil Tetlock’s research — the super forecasting research — the data very clearly demonstrate that.


[0:41:47.8] MB: For somebody who is listening, what are some sort of small concrete steps they could take right now to kind of implement some of the best practices of super forecasters to improve their own thinking?


[0:41:58.7] DG: Well, the first thing I would say is, adopt as an axiom, because of course as humans, we all have to have axioms in our thinking. Adopt as an axiom that nothing is certain, right? It’s easy to say that in the abstract, but it’s a lot harder to apply it in our lives, because if you stop and you think about your own thinking, you’ll begin to realize that you use the language of certainty constantly, which is normally fine.


I’m sure in this conversation I’ve used “certainly” and that sort of thing. Remember, at a minimum, that any time you say certain or refer to certainty, there’s an asterisk — for all of us, right? The asterisk means “almost,” because in fact, in reality, literally nothing is certain. Not even death and taxes. Once you start to think in those terms, once you make that an axiom, you can start to make it a habit to say, okay, it’s not certain — how likely is it? Think in terms of probability. You know, it’s often said that the ability to distinguish between a 48% probability and a 52% probability, or even a 45% and a 55% probability —


It sounds like a modest thing, but if you can do that consistently, that’s the difference between going bankrupt and making a fortune in certain environments, such as Las Vegas or Wall Street. Learning to think, to make it habitual to think in terms of probability, is, I think, step number one.
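Gardner’s point about telling a 52% probability from a 48% one can be made concrete with a little arithmetic. Here is a minimal sketch, assuming repeated even-money bets with a fixed stake (the bet structure, stake size, and number of bets are illustrative assumptions, not figures from the conversation):

```python
# Illustrative sketch: why distinguishing 52% from 48% matters.
# Assumptions: even-money bets, fixed $1 stake, repeated many times.

def expected_profit(win_prob, n_bets, stake=1.0):
    """Expected total profit from n even-money bets at the given win probability."""
    edge_per_bet = win_prob - (1 - win_prob)  # e.g. 0.52 - 0.48 = +0.04 per dollar
    return edge_per_bet * stake * n_bets

# Someone who can reliably find genuine 52% bets comes out ahead on average,
# while someone who mistakes 48% bets for good ones bleeds money at the same rate.
print(round(expected_profit(0.52, 10_000), 2))  # positive expectation
print(round(expected_profit(0.48, 10_000), 2))  # negative expectation
print(round(expected_profit(0.50, 10_000), 2))  # a true coin flip: zero edge
```

The 4-point gap per bet looks tiny, but multiplied over thousands of bets it is exactly the bankrupt-versus-fortune difference Gardner describes.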


[0:43:32.4] MB: For listeners who want to find you or the book, what’s the best place for people to find you online?


[0:43:38.3] DG: Probably dangardner.ca. That’s .ca, for Canada.


[0:43:45.7] MB: For listeners who might have missed it earlier, the book that we’ve primarily been talking about is Superforecasting. Highly recommend it, as you can tell from this interview. Dan is incredibly sharp about all these different topics. Dan, for somebody who’s listening, obviously, they should check out Superforecasting. What are some other resources you’d recommend if they want to learn more about how to make better decisions and how to make better forecasts?


[0:44:08.1] DG: That’s an easy question. The very first book — in fact, I would recommend it before my own books, which is something authors aren’t supposed to do, but here goes. The very first book folks should read is Daniel Kahneman’s book, Thinking, Fast and Slow. Kahneman is, of course, the Nobel Prize winning psychologist who is one of the seminal figures of our time, and fortunately, he finally got around to — not long after I read all of his papers and learned the hard way — he finally got around to writing a popular book, and Thinking, Fast and Slow is absolutely essential reading. Anybody who makes decisions — whether it’s in business, or in government, or in the military, or anywhere else — anybody who makes decisions that matter should read Thinking, Fast and Slow.


[0:44:54.3] MB: I totally agree. It’s one of my favorite books, and I think one of the deepest, most information rich books about psychology that’s on the market today.


[0:45:03.1] DG: Absolutely.


[0:45:04.0] MB: Dan, this has been a great conversation, and filled with a lot of fascinating insights. Thank you very much for being on the show.


[0:45:11.9] DG: Thank you, it’s a lot of fun.


[0:45:13.4] MB: Thank you so much for listening to the Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I would love to hear from you. Shoot me an email, send me your thoughts, kind words, comments, ideas, suggestions, your story, what the podcast means to you. Whatever it might be. I read and respond to every single email that I get from listeners. My email address is matt@scienceofsuccess.co. Shoot me an email, I would love to hear from you. 


The greatest compliment you can give us is a referral to a friend either live or online. If you’ve enjoyed this episode, please, leave us an awesome review and subscribe on iTunes. That helps more and more people discover the Science of Success. Lastly, as a thank you to you for being awesome listeners, I’m giving away a $100 Amazon gift card. All you have to do to be entered to win is to text the word smarter to the number 44222. Thanks again, and we’ll see you on the next episode of the Science of Success. 

December 08, 2016 /Lace Gilger
Decision Making

Trading Your House For A Tulip, Your Love Life, And What It All Has To Do With Making Better Financial Decisions with Dr. Daniel Crosby

October 27, 2016 by Lace Gilger in Decision Making, Money & Finance

In this episode we explore how you can learn from dating mistakes to make better financial choices, the most expensive words in investing (and how you can avoid them), why simple rules beat or match expert judgment 94% of the time, the importance of focusing on process vs. outcome, and much more with Dr. Daniel Crosby.

Dr. Crosby is a psychologist and behavioral finance expert as well as the author of the New York Times bestseller “Personal Benchmark: Integrating Behavioral Finance and Investment Management” and “The Laws of Wealth: Psychology and the Secret to Investing Success.” He was named one of the “12 Thinkers to Watch” by Monster.com, a “Financial Blogger You Should Be Reading” by AARP, and listed in the Top 40 Under 40 by InvestmentNews.com.

We discuss:

  • How Daniel works to integrate the messiness of human psychology into fields like economics and finance

  • How your emotional state colors your perception of risk

  • How you can learn from dating mistakes to make better financial choices

  • The most expensive words in investing (and how you can avoid them)

  • The insane “tulip” craze and what it says about financial markets

  • Why in our efforts to manage risk we often create the outcomes we are trying to avoid

  • How you control what matters most (often without realizing it)

  • The importance of focusing on process vs outcome

  • Why “you are not special” and how that advice can save you a lot of money!

  • Why simple rules beat or match expert judgment 94% of the time

  • Why really successful people automate their day and free up their cognitive power for more important tasks

  • How to be aware of the biases impacting our thinking and get a second opinion

  • The importance of being “not stupid” instead of being smart

  • Existential boundary experiences and how they can transform you

  • How to break out of the glorified busyness of our daily lives and embrace the inevitability of our own mortality

  • 2 simple and actionable steps you can take right now to improve your personal finance and investment knowledge

  • And much more!


Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that). 

SHOW NOTES, LINKS, & RESEARCH

  • [Book] Superforecasting by Philip E. Tetlock and Dan Gardner

  • [Book] The Dead by James Joyce and Fasano Thomas

  • [Book List] Irvin Yalom Books

  • [Reading List] Nocturne Capital Reading List

  • [Book] Thinking, Fast and Slow by Daniel Kahneman

  • [Book] Nudge by Richard H. Thaler and Cass R. Sunstein

  • [Book] A Few Lessons for Investors and Managers From Warren Buffett by Peter Bevelin and Warren Buffett

  • [Website] Berkshire Hathaway Inc. Shareholder Letters

  • [Book] The Intelligent Investor by Benjamin Graham, Jason Zweig, & Warren E. Buffett

EPISODE TRANSCRIPT

[00:02:24.2] MB: Today we have another exciting guest on the show, Dr. Daniel Crosby. He is a psychologist and behavioral finance expert as well as the author of the New York Times bestseller, Personal Benchmark: Integrating Behavioral Finance and Investment Management, as well as The Laws of Wealth: Psychology and the Secret to Investing Success. He was named one of the “12 Thinkers to Watch” by Monster.com, “A Financial Blogger You Should Be Reading” by the AARP, and listed in the Top 40 Under 40 by InvestmentNews.com. 

Daniel, welcome to the Science of Success. 

[00:02:53.6] DC: Thank you, it’s great to be here. 

[00:02:55.6] MB: Well we’re very excited to have you on. So for listeners who may not be familiar with you, can you tell us a little bit about your background and your story?

[00:03:05.1] DC: Yeah, so I have sort of a varied background. I went to school initially to be an investment manager. After a year in school, I left to go on a mission for my church, so I spent two years in the Philippines. I came back with, I think, a bigger heart than I left with and decided I wanted to go into a helping profession, so I chose psychology. 

About two or three years into a PhD program in psychology, it was getting a little too heavy for me. I was taking work home with me. It was bumming me out talking to sad people all day and so I said, “You know, I love thinking deeply about why people do the things they do but I think I need to look for a business application of behavioral principles,” and so long story short, I’ve landed in sort of this middle ground of behavioral finance, which is a blend of psychology and decision making and finance. 

[00:03:54.2] MB: That’s fascinating. So for listeners who have never heard that term, behavioral finance, tell us a little bit more about that? 

[00:04:00.6] DC: Yes, so behavioral finance is really just trying to integrate the messiness and the irrationality of human decision making into the financial planning and investment management process. It’s hard to believe, I think, for people who come from the outside, but for years and years, hundreds of years, economic models were built on this idea of rational man. So built upon this mistaken notion that people are thoughtful and prudent with their money, which I think we can all point to instances in our own lives when that hasn’t been the case. 

So behavioral finance basically studies the mistakes and the fears and the heuristics that drive decision making, and tries to incorporate them and help people make better decisions. Then on the flip side, some of what I do is, how do you make better investment decisions, how do you pick better stocks by taking the other side of trades where people are being greedy or fearful? So there’s a lot to it, but basically it’s about integrating humanity back into finance. 

[00:05:04.0] MB: I think that’s so important and something that we talk about a lot on the Science of Success is the idea that many fields, and I think economics, finance, etcetera were definitely guilty of this 10 or 15 years ago, really don’t incorporate the actual reality of human psychology into their evaluation of human behavior. 

[00:05:25.2] DC: Yeah, that’s so true, and I mean really, this was done frankly not because anyone believed it per se, because, like I said, I think it’s fairly simple to think of reasons why you could contradict a rational man type theory. But really, I think it was done this way to build elegant, beautiful mathematical models. So your math gets a lot harder, the algorithms don’t get as elegant when you have to plug Joe Six-Pack into the equation, and so it’s not quite as pretty, but it’s maybe a little more realistic. 

[00:06:00.6] MB: So you have a TED Talk where you talk about the concept of understanding money and how people think about money through the lens of love, can you share that idea or explain that? 

[00:06:14.3] DC: Yeah, so I find it to be my life’s mission to make these things more accessible, some of these notions more accessible. Because I have done, I’ll be honest, basically none of the primary research on the things that I write about. It’s been done by people far smarter than me typically in academic settings but what I have done is I’ve taken these ivory tower concepts and have broken them down in a more simplified way that people can understand.

Because I am from Alabama and that’s what we do in Alabama, we make things as simple as possible. So yeah, I have done three TED Talks, and one of them was called Sex, Funds, and Rock and Roll, and in that TED Talk I compare the irrationality of romantic love to the way that you invest or make decisions around your money. I talk about everything from the irrationality of playing the lottery all the way down to things like the way that emotion colors risk perception. 

When you’re in love with someone, the reason we have a 50% divorce rate or whatever is that when you’re in love, you’re not very critical. You’re not a very good assessor of risk when you’re in love, because our emotional states tend to dictate how much risk we do or don’t see in our environment. If we’re feeling great, the world looks great and we don’t tend to see much risk in the world around us, and so in investing and in love, maybe we need to be a little more critical and a little more even-headed, but it’s certainly easier said than done, especially in romantic love. 

[00:07:50.8] MB: And you shared a couple different biases in that talk, one of them was the, as you called it, the “fixer upper bias”. Which is the idea that if you’re dating somebody that you can change or transform them and how that applies to people’s personal finances as well. 

[00:08:03.7] DC: Yes, so the fixer — sort of the analog, I mean, I think we are all familiar with the love part of that equation. You know, we’ve all probably had the experience of dating someone with an eye to changing them or hoping that they would become more the person that we need them to be. The way that that plays out in our investment lives is that we tend to over-invest in things that are proximal to us. So this takes a couple of turns, right? 

One is called the “home bias”, where we find that people dramatically over-invest in stocks of their own country. It’s actually less of a problem in the US than elsewhere, not because we do it any less, but for the simple fact that the US is a bigger part of the world economy than, say, Greece. People in Greece tend to invest in Greek companies, which are only a very, very small part of the world economy. 

Likewise, people in the US tend to be overweight the US economy, which accounts for about half of the market capitalization of stocks globally. So we tend to think things that are closer to home are safer; that’s not always the case. The other way that this applies is that we think that if we work for a company, we can single-handedly make it better. So I spoke with someone recently who had $5 million in one stock. All of their money, $5 million, was all the money they had, and it’s a great deal of money. 

But they had all of the $5 million in one stock because that was the large publicly traded company they worked for, and his thought was, “Why would I spread it around? Why would I diversify when here, I can put it in the company where I work directly?” Well, of course, the irrationality there is you’re one person. You’re the 372nd accountant in this large multinational corporation. You can’t move the needle all that much, but just like with a bad romantic partner, we think that because we’re involved, things will get better by virtue of our involvement alone, and that’s not the case. 

[00:10:07.3] MB: Another bias you touched on, and I found this one really fascinating, was the idea of “this time it was different”. Or I think another term for it might be the concept of “new era thinking”? 

[00:10:17.6] DC: Yeah, so “this time it’s different”, those words have been called the most expensive words in investing. So “this time is different” with respect to romantic love, I talk about Elizabeth Taylor who was married, I don’t know and I can’t remember, four or five times at least and the thought there is that, “Well yeah, those past guys were bad for XYZ reasons but this time it’s different,” and we’re always just plunging forward never taking the time to look back and see what happened. 

So we see this type of new era thinking in investing as well; every major bubble and crash has had it. You know, if you go back to the turn of the century when we had the dot-com bubble, I think the new era thinking of the day was that traditional metrics, like price to earnings and even sales and profitability, didn’t matter, because we were in this brave new world where things like eyeball share and clicks mattered in this sort of new economy. 

And the thing that’s so tricky about new era thinking is that a lot of times it is characterized by half-truths. Because as we know, the internet was indeed a big deal. I mean it did revolutionize life and business in ways that I think we probably couldn’t have even imagined 15 or 20 years ago. But what isn’t the truth is that traditional metrics like earnings and profitability and things would be out the window, right? So a lot of the danger of bubbles and bad economic decision making is that they are half-truths. 

And if you go back in history, if you go back to Amsterdam hundreds of years ago, there was a point in Dutch history where a single tulip bulb was trading for as much as a townhome, and that’s because they were engaged in this new era thinking that says, “Hey, we have this scarce commodity. People will never be sick of tulips. They’re going to appreciate forever, and we’re going to be a very wealthy country.” So we have to check ourselves and say, “Look, there are certain laws of the universe, and these things tend to come back down to earth, and this time may not be so different after all.” 

[00:12:28.5] MB: So in the context of the current financial markets, where do you think that kind of framework applies?

[00:12:35.3] DC: I think we are in a dangerous position right now, because we’ve got two things going on. For a lot of people, and I don’t know how old you are, but I am in my mid-30s, creeping towards late 30s, some of my first experiences of investing were bad. I mean, some of my first experiences as an investor, having a job and having enough money to put a little aside, and then you’re talking 2008-2009. 

So there is that primacy and recency effect, right? So I have an early memory of a very bad time and I think no matter what people’s age, people still are a little gun shy from such a dramatic come down, what is it now? Seven or eight years ago now. But then I think we have the recent past which is seven years of extremely good returns, very little volatility over the last seven years. So people are simultaneously scared because of what happened seven or eight years ago, and spoiled because of seven or eight years past of really nice returns with very little volatility by historic standards.

So I think we’re ripe to be frightened and make really poor decisions the next time the market takes a dip and I mean it will. It will, this is already one of the longest bull markets of all time and it’s really a matter of when and not if and so people need to prepare themselves a bit for the inevitability of that. 

[00:14:08.2] MB: So tell me the story of one of your first consultations as a psychologist. You had a grad student who wanted to become an epidemiologist. How do we create self-fulfilling prophecies that can create negative outcomes in our lives? 

[00:14:24.2] DC: Yeah, so my very first ever client — so my PhD is in clinical psychology, even though I work in a very different field now. I had to get thousands of hours of face-to-face clinical hours with clients, and my very, very first client was a beautiful college student, very bright, very talented, and very intimidating to me as a brand new therapist. So she walks into my room and she brings with her six envelopes and says to me, “Look, here is the story.” I go, “Well hey, what have you got there?” 

And she says, “Here’s the story. I’ve wanted to be an epidemiologist all my life. I’ve always wanted to go get a PhD in this. I’ve brought you these six envelopes because these are the six programs that I applied to to get into a PhD program. They have all written back to me, and I cannot bring myself to open these letters, because if it’s bad news, I’m going to be crushed. I’m going to be just heartbroken by this bad news, because this is what I’ve wanted since I was very young.” 

And so, very inelegantly and inarticulately I’m sure, we sort of worked around, over the course of the next session or two, to the point where I helped her try and understand that oftentimes in life, in our very efforts to manage risk and make ourselves safe, we bring about the certainty of the very thing we’re trying to avoid. So in her efforts to spare her feelings and avoid potential bad news, she was running up against a deadline. You of course have to respond to these schools and tell them if you are coming or not. 

She was running up against the deadline that was going to lead her into the certainty of a bad situation, and as a clinician and as a financial adviser, I see that again and again. I very, very commonly saw people who had been hurt in romantic relationships say, “Well, I am never going to love again, because if I never love again, that’s how I keep from being hurt,” right? And it’s of course very paradoxical, because in the act of trying to avoid the possibility of heartache and loneliness, you bring about the certainty of those very things. 

And I see the same thing in financial markets. People fail to invest, they fail to take the ride and endure the volatility because they are scared of losing money, and it’s very scary, and we all work very hard for our money, but in their failure to do that, they bring about the certainty that they’re not going to be able to retire. You’re losing 3% a year on your money if you are not invested, just because of inflation. And so in love and in finance, I think people try and manage risk too closely, and in their efforts to do so, bring about negative realities that could have been avoided altogether. 
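Crosby’s “losing 3% a year” point is compounding in reverse. A minimal sketch of that erosion, assuming a flat 3% inflation rate (his figure) applied to a hypothetical $100,000 held in uninvested cash (the dollar amount and time horizons are illustrative assumptions):

```python
# Purchasing power of uninvested cash, assuming a constant 3% annual inflation
# rate (the rate mentioned in the conversation; real inflation varies year to year).

def real_value(amount, inflation, years):
    """What the money can buy, in today's dollars, after `years` of inflation."""
    return amount / (1 + inflation) ** years

for years in (10, 20, 30):
    print(f"After {years} years: ${real_value(100_000, 0.03, years):,.0f}")
# Roughly $74k after 10 years, $55k after 20, and $41k after 30:
# doing "nothing" with cash is itself a steadily losing position.
```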

[00:17:21.7] MB: So how can we let go a little bit and not manage those risks so closely? 

[00:17:29.9] DC: You know, in The Laws of Wealth, my new book, I talk about a couple of ways, I think in the first couple of chapters. I think one thing that people can learn is that the title of chapter one is You Control What Matters Most, and I think that’s an empowering message that’s little understood by the average investor. Just a couple of stats on that: a recent study by a big asset manager surveyed financial advisers and then surveyed their clients. 

So of the financial advisers, 83% of them thought that the best thing that they could do for their clients was managing their behavior: helping them manage their emotions and make good decisions. Not picking stocks, not managing taxes, not doing any of this. Managing behavior and decision making was what financial professionals perceived to be the number one value add, and the research, without getting too boring, the research backs that up. 

But then they turned around and asked the clients of these financial advisers, “Is it important to you to get help around behavior and decision making from your adviser?” And only 6% said yes. And so, the market’s given the average investor about eight and a quarter percent a year over the last 30 years, and the average investor has only kept about 4% a year of that, because they’ve entered and exited the market at exactly the wrong times. 

They’ve bought in when things were expensive, they’ve jumped out when things were cheap and scary, sort of rinse and repeat. And so I think people would be helped if they better understood, “Hey, I have more control over this process just by virtue of doing a couple of boring things, like putting aside money every month, staying the course, being calm and collected.” I think the average person thinks it’s in the hands of Janet Yellen or Warren Buffett or the European Central Bank, or just some far-flung, exotic, hard to understand entity. If people understood that they are in more control than they think, I think that would be a positive first step toward them taking back control of their financial lives. 
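The gap Crosby describes (roughly 8.25% a year from the market versus the roughly 4% a year the average investor actually kept) sounds small annually, but compounding makes it enormous. A minimal sketch using those two quoted rates; the $10,000 starting amount and 30-year horizon are illustrative assumptions:

```python
# Growth of a hypothetical $10,000 over 30 years at the two annualized rates
# quoted in the conversation: ~8.25%/yr (the market) vs ~4%/yr (the average
# investor's realized return after badly timed entries and exits).

def compound(principal, rate, years):
    """Future value of principal compounded annually at the given rate."""
    return principal * (1 + rate) ** years

market = compound(10_000, 0.0825, 30)
investor = compound(10_000, 0.04, 30)
print(f"Market return:   ${market:,.0f}")    # roughly $108,000
print(f"Investor return: ${investor:,.0f}")  # roughly $32,400
# A roughly 4-point annual behavior gap costs about 70% of the final balance.
```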

[00:19:37.5] MB: So when you say that they have more control than they realize, is that a focus on the process of investing itself instead of the outcomes? 

[00:19:47.0] DC: Yeah, absolutely. I talk a lot about process versus outcomes in the book, and there’s this great story that I share in the book from a guy who used to work in the LA Dodgers front office, a guy named Paul DePodesta. He was featured in Moneyball. He talks about going out with a friend who had had too much to drink, and they’re playing blackjack one night. His friend is drunk, he has a 19, and his friend wants to hit. 

His friend wants another card, and so DePodesta is like, “Man, you cannot hit. You are sitting on 19, you can’t hit. Don’t do it, stay put.” And so the friend says, “Get lost. I am doing it, I’m going in.” So he hits and he gets a two, and the friend is ecstatic. He’s jumping up and down because he wins a big hand, and he says to DePodesta, “See? You’re an idiot.” And DePodesta makes the point in his article: you can have a good outcome and still be a moron. 

And that’s what I am trying to help people guard against in the book. I give 10 commandments of investor behavior in The Laws of Wealth to just say, “Look, if you manage the process, if you control the controllable, things are going to come out in your favor over time.” The thing about the market is, it’s uncertain and unpredictable in the short term, but people who are process oriented and have a behaviorally sound process win out over the long term. So yeah, a lot of people get in trouble in the market because they have early success for the wrong reasons, just getting lucky, and they end up chalking that up to skill. 

[00:21:24.7] MB: And being process oriented is something that I am a huge fan of, and we’ve actually talked about it in previous podcast episodes. We had an interview with an amazing, insightful guest, Michael Mauboussin, another person in the financial world, about how you can really be process focused. So for listeners who are interested, I would definitely recommend checking that episode out. 

One of the 10 commandments that really jumped out at me, that I thought was really interesting, was the commandment that “you are not special”. Can you tell me about that? 

[00:21:52.1] DC: Yeah, it really goes to being process oriented, because I think a lot of people who get into investment management, or even retail investors, think they have some sort of special edge. You can harken back to the gentleman I mentioned earlier with the $5 million. His special edge, in his mind, was that he had some control over this. I know people who work in tech who invest heavily in tech because they say, “Hey, you know, this is my world. I understand it.” 

And being a great investor is about driving out this idea that you have special knowledge or that the rules don’t apply to you because I, again and again, meet people who understand the rules of investing. I mean simple things like diversification, staying the course, dollar cost averaging, which means putting a little money in each month or each year and they just fail to do it because they think that they’re somehow different. 
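Of the rules Crosby lists here, dollar cost averaging is the most mechanical: invest the same dollar amount on a fixed schedule regardless of price. A minimal sketch with made-up prices (the $100 monthly amount and the price sequence are illustrative assumptions), showing that fixed-dollar buying automatically produces an average cost per share below the average market price:

```python
# Toy illustration of dollar cost averaging: invest a fixed $100 each period
# regardless of price. The prices below are made up for illustration only.

def dollar_cost_average(prices, monthly_amount=100.0):
    """Return (average cost per share paid, simple average of market prices)."""
    shares = sum(monthly_amount / p for p in prices)    # more shares bought when cheap
    avg_cost = (monthly_amount * len(prices)) / shares  # total spent / total shares
    avg_price = sum(prices) / len(prices)               # plain average of the prices
    return avg_cost, avg_price

cost, price = dollar_cost_average([10, 8, 5, 8, 10])
print(f"Average cost per share: ${cost:.2f}")   # $7.69, below...
print(f"Average market price:   ${price:.2f}")  # ...$8.20
# Fixed-dollar buying is the rule doing the emotional work: it buys more shares
# exactly when prices are low, with no forecast or willpower required.
```

The effect is not an accident of these particular numbers: the average cost is the harmonic mean of the prices, which never exceeds their arithmetic mean.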

And this is a very human tendency, to be overconfident, and in fact, the research shows that you are basically either overconfident or you’re depressed. There is not a whole lot of middle ground, unfortunately. So most of us, aside from the clinically sad, have a great deal of overconfidence, and I cite research in the book that talks about 94% of men thinking they’re better looking than average and 100% of men thinking they’re more interpersonally savvy than average. 

Most of us have a vested interest, from an ego and self-esteem standpoint, in thinking that we’re better than average. But bringing that human tendency to the world of investing is very dangerous. I talk in the book too about our tendency to delegate the dangerous and own the optimistic. When we’re asked to rate other people’s likelihood of getting cancer, or getting divorced, or losing money in the stock market, we can do a very good job. 

But when it comes to rating our own likelihood of getting cancer, of getting divorced, whatever, the numbers get very, very skewed, because we don’t see ourselves as clearly as we ought to. And so this is why I think working with a financial adviser, getting a second opinion, having a partner to check your thinking; I think that’s the reason that all of these things are so important in the world of finance. 

[00:24:18.8] MB: It reminds me of that famous study about drivers, right? It’s the same thing that the majority of drivers think that they are above average. 

[00:24:25.5] DC: Absolutely. 

[00:24:26.8] MB: And it also makes me think of something. I previously worked on Wall Street, and one of the things that I always think when somebody tells me that they think they can beat the market or whatever is, “Do you really think that you can beat these hedge funds that have billions of dollars invested in algorithms and data farms of computers that are micro-timing all these trades?” There’s almost no way that you are ever going to actually generate meaningful alpha as a result of what you think is a novel insight that you just saw on CNBC about some company. 

[00:24:58.6] DC: Yeah, I mean it’s a zero sum game and so if hedge funds are winning, someone else is losing by a comparable margin and the odds are it’s you, right? I mean there’s the old saying about “if you get in a card game a few minutes in and you don’t know who the sucker is, it’s you,” right? And I think that the same could be said of investing. 

[00:25:18.6] MB: So you touched on this briefly, but how do we combat that bias or how can we help mitigate some of that overconfidence?

[00:25:26.4] DC: I think that one of the most important ways, one of the things that I advocate for in the book a whole lot, is just being rules based. The book is really, I mean, it’s called The Laws of Wealth and it really is a book of rules. There’s fascinating research in the book (and I just give the whole book away, I guess, at this point), because one of the things that we talk about in the book is how often expert discretion is beaten or matched by just simple rules. 

One of the studies that I talk about in the book is actually a meta-analysis. So it’s a study of all the studies, a study of over 200 studies, on simple rules-based decision making versus human discretion, like you making your own choice. It covers everything from prison recidivism and parole, to stock picking, to making a medical diagnosis, and it found that simple rules beat or match expert, like PhD-level, discretion 94% of the time. 

And so following the rules is such a big deal and so what I’ve tried to do in the book is set forth rules for managing money and managing your behavior and just try to put that on autopilot to the extent possible. I like reading about really successful people and one of the hallmarks of really successful people is that they try and automate their day and free up cognitive room for thinking about more important stuff. 

There’s been a lot of talk about President Obama wearing just two types of suits. He just doesn’t want to think about what he’s going to wear; he’s got bigger problems. And then, I’m from Alabama, so we’ll use an Alabama football one: Nick Saban eats the same thing every day. The same thing every day for breakfast, same thing for lunch, because he wants his mental energy and his time streamlined, and he wants that available to think about other things. 

So I think that investing is one place where the rules beat discretion almost all the time, and that’s one of the best ways to avoid introducing negative emotion into the process. 

[00:27:32.1] MB: And we talked about, in previous episodes, the importance of meta-analysis studies and how valid they are. One of the things that fascinates me is research by people like Phillip Tetlock who talk about how wrong experts are. Can you dive a little deeper on that topic? 

[00:27:49.9] DC: Yeah, so Tetlock wrote a recent book that everyone should check out called Superforecasting, where he refines some of his early studies. But Tetlock’s early work, which really put him on the map, showed basically how bad expert judgment tended to be, and one of the parts that I like about his original work was that he showed that the more popular a pundit was, the less likely they were to be correct. 

So if we think about how a pundit or a talking head comes into notoriety, let’s say in my world of finance and investing, often times it’s by making a dramatic call about sort of an unexpected event. So people who correctly called 2008-2009, if you watched The Big Short, some of those people that profited so dramatically from the housing crisis. So that’s how someone gets famous from making a big improbable call. 

Well, probability being what it is, a lot of times those people keep making large improbable calls and are increasingly off in subsequent years. You saw this with John Paulson, the big hedge fund manager who made more or less the biggest trade of all time, a billion dollars shorting the housing market, and then in subsequent years lost 36% when the market was up double digits. So again, a lot of times people are perma-bullish or perma-bearish. 

They run into a nice opportunity where reality coincides with the thing they’ve been saying for five years, but then those things tend to go away over time. So yeah, Tetlock found that expert judgment wasn’t all that great, found that the more famous an expert was, the worse they tended to be, and also found that most experts were very resistant to feedback about how to improve their processes and had lots of excuses like, “I was too early.” Or, this is my favorite, “My prediction actually changed the course of history. You know, I would have been right, but because everyone listened to what I said, I actually moved the market or changed history, messed up the space-time continuum, as it were.” 

[00:30:03.7] MB: It’s such an important finding because people so often just defer to these experts or authorities, these talking heads, especially in the case of financial news many times, and it’s so critical to be aware of your own biases and understand your own thinking to the level where you can see, “Hey, I am clearly falling prey to some serious bias right now.” Like those experts who come up with ridiculous justifications for why they are consistently totally off base. 

[00:30:31.8] DC: Yeah, and I think this is where we almost can’t do this ourselves. Chapter two of the book is titled You Can’t Do This Alone, and we are programmed not to see our biases. Again, if we think about this optimism bias, it’s in place for a very good reason. I mean, we’re happier people because we have this optimism bias, and if you think about entrepreneurship, if entrepreneurs correctly assessed the probability of having a successful small business, no one would ever start a business, right? It’s only because we have this over-optimism that we see stuff like entrepreneurship, because the odds are crummy. 

So what we need to do is enlist an outside view. We talk about the inside and the outside view. So run your idea by that friend of yours who’s such a good friend that he or she can give you critical feedback and it won’t damage the relationship. In the case of finances, I talk in the book about how people who work with financial advisers tend to do 2% to 3% better per year than those who don’t, and it has nothing to do, frankly, with the financial acumen of those advisers. It has to do with keeping you from doing stupid stuff. So having that trusted outside voice is, I think, the only way. You can educate yourself about the basics of biases, but man, it’s awfully hard to white-knuckle that when you’re in your own head. 
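To see why a 2% to 3% annual edge matters so much, here is a quick compounding sketch. The 7% baseline return and the 3% behavioral edge are illustrative assumptions for the arithmetic, not figures taken from the book or the episode:

```python
# Sketch: how a hypothetical 3%/year behavioral edge compounds over 30 years.
# The 7% baseline and the 3% edge are illustrative assumptions.

def grow(principal, annual_return, years):
    """Compound `principal` at `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

baseline = grow(100_000, 0.07, 30)   # investing alone
advised = grow(100_000, 0.10, 30)    # with the hypothetical ~3%/yr edge

print(f"Without the edge: ${baseline:,.0f}")
print(f"With the edge:    ${advised:,.0f}")
print(f"Ratio: {advised / baseline:.2f}x")
```

Under these assumptions the advised portfolio ends up more than twice as large, which is why the behavioral-coaching claim, if it holds, dwarfs most fee differences.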

[00:32:01.9] MB: The idea of not being stupid is something Charlie Munger, who’s one of my favorite thinkers and Warren Buffett’s business partner, talks a lot about. Both he and Buffett focus on the idea that they’re not setting out to be the smartest and greatest investors of all time. They just want to eliminate bias from their thinking and try to be consistently not stupid. 

[00:32:23.3] DC: Yeah. I think that sort of defensive, first-do-no-harm approach is the hallmark of a good investor, and when I look at my own process, the very first thing I do is screen out stocks for risk. I mean, that’s the very first thing I do. Because a lot of people don’t see risk and return, in finance and elsewhere in life, as opposite sides of the same coin. 

So I am wholly on board with this first-do-no-harm, first-root-out-the-bad-stuff approach to money and to life. I think there is a lot of wisdom there, and like you said, those guys have gotten very rich off what is a decidedly unsexy approach of just buying beaten-down, everyday staple stocks, and it’s worked out extremely well for them, clearly. 

[00:33:12.5] MB: Changing gears completely, you wrote a fascinating children’s book called Everyone You Love Will Die. Tell me about that. 

[00:33:20.4] DC: So I have three kids. I have a seven-year-old, a soon-to-be three-year-old, and then a tiny baby, three months old, and being a dad is the greatest, my favorite thing to do. But one thing I’ve learned with my seven-year-old is that they start to have tough questions. So the other day, she’s asking me about God and the nature of life and evil and why bad things happen to good people and all these different things that her little mind is beginning to take in. 

So we had a friend pass away, and one of the things that I’ve found useful when talking to my kids about everything from the impermanence of life to marriage equality and everything in between is to write poetry. That’s a way that I can communicate with my kids. So I wrote this poem, and the basic gist of it was: there are lots of ways to die, and everyone dies, so you’re here today and so am I. It sounds like a depressing title, Everyone You Love Will Die, and it’s of course meant to be provocative. 

But it is actually a sweet book in practice and the gist of it is look, we’re not here forever so let’s make the most of it and let’s put first things first. Put family first and do what matters first and so I wrote this poem that lists all of these funny ways that people could die and then in the end says, “So hey, let’s spend today together.” So I wrote this poem, I put it on Facebook and a talented friend of mine liked it and sent me all these mocked up drawings of the different humorous ways in which people die in the poem. 

And so she said, “Hey, we should make this a book,” and I said, “Okay, what the heck.” So we put it on Kickstarter. It became the Kickstarter editor’s pick of the day, got funded in 10 hours, and we printed a book. So it was very fun. I actually made no money off of it. It’s obviously hard to get a book called Everyone You Love Will Die published by a big publisher, but it’s one of the professional things I am most proud of. So thanks for bringing it up. 

[00:35:38.8] MB: You know, it is such an important lesson, and something that I think is easy to be intellectually aware of but really hard to internalize and live. For somebody who is listening, how can they snap out of the day-to-day grind of their life and really embrace that lesson that we only have a finite amount of time here and you really have to live your life fully? 

[00:36:04.6] DC: Well, for me, it’s funny, and I know that it’s hard for most people to grasp: I was born on the day that my grandfather died. I am named after him, I look just like him, and I never met him; he died two years to the day that I was born. So I feel like, because of that, I’ve always had a weirdly more acute sense of impermanence than most people. So for me, the things that work are the following: first of all, I really try to read literature that considers the inevitability of death. 

Maybe that’s really heavy for most people, but I find that the inevitability of death does more to energize my life than just about anything else. So for me, literature, art, and movies that speak to that and our fears around it are powerful. And the other thing is, I don’t know what the layperson’s term for this is, but the shrink term for it is an ‘existential boundary experience’. So to explain: let’s say you’re driving and someone’s texting and they almost hit you. 

You’re like, “Holy crap! It was almost over for me there,” and you have this moment, maybe it’s half an hour, maybe it’s 10 minutes, where death is a little closer to you. Or maybe it’s the death of a friend. You have this moment where everything comes into focus and you say, “Look, if I had been hit by that car today, did I do enough?” Like, “Did I tell the people I love that I love them? Did I spend enough time with my family? Did I prioritize work to the exclusion of things that were more important?” 

And I think those moments are fleeting, because you quickly get back to life and busyness. But in those moments, I think you have to journal, catalog them, write them down, make commitments when those moments happen to say, “Hey, I’m going to do things differently,” and have people hold you to those things. Because you’re right, I think we live in a society that glorifies busyness in maladaptive and unproductive ways. I think a lot of us, unfortunately, just stay busy until we pass away, and we leave a lot of life on the table. So I think it’s an important thing to think about, like you said. 

[00:38:26.5] MB: What would an example be of one or two pieces of literature or movies or whatever that you think might examine that topic? 

[00:38:34.9] DC: I just finished The Dead, which is very on the nose, right? It’s part of James Joyce’s Dubliners collection of short stories, and I’d absolutely recommend it. There’s a gentleman by the name of Irvin Yalom, a psychiatrist in California, who writes very beautifully about death and existential boundary experiences. So those are the two off the top of my head that I’ve read most recently that put me in that frame of mind. 

But Yalom is sort of the, in my mind, the Freud or the Jung of our day. He’s probably the best guy doing it right now so he’s who I’d point you to in addition to all the Russian literature and other people who are notoriously good at bumming you out. 

[00:39:26.7] MB: Well, we’ll definitely include both The Dead and a few of Yalom’s books in the show notes. Broadening that question out: beyond The Laws of Wealth, which is a great book that covers a lot of the topics we’ve talked about, goes much deeper into the research, and is an incredibly useful tool, what would you recommend for people who want to do a little bit more research and dig into some of these topics? Where would you suggest they start? 

[00:39:52.2] DC: So I get asked this question all the time. At the risk of plugging my own thing, I came up with my own reading list. If people just Google “Nocturne Capital Reading List” I have all my favorite behavioral finance books, categorized by the sub-category they speak to. Just some of the classics though, off the top of my head: Daniel Kahneman’s Thinking, Fast and Slow is about the best and most comprehensive thing out there. It is a bit of a heavy read, I mean it is a long book, but it is very fascinating. 

Richard Thaler and Cass Sunstein’s book Nudge is about the best around in terms of policy nudging, pushing behavior in a good direction in everything from kids’ school lunches to smoking bans to safe driving, if you are interested in that. And then on the more financial side, I read some of the classics: Ben Graham, the Buffett letters, and things like that. But I have a pretty comprehensive list of 15 or 20 if you just look up “Nocturne Capital Reading List”.

[00:41:04.0] MB: Well we’ll definitely include the reading list in the show notes as well. 

[00:41:06.8] DC: Great. 

[00:41:07.9] MB: So for somebody who is listening here, what is one piece of simple actionable homework you would give them to implement that they might be able to use to improve their personal finances? 

[00:41:19.2] DC: I think there are two; I will double down and give you two. One would be to pick five of the books off of the list, which will be included in the show notes, and read those five books. The interesting thing about investing is that there are such quickly diminishing marginal returns on investment knowledge: if you read three, four, five books, you will have 90% of all the knowledge you need to be a savvy investor, and you could read a hundred more books just to get the next 5 to 10% of the way. 

That’s because I think investing is simple, but not easy. So I think that people would do very well to educate themselves on the fundamentals, and I’ve tried to give a good starter with those books. The second thing I would say is get a financial adviser, and look for someone who charges a reasonable fee and who emphasizes planning, handholding, and behavioral coaching, because the other stuff is honestly a dime a dozen. 

You can get anyone to put you in a well-diversified portfolio; that’s not hard to do. What you really need is someone who’s a good fit and is going to help you get that extra 3% a year that the research says you get when you work with an adviser, by virtue of them helping you make better decisions. So those are the two easy pieces of advice: educate yourself with three to five books, find someone to help take you the rest of the way, and then read books about more interesting things, like the impermanence of life. 

[00:42:53.4] MB: Where can people find you online? 

[00:42:55.6] DC: Twitter, @danielcrosby, and Nocturne Capital, with an E like the music: nocturnecapital.com. 

[00:43:04.6] MB: Well Daniel, thank you so much for being on the show. This has been a fascinating discussion and I have learned a tremendous amount and we’ve really enjoyed having you on here. 

[00:43:12.6] DC: Thank you, it’s been my pleasure. 

October 27, 2016 /Lace Gilger
Decision Making, Money & Finance

How to Out-Think Your Competition and Become a Master Strategic Thinker with Dr. Colin Camerer

September 22, 2016 by Lace Gilger in Decision Making

In this episode we discuss the intersection between neuroscience and game theory, ask whether you are smarter than a Chimpanzee, examine how simple mental judgements can be massively wrong, explain the basics of game theory, and dig deep into strategic thinking with Dr. Colin Camerer. 

Colin is the Robert Kirby Professor of Behavioral Finance and Economics at the California Institute of Technology. A former child prodigy, Colin received his B.A. in quantitative studies from Johns Hopkins University at the age of 17, followed by an M.B.A. in finance from the University of Chicago at the age of 19, and finally a Ph.D. in behavioral decision theory from the University of Chicago at the age of 21. Colin’s research is focused on the interface between cognitive psychology and economics. 

We discuss: 

  • How to out-think (and think one level ahead of) your competition

  • How we make simple mental judgements that go wrong

  • The fundamentals of game theory and how you can practically apply it to your life

  • Are you smarter than a chimpanzee? (the answer may surprise you)

  • The psychological limits on strategic thinking

  • How game theory cuts across multiple disciplines of knowledge from evolution to corporate auctions

  • The concept of a Nash equilibrium and why it’s important

  • The fascinating intersections between psychology and game theory

  • The game theory behind rock paper scissors (and the optimal strategy)

  • Why people don’t think strategically (and why it matters)

  • Discover whether you are a level-zero thinker or a “Level K” thinker

  • How working memory correlates with strategic decision-making and cognitive flexibility

  • The fascinating results behind the “false belief test”

  • How to make strategic inferences from the knowledge that other minds have

  • And much more!

If you want to make better decisions or have always been fascinated by game theory - listen to this episode!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Website] Primate Research Institute Kyoto University

  • [Book] Thinking Strategically by Avinash K. Dixit & Barry J. Nalebuff

  • [Book] Games of Strategy by Avinash Dixit, Susan Skeath, & David H. Reiley Jr.

  • [Book] Behavioral Game Theory by Colin F. Camerer

  • [Book] Thinking, Fast and Slow by Daniel Kahneman

  • [Social] Colin's Twitter

  • [Ted Talk] The Strategizing Brain: Colin Camerer

EPISODE TRANSCRIPT

Today, we have another incredible guest on the show, Dr. Colin Camerer. Colin is the Robert Kirby Professor of Behavioral Finance and Economics at the California Institute of Technology. A former child prodigy, he received his BA in quantitative studies from Johns Hopkins University at the age of 17, followed by an MBA in finance from the University of Chicago at the age of 19, and finally a PhD in Behavioral Decision Theory from the University of Chicago at the age of 21. His research is focused on the interactions between cognitive psychology and economics. Colin, welcome to the Science of Success.

Dr. Colin Camerer:	Thanks for having me, Matt.

Matt:	We’re very excited to have you on here. Obviously, you have a fascinating background. I’d love to hear the story of how you got started.

Dr. Colin Camerer:	Okay. One of the early experiences, actually, was when I was 12, I started to go to horse races [INAUDIBLE:  0:03:28] with my dad and a friend of his who was interested in the stock market. I was fascinated by the fact that 12 horses come out on a race track, and they all look pretty physically fit, and you could buy a big newspaper called ‘The Daily Race Informant’ that tells you all sorts of facts about which horses had won before, and who was the trainer, and what was the sire and the dam—that’s the dad and the mom—and how well they had done. Somehow these markets were able to compress all of this information into a number, which was the odds. So, I was really interested in how that process worked. When I went to college I studied math, and physics, and psychology, and I was kind of searching around for a science that I thought had some mathematical structure, and some real scientific rigor, where it was about people. So, I ended up studying economics. 

Then, I went to graduate school at the University of Chicago to get a PhD, and at that time the popular view about financial markets was that you can’t beat the market, because if there’s any information that’s easy to find about the earnings of a company, or what the CEO is up to, people are highly motivated to find it, and they’ll get it, and they’ll buy and sell and move the price around until the prices are such that there’s no way to easily beat the market based on something that’s easy to find out. That’s called the efficient markets hypothesis. I was kind of skeptical about that because, well, first, a lot of people invest their funds either themselves or with hedge funds, in what’s called active management. People are trying to beat the market, and people are quite happy to pay 1% or 2% of their money in fairly high fees. So, a lot of investors think somebody can beat the market, which the efficient markets hypothesis says shouldn’t be the case. So, I was kind of looking around for something different, and at that time there were a couple of psychologists, Hillel Einhorn and Robin Hogarth, who were at the beginning of a wave of people interested in human judgement and decision making. Their approach was very related to what Tversky and Kahneman later began to study, which was called heuristics and biases. The idea was: maybe instead of making extremely complicated calculations, and using all the information, weighing it just perfectly, people instead use simple shortcuts, like what springs to mind in memory, or what’s visually in front of them on a computer screen. So, that was the beginning of what came to be later called behavioral economics. 
So, I got my PhD, and I was one of the first people to really get a PhD in this field of decision theory, or decision economics, and then I ended up at the Wharton School, where I happen to be today (I mean right now, as we’re talking, not as a faculty member), and they actually were encouraging about the idea of trying to study the psychology...essentially the limits on how much information people can process effectively, how much willpower people have, and how selfish people are, or how much they care about others. None of those things were really incorporated into economic theory at that time, so that was the beginning of what we call ‘behavioral economics’, or kind of psychologizing economic theory. That was around the mid-1980s. 

So, I was interested in a bunch of studies involving psychological shortcuts and how they might make a difference in what people do. One of the things we studied is called ‘framing effects’, which means you can describe something in two different ways, and even though they’re mathematically equivalent, it might either evoke different emotions, or it might change people’s focus of attention so that they treat them differently. For example, the FDA, I think, at one point required salad dressings to label how much fat content they had in terms of percentages, not just on the back. So, suddenly you pick up a salad dressing, and it would say, “6% fat,” or “8% fat,” or “3% fat,” and that’s quite different than if it had said 94% fat free. You know, 94% fat free sounds pretty great. 6% fat sounds more, “Ooh, yuck.” So, even though those are mathematically equivalent statements (6% and 94% add up to 100%), it seemed to shift people’s focus of attention and actually affect choices. Those are the kinds of things we began to study in behavioral economics.

Matt:	That’s really fascinating, and I know that you specifically focus a lot on the ideas around game theory, which to some listeners may seem sort of like an esoteric field of knowledge that doesn’t apply to their daily life, but I’m curious: Could you kind of explain some of the basics of game theory and how it could actually apply to interactions that people have every day? 

Dr. Colin Camerer:	Sure, so game theory’s a very powerful mathematical system. It’s probably most developed in economics, but also a little bit in theoretical biology and political science. So, a game, despite the frivolous name, is a mathematical object: a set of players, each of whom is going to choose a strategy, given some information they have about, say, what’s going to happen in the future, or maybe what the other player thinks, or how valuable something is if they’re bargaining. The players have strategies and information. When they all choose their strategies, there are going to be outcomes. The outcomes could be biological fitness like reproduction, territory in a war, profits for companies, or status for people, or for animals fighting for territory. The mathematics really comes in because we assume that the players can mathematically rank how much they like different outcomes. That whole system is called a specification of a game. What game theory does is to say: if the players have these strategies and these outcomes, which they value numerically, what are they going to actually do? The interesting thing is to what extent players can figure out what other players are likely to do by kind of guessing. I should add that the players could be animals with strategies that are kind of innate, like degrees of aggression. Or they could be much more deliberate. It could be how much a telecom company wants to bid for a slice of phone spectrum that’s being auctioned off by the FCC. That was an actual thing that happened, not only in the US with the FCC, but in many countries where valuable phone spectrum was auctioned off, and tens of billions of dollars were actually bid, so the companies had to decide: What do I actually bid? 
They employed a bunch of game theorists to kind of tell them: given the rules of the game, should they bid this much or that much, and what do you think other people will bid? I don’t want to outbid them by too much and leave money on the table, but I don’t want to get outbid and lose. So, those are the kinds of things game theorists are hired to study. What I brought to the analysis was this: the standard mathematical thing that’s computed, that’s taught in every course, and that’s the homework on the final exam, is what’s called a ‘Nash equilibrium’, named after John Nash. Equilibrium is a word that’s kind of taken from physics, as sort of a resting point. The idea is: in equilibrium, every player has a belief about what the other players will do, and their beliefs are correct, so they’ve somehow figured out what other players will do. In addition, they’re going to choose a best response; they pick the strategy which is the best one given this belief. One way to think of an equilibrium is: suppose you played tic tac toe lots and lots of times, and if you ever made a mistake you corrected it the next time. After lots of play, everyone would know the strategies of the other players, and they would be choosing the best strategy for themselves, and it would be a kind of boring game, but mathematically it would have a nice, precise structure. 
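That "beliefs correct, best response chosen" definition can be checked by brute force in a tiny game. Here is a minimal sketch using the classic Prisoner's Dilemma payoffs (an illustrative textbook example; the specific numbers are standard assumptions, not from this conversation):

```python
# Sketch: brute-force Nash equilibrium check in a 2-player, 2-strategy game.
# Payoffs are the classic Prisoner's Dilemma (0 = cooperate, 1 = defect).
# A strategy pair is a Nash equilibrium if neither player can gain by
# deviating unilaterally, i.e. each choice is a best response to the other.

from itertools import product

# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def is_nash(r, c):
    """True if (r, c) is a Nash equilibrium: no unilateral deviation helps."""
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
    return row_best and col_best

equilibria = [rc for rc in product((0, 1), repeat=2) if is_nash(*rc)]
print(equilibria)  # [(1, 1)]: mutual defection is the only equilibrium
```

The same loop works for any small payoff table, which is essentially what the textbook homework version of "find the Nash equilibrium" asks you to do by hand.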

So, what we started to look at was non-equilibrium, or pre-equilibrium, game theory, meaning: what if people haven’t figured everything out yet? What kinds of things could happen then? I’ll give you a simple example that’s not too hard to think about numerically, which we call ‘the beauty contest game’. Let me explain it first, and then I’ll say where that name comes from. In this game, which we’ve actually done in lots of experiments for money, everybody picks a number from 0 to 100, and we’re going to collect the numbers on a piece of paper, or you’re going to type them into a computer, or you’re going to send them on a postcard to the ‘Financial Times’ (who actually did this a few years ago). We’re going to compute the average number and take 2/3 of the average. Whoever is closest to 2/3 of the average is going to win a fixed prize. So, everyone wants to be a little bit below average, knowing that everyone else wants to be a little bit below average. If you figure out the mathematical equilibrium (this is the kind of thing that would be on a final exam in a course), the equilibrium is the only number such that, if everyone believed everyone else would pick it, they’d all be best responding and their beliefs would all be correct. That’s zero. When you actually do the experiment, what happens is you get a bunch of people who pick numbers anywhere from 0 to 100: 60, 40, you know. Let’s say the average is around 50. There are a number of other people who seem to think: I don’t know what people will pick. It could be anywhere. So, let’s say they’ll pick 50, so I’ll pick 33, which is 2/3 of 50. If you think others are going to randomly choose in the interval of numbers, and you’re trying to match 2/3 of the average, you’ll pick 33. Other people do what we call a ‘second level of thinking’. This is something that’s called the ‘level-k model of behavior’. 
They’ll say, “Well, I think other people will think other people will pick 50, and those people will pick 33. I’m going to outguess them and pick 22.” You can do a couple more steps of thinking, but at some point you’re being, as the British say, too clever by half. If you actually play this game and you pick 2/3 of 22, or 2/3 of 2/3 of 22, you’re actually picking a number that’s too low, because you don’t want to pick the lowest number, you want to pick 2/3 of the average number. So, typically what you see is an average number around 33 or 22, which is far away from the Nash equilibrium prediction that everyone will somehow figure out how to pick 0. That’s an example of where psychological limits on strategic thinking give you a better prediction of what people actually do. By the way, as you can imagine, if you play this game again and again, what happens is: the first time you’re playing in a group, the average might be 28, and 2/3 of that is about 19. So, the winner is Matt Bodnar, who picked 19, and everyone cheers that, and next time they think: Wow, I should pick maybe 2/3 of 19, or maybe I should think other people will pick 2/3 of 19. So, if you do it over and over, you do get numbers that move in the direction of the Nash equilibrium prediction. The idea of an equilibrium is often a good model for where a system with a lot of feedback and learning from trial and error will move over time, but it isn’t necessarily a good prediction of what will happen the first time you play, even if it’s for very high stakes. These games have often been done with different groups of people, and it doesn’t seem to make that much difference if you are really good at math, or if you’ve played chess a lot, or anything like that. Most people will pick numbers somewhere between, say, 10, or 15, or 22, or 33 the first time they play. 
So, we’ve developed a theory of that type of thinking called ‘level-k reasoning’, which has these kind of steps of thinking. The main idea is: the steps don’t get that far. There’s a little bit of strategic thinking, but it’s limited.
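The level-k steps Colin describes reduce to repeatedly taking 2/3 of the previous level's choice. A minimal sketch, assuming level-0 players average 50 (the standard assumption in this example):

```python
# Sketch of level-k reasoning in the 2/3-of-the-average "beauty contest" game.
# Level 0 is assumed to guess randomly with mean 50; each higher level
# best-responds to the level below by taking 2/3 of that level's choice.

def level_k_choice(k, level0_mean=50.0):
    """Choice of a level-k player: 2/3 applied k times to the level-0 mean."""
    choice = level0_mean
    for _ in range(k):
        choice *= 2 / 3
    return choice

for k in range(5):
    print(f"level {k}: picks {level_k_choice(k):.1f}")
# prints 50.0, 33.3, 22.2, 14.8, 9.9 -- stepping toward the
# Nash equilibrium of 0 as k grows, but real players rarely go past k = 2 or 3,
# which is why observed averages cluster around 33 and 22.
```

Picking 2/3 of 2/3 of 2/3 of 50 is already "too clever by half" in Colin's sense: you undershoot the actual average the room produces.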

Matt:	That makes me think of a couple things. One is: when I initially heard the beauty contest game, or I guess I also coined it in my mind ‘the 0 to 100 game’, I was too clever by half, because my initial guess was the number one, which, as you showed in some of your research, was a terrible guess, because people don’t adjust close enough to the equilibrium to make that meaningful. The other thing is: it was a sad day, I think it was in 7th grade for me, when my buddy and I discovered that there are only like three or four moves in tic-tac-toe, and basically every single game should end in a cat’s game, a tie.

Dr. Colin Camerer:	Yes. One thing that’s interesting is: some of the games that are actually really fun to play, like rock paper scissors, which is similar to tic-tac-toe (it’s a simple game and you can kind of figure it out), are kind of boring from the point of view of mathematical analysis, but they’re not that boring to actually play. Probably, it’s because people aren’t always in equilibrium, and they’re trying to chase patterns and see things that other players are doing. If you were to design video games, or a game show on TV, it’s not clear that equilibrium game theory would be as helpful as something that would incorporate more of a concept of human nature, and fallibility, and what’s fun and engaging.

Matt:	I’m curious, actually, that makes me think of another question: rock paper scissors, is that a game, from a game theory standpoint, that has an equilibrium? 

Dr. Colin Camerer:	Yes. Actually, one reason [INAUDIBLE:  0:16:10] equilibrium is very powerful (and John Nash shared the Nobel Prize for this discovery) is that you can show mathematically that if a game is finite, in other words there are not infinitely many people bidding or playing, and they only have so many strategies they can choose, like rock paper scissors, or so many numerical bids in an auction, even if it’s billions, as long as it’s not infinity, then there always exists an equilibrium. In rock paper scissors, the equilibrium involves what we call ‘mixed strategies’. That means that if you play rock every single time, that’s not a best response, because someone will figure it out and beat you with paper to cover rock. So, the only equilibrium is one in which people choose rock, paper, and scissors about 1/3, 1/3, 1/3 of the time. Again, when people play, what happens is that usually people won’t play in that explicitly random way, although you could (it wouldn’t be very interesting), and then people try to pick out patterns: “Can I predict what you’re going to do next time?” Associated with this is the fact that, roughly speaking, people are not good at randomizing. If I tell you: imagine flipping a coin 100 times in a row, and write down a series of what you think 100 coin flips might look like, people are actually not that good at generating a truly random sequence. The main thing is they kind of over-alternate. So, if you wrote down head, head, head, you’d feel you should write down tails next, so your streaks end up too short: you get strings of only a couple heads and a couple tails, which means too many runs. In a truly random sequence of 100 flips you should have about 50 runs, and usually people produce about 65 runs. 
In my cognitive psych class I used to do this: I’d turn around and ask half of them to actually flip a coin, and half of them to simulate/imagine doing it, and then I would ask them to hand in their index cards, and I would see if I could tell whether each card was human-generated or truly random. So, people aren’t typically—unless there’s special training or special tools—that great at randomization. 
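The runs statistic described above is easy to check with a short simulation. A minimal sketch in Python (the ~50-run expectation for 100 fair flips is standard probability; the code is illustrative, not from the show):

```python
import random

def count_runs(seq):
    """Count maximal runs of identical outcomes: 'HHTTH' has 3 runs."""
    runs = 1
    for prev, cur in zip(seq, seq[1:]):
        if cur != prev:
            runs += 1
    return runs

# A truly random sequence of 100 fair flips averages about 50.5 runs
# (1 + 99/2). Over-alternators produce many short runs, hence MORE
# runs overall -- roughly 65, as Camerer notes.
random.seed(0)
sims = [count_runs([random.choice("HT") for _ in range(100)])
        for _ in range(10_000)]
avg = sum(sims) / len(sims)
print(round(avg, 1))          # close to 50.5
print(count_runs("HT" * 50))  # perfect alternation: 100 runs
```

Hand-written “random” sequences scored with `count_runs` typically land well above 50, which is exactly the tell Camerer could use on his students’ index cards.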

Let me backtrack to one other thing about game theory. Another practical application that we studied, that everyone, I think, can resonate with, or appreciate, involves what’s called a ‘private information game’. So, private information is a wrinkle which you don’t have in rock paper scissors, and you don’t have in the 0 to 100 game, which is that one person knows something the other people don’t know, but everyone knows that there’s private information. For example, the kind of game we studied—and here we go away from the simple clear games in the lab to the messy world—involved movies. The idea is: we assume that the people who produced the movie, and have watched it, and have seen the entire movie, not just a short trailer in an ad, or a short clip that you might show on a TV show for promotion, have a better idea of the quality than movie goers. So, if people have seen it they can say, “This is, on a 0 to 100 scale, this is going to be an 82,” or, “41.” What we studied we called ‘cold opening’. From about 2000 to 2009 we looked at all movies in the US that opened on a lot of screens, meaning 300 or more screens, so that didn’t include some smaller, independent films, but most of the movies are in our sample, and about 10% of the time the movies were not shown to movie critics in time for them to write a review. In the early part of our sample, in 2000, this was in a newspaper like ‘The New York Times’, or ‘The LA Times’, or ‘The Chicago Tribune’. Nowadays, the newspapers have become a lot less influential because trailers leak, and Rotten Tomatoes, and lots of other websites are influential in sharing their opinions about what movies are good. During that part of our sample the newspapers were kind of a big deal. 
So, about 10% of the time the movies were not shown to critics, so that there was no review, and the way you can tell is: if you open up the Friday newspaper—again, this is kind of historical—in Los Angeles, you would see an ad for say, “Ondine”, which was a Colin Farrell movie, and it would have a bunch of blurbs that would say, “Marvelous. Four stars,” from Manohla Dargis in ‘LA Weekly’, for example. So, the way those stars got there was that a version of the movie was sent to the critics a couple days early, and they would prepare the reviews, and then they would give them to the studios so they could put them in the print ads on Friday. So, in the Friday paper you’d see a print ad that had—if it was flattering—a quote from a critic, and then in the same section of the newspaper you’d see the critic’s review that would say, “I loved this movie, Ondine.” Meanwhile, the movie, “Killers”, with Katherine Heigl and Ashton Kutcher was not shown to critics in time, so if you opened up the print ad, there’s a picture of the two stars, “Killers”, the name of the director, and it has no quotes from critics at all because critics weren’t allowed to see it. Of course the obvious intuition is: the critics are going to tell everyone how terrible the movie is and then people won’t go see it, but game theoretically, that’s actually a little bit surprising, because movie goers should be able to infer: if there’s no review it’s probably because it’s really bad, and they didn’t give it to the critics. In this case, no news is bad news. If you don’t see a review it’s probably because when the reviews eventually come in—usually those movies are reviewed later, like on a Monday or a Sunday—they’re going to be pretty bad. In fact, empirically that’s what happens. 
So, we collected data from Metacritic...Metacritic is a great website, by the way, which averages about 5 to 20 or 30 different critical reviews, and you get a beautiful little Gaussian, normally distributed curve where most movies are around a 50 on their 0 to 100 scale. If a movie is a 25 or below, which included the Ashton Kutcher and Katherine Heigl movie, then the chance of not showing it to critics is much higher. So, if you don’t see a critic review, and you kind of knew about the statistics that we had gathered, you should say to yourself, “A lack of review is the same as a bad review,” basically. We took our theory of level-k thinking, and the level-k theory says: some movie goers are just kind of naïve. Those are like the people who pick 50 in the 0 to 100 game. They’re just not thinking strategically: “Well, wait a minute. What are other people going to pick? Because I should be responding to them.” So, the naïve movie goers say, “I didn’t see a movie review. That doesn’t mean anything. It’s probably kind of average,” and actually it’s not average. If there’s no review, statistically, it’s below average. The way we could tell that the movie goers were being naïve was: if you write down some very fancy math, and look at the statistics, your prediction is that the movies that aren’t shown to critics will earn about 10% or 15% more than they really should, given their actual quality, because people are naively guessing the quality is much better than it is, and too many people will go to those movies. So, we looked at all the data, and did a very careful statistical analysis, and it turned out to be consistent with this theory that there’s some degree of movie goer naivety, and the result is: if you make a bad movie, don’t show it to critics, and your movie will make about 10% or 15% more than it really should, because you’re fooling some of the people some of the time.
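The “no news is bad news” inference lends itself to a small simulation. This sketch uses made-up numbers (quality drawn from a normal distribution around 50; cold-open rates of 65% for movies scoring 25 or below and 6% otherwise); it is not the study’s data, only an illustration of why the average quality of unreviewed movies falls below 50:

```python
import random
import statistics

random.seed(1)
qualities, cold_opened = [], []
for _ in range(100_000):
    # Metacritic-style 0-100 quality, clustered around 50 (assumed).
    q = min(100.0, max(0.0, random.gauss(50, 15)))
    # Studios cold-open bad movies far more often (rates are made up).
    cold = random.random() < (0.65 if q <= 25 else 0.06)
    qualities.append(q)
    cold_opened.append(cold)

overall = statistics.mean(qualities)
no_review = statistics.mean(q for q, c in zip(qualities, cold_opened) if c)
print(round(overall, 1), round(no_review, 1))
# A naive (level-0) movie goer treats "no review" as average quality;
# a strategic viewer conditions on it and expects a much worse movie.
```

With these assumed rates, roughly 9% of movies are cold-opened (matching the “about 10%” in the study), and their conditional average quality lands well below the overall mean of 50.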

Matt:	That’s fascinating. I’d love to dig in a little bit more. Explain, or kind of tell me more, about the concept of level-k thinking, or the level-k model of behavior.

Dr. Colin Camerer:	The basic idea is: we’re going to assume that, whether it’s IQ, or practice playing games, or how motivated people are by an experiment, or by figuring out what the movie studios are doing, we can kind of sort people into types. Some people are not very strategic. Those are what we call ‘level zero’, and that means we think what’s going on is that they’re picking a salient, simple strategy. Maybe something just pops out, or they’re exhibiting naivety, like the movie goers. They open the ad, or they see an ad on TV, and the ad doesn’t have any critic information, and they don’t notice that there’s no critic information. So, they assume that no critic information is kind of like ‘average’. Then, level one players are players who think that other people are level zero. So, in the 0 to 100 game that we talked about earlier, those are people who think, “Other people have no real clue. They’re going to pick numbers around 50, like lucky numbers, or their birthdays, or something like that, and I’m going to pick 2/3 of that.” So, these players are being a little more clever because they have a concept of what others will do and then they’re responding to it. These would be something like a movie goer that says, “Gee, if the studio is smart then they’re not going to show their worst movies to critics, but I can’t tell beyond that how smart they are, or how bad the movies are.” Level two players think that they’re playing level one players. So, they’re going to pick around 22, which is 2/3 of 33, and so on. So, you can write down a kind of sequence of these types of players. The zeros choose something that’s kind of focal or random. The ones think they’re playing zeros and they respond. The twos think they’re playing ones and they respond. Usually zero, one, and two are the only levels you need, although conceptually, in principle, people could be doing three steps, or four steps. 
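The level hierarchy in the “pick 2/3 of the average” game reduces to one line of arithmetic: each level multiplies the level-zero anchor of 50 by another factor of 2/3. A quick sketch:

```python
def level_k_guess(k, anchor=50.0, factor=2 / 3):
    """Level-k guess in the 'pick 2/3 of the average' game:
    level 0 anchors at 50; each level best-responds to the one below."""
    return anchor * factor ** k

for k in range(4):
    print(k, round(level_k_guess(k), 1))
# level 0 -> 50.0, level 1 -> 33.3, level 2 -> 22.2, level 3 -> 14.8
```

As k grows, the guesses shrink toward 0, the Nash equilibrium of the game; the point of the level-k model is that real populations mostly stop after one or two steps.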
You might get that sometimes in a very complicated novel, or a spy novel, where, “I think that he thinks that she thinks,” and there are double agents and things, but usually mentally it’s kind of overwhelming. It sort of boggles the mind to think more than two or three steps of reasoning. We’ve applied this type of framework to movies. It’s also been used to analyze some managerial decisions, like when managers will adopt a new technology. That depends on how many other managers they think will adopt it, and how many managers those managers think will adopt it, and so on. 

We’ve also used it to explain a lot of different experiments we’ve run in the lab where the games are much simpler. You can often see...in fact, literally we can see, say, if we put different numbers on a screen which [INAUDIBLE:  0:26:07] the payoffs from choosing different strategies. If you’re a level zero player you’ll look at certain numbers and ignore some numbers. If you’re a level one player you’ll look at what the other player’s payoffs are. If you’re a level two player you look at everything. So, the more levels of thinking you do, we can tie that directly to what you look at on the computer screen, and we use a measurement technique called ‘eye tracking’, which is basically a tiny camera that looks at your eye. If your eye moves a little bit, like to look at the left part of the screen instead of the right part of your computer screen, the camera’s sensitive enough to see where you’re looking. It can locate where your eye is looking on a computer screen to within about the size of a quarter coin. So, if we put the payoffs on the screen in a certain way we can detect, to some extent—not perfectly, but roughly—who’s doing two steps of thinking, because there’s certain information they like to look at in order to figure out what to do. So, a combination of eye tracking [INAUDIBLE: 0:27:04] experiments has given us an idea. And basically, I should add, typically we estimate for, say, college-educated student populations that something like 10% or 20% of people are level zero; they just aren’t really thinking it through at all. Maybe 40% are level one; that’s the most common. And maybe 30% are level two. Sometimes you’ll see what looks like much higher level thinking—level three or level four.

Matt:	The first thing that makes me think of is poker, and longtime listeners will know that I’m a big poker player. We’ve previously had some guests on here talk about some of the psychological elements of that game. Poker’s a great example of a game where, depending on what level of thought your opponent is at, you have to adjust your thinking and play one level ahead of them; but if you play two or three levels ahead of them, it can end up backfiring, kind of like picking 1 in the 0 to 100 game.

Dr. Colin Camerer:	Every so often I think we should try to get a grant and study poker, or just study it, because from a game theory point of view it actually hasn’t been studied very much. Although, early in the history of game theory—some of your listeners will know that the seminal book on game theory [INAUDIBLE:  0:28:13], in the 1940s. It’s somewhat weird in social science that someone writes a book and really creates a whole field, but their book really did. There was some earlier research that they had built on, but their book really made a big splash. They actually have a chapter on poker, but it’s a super simplified version in which you basically get a high card or a low card, and there’s one round of betting. So, they picked a simple enough example that you could fully analyze it and see what’s happening. Of course what makes real poker so interesting is that, you know, there’s some mathematics. You have to kind of figure out how strong your hand is, but it also depends, as you said, on what strategy you think the other player is going to play. Are they going to play tight and only bet when they have great cards? Are they going to bluff more? People who play poker a lot often talk about building a kind of model of the opponent, which is essentially asking: what level is this person playing? [INAUDIBLE:  0:29:11] be pointed out, much like in the 2/3 of the average game, if you kind of overplay your opponent—for example, if you think they won’t fall for a bluff, you may not bluff enough. So, you’re kind of leaving money on the table. It’s also a cool game from a psychological point of view, because if you play face-to-face you may have all kinds of information conveyed by facial expressions, which is something that neuroscientists have studied for a long time, including with animals. Of course, there’s the famous poker face, which is related to what we call ‘emotional regulation’. 
You know, you have really great cards, and you don’t want to show that in your face, or you have terrible cards and you don’t want to show that in your face. Hence the concept of tells: only a certain amount of emotion can be well-regulated by us. So, unless you’re a sociopath, or a fantastic actor, it may be hard to control your emotions fully. So, somebody who watches you for hours and hours can figure out what your tell is when you have terrific cards, and might be able to infer your hidden information, or what we call ‘private information’ in game theory, from what’s on your face, or from your fingers tapping, or brushing your hair, and so forth. 

Matt:	I personally definitely would advocate you studying poker. I think that’d be fascinating, and I’d love to dig into that research at some point.

Dr. Colin Camerer:	Usually, and especially at places like Caltech, we have a lot of freedom to study what we're interested in, and the nice thing about poker is I don't think we'd have any trouble getting volunteers to play. And, of course, there's lots of online data. There's no shortage of interest in, and ways in which you can dig into, poker as a neuroscientific, psychological kind of test bed. And, of course, probably a lot of the basic processes, you know, like bluffing or mind-reading or face-reading, happen in other kinds of things, like bargaining, and other things that are important in political science and economics and everyday life.

Matt:	I'm curious going back a little to the level-k model of behavior. Why do you think people get stuck in level one or level two of strategic thinking?

Dr. Colin Camerer:	Well, one variable that doesn't predict perfectly, but is correlated--the correlations are around 0.3 or 0.4, where zero is no correlation at all and plus-one is perfect, and in these kinds of social science data we rarely get plus-ones, so 0.3 or 0.4 is not too bad--is working memory. Working memory is basically, you know, I read you a list of digits--four, three, four, six, one--and then you have to quickly remember how long the list was and get the digits correct in order. Some people can remember five or six digits; that would be a pretty short working memory span. Some people can remember eight or nine. And working memory--how many things you can kind of keep track of--turns out to be a pretty good, solid but modest correlate of lots of types of intelligence, of the ability to be cognitively flexible, and also of the number of steps of reasoning you take. So, people with more working memory tend to make choices that are consistent with level two reasoning. So, if I looked at the zero to 100 game, and I looked at people picking around 50 and around 33 and around 22 or lower, or someone like you picking one--which is a good guess if you're playing highly sophisticated people, but not for the first time--you probably would get a nice, modest but positive correlation between the number of things people can keep in mind, like numbers, and how many steps of thinking they do when they're thinking about games. 

Matt:	Changing gears a little bit, I'm curious. One of the things you talked about, and you may have touched on this earlier, is the idea of the theory of mind circuit. Can you extrapolate on that a little bit?

Dr. Colin Camerer:	Sure. So, this is an idea that actually came originally from animal research starting in 1978. There were some beautiful but very early studies with chimpanzees, and the primatologists--Premack and Woodruff and others--were interested in whether chimpanzees have an idea that another animal could be thinking about something differently than they are. And so, shortly after that, some philosophers actually suggested a really clever test for theory of mind, which is called the false belief test. Often it's done with children, with a kind of storyboard, or you could make a little video. But I think I can...hopefully I can describe it well enough that people can get the idea, or they can Google and learn more. And the false belief test [INAUDIBLE 00:33:50], so you see a little cartoon storyboard. Sally-Anne goes into the kitchen and takes a cookie out of a cookie jar. She leaves. Her mom comes in and takes the cookies out of the cookie jar for some reason. Maybe they're melting because it's hot, like it is now in Philadelphia, and she puts the cookies in the refrigerator and closes the cookie jar lid. And, of course, Sally-Anne doesn't see that, because she went outside. The mom leaves. Sally-Anne comes back. The question is, where does she look for the cookies? And so, if you follow the storyboard, you know that the cookies are in the refrigerator, but if you have theory of mind, you have the capacity to know that Sally-Anne thinks the cookies are in the cookie jar, because you saw something--the cookies being moved from the cookie jar to the refrigerator--that you know she didn't see. And it turns out that when children are two or three, they will typically say, "Oh, she'll look in the refrigerator." And the reason is the kids know something--namely, where the cookies are--and they can't imagine that somebody else doesn't know it. So, they think Sally-Anne will look in the refrigerator. 
They assume Sally-Anne must know there are cookies in the refrigerator. So, they're not able to maintain a concept of something being true--where the cookies are--while somebody else has a false belief. And, as the kids get older, typically around five years old... And this is a very solid finding from many different cultures, and it doesn't seem to matter whether the kids are illiterate or in a developing country. There have been studies on several different continents, including Africa and Australia, and at around five years of age, the kids realize, "Oh, you know, I know the cookies are in the refrigerator, but Sally-Anne thinks they're in the cookie jar." And so, that's the correct answer. So, this test, and a number of other ones, have shown that there seems to be a somewhat distinct mental circuit called the mentalizing, or theory of mind, circuit. It involves dorsomedial prefrontal cortex, which is sort of right in the center of your forehead, maybe an inch or two above your eyebrows; the temporoparietal junction, which is kind of back by the temple; areas in what's called the medial temporal lobe; and also regions of cingulate cortex, which is a part in the center of the brain. 

	And so, another way to study mentalizing--shifting to the neuroscience--is what some colleagues of mine have developed, called the why-how test. And so, you might show, for example, a picture of somebody inserting a screwdriver into a toaster oven, and the how question is, "How are they holding the screwdriver?" Well, left hand, right hand. And that doesn't really require any theory of mind. It doesn't require you to think about the intention of the person or what's in the person's head. It's just a physical activity. So, that does not require theory of mind. The why question is, "Why are they using a screwdriver in the toaster oven?" And the answer might be that it's broken, or they're trying to get the toast out, or something like that. That requires mentalizing. It requires you to think about the person's intention--why they are motivated to do things in that way. And so, if you show people a series of why questions and a series of how questions, and you ask which areas of the brain are differentially active when they're figuring out why versus how, you get a nice clear map of what's called this mentalizing network. And a few studies have linked that to game theory: people who are doing more strategic thinking, like picking a lower number in the zero to 100 game, or presumably other games, or people who say, "Wow, there was no movie review. That's probably bad news, because I think the studios know if it's good or not, and if it's bad, they don't show it to critics." So, they're making a strategic inference about the knowledge that another mind has--in this case, the studio. And so, there's some evidence that more activity in this mentalizing region is associated with more strategic thinking, in terms of these level-k steps. 
Some of your listeners, again, will know that one of the reasons people became very interested in this mentalizing circuit is that children who are autistic tend to be slower to get the right answer in false belief tasks, and the idea is that part of autism is--not necessarily a full inability, but--a kind of weakness, or what clinicians call a deficit, in the ability to think that other people know things or think things that are different from what you know. So, a weak theory of mind is thought to be associated with autism. That's somewhat debated, because these things are never quite that simple, but the first couple decades of research, I think, are pretty solid about the existence of theory of mind and mentalizing and where it seems to be in the brain. And some of the medical questions about autism are a little more up in the air.

Matt:	You mentioned chimpanzees. Tell me a little bit about the strategic differences between human and chimpanzee brains, and are we smarter than chimps?

Dr. Colin Camerer:	So, we've done a little bit of work on that, and first, any time you work with animals--and the same thing with children, actually--it's harder to make very solid conclusions, because we can't ask the chimpanzees questions and we're never absolutely sure that they understand what we're trying to do. And also, the chimpanzees are usually motivated to do experiments by little cubes of food. So, if they're just not hungry, they're going to look like they're dumb. But it's not that they're dumb; it's that they're not competing for a reward. So, subject to that caveat, my collaborator [INAUDIBLE 00:38:58], who works in Japan, has a theory he calls the cognitive trade-off hypothesis. And the idea is a very simple one evolutionarily, which is that in the chimpanzee's natural ecology, it's really important for them to be able to play hide and seek games, to keep track of predators and prey, and to do certain kinds of rudimentary strategic thinking. So, for example, if a bunch of fruit falls from a tree, it's really helpful if they can keep track of where the different pieces of fruit might've gone and where they are. And that takes a certain kind of working memory, right? Instead of a string of digits like we talked about earlier--one, six, seven--the working memory that the chimps need is spatial working memory. You know, where did all this stuff go? And if they can do that better than other chimps, they can run and get food more quickly. And there is some evidence that, especially with training, the chimpanzees are really good at spatial working memory, and the way he tests it experimentally is to show them a bunch of numbers on a screen, like 1, 4, 3, 2, 6, in different places on the computer screen, for 200 milliseconds, which is very quick. You can just barely see the numbers. 
And then the numbers disappear and are replaced by black blocks, and in order to get a food reward, the chimp has to press the black blocks which correspond to the numbers, in order. So, wherever the digit 1 was originally, he has to press that box first, and then if the next digit was 2, he has to press that, and if the next digit was a 4, he has to press that. And on the website of the Primate Research Institute, called PRI, you can see some videos of this. The highly-trained chimps, who do this thousands and thousands of times, get really good at it. With 200 milliseconds' exposure and a lot of training, with five or six digits in a sequence, they can get about 80 or 90 percent correct. And people actually really aren't as good, although it's a little controversial, because it's hard to get human beings to do it for 10,000 trials. So, there are very few cases where people have been as trained as the chimpanzees. Anyway, that motivated the idea that maybe the chimps are actually just really good, better than us, at keeping track of sequences of information that resemble something like fruits falling in the forest--something that's useful for them and their adaptation. And, by the way, the cognitive trade-off part comes in in the following way. The chimps are basically kind of like kids up until age two or three, and a lot of the play they do--chimps with chimps, kids with kids--is, you know, play that's kind of like practicing for strategic interactions or games that probably had some adaptive value as they were growing up. So, they play hide and seek, or--status dominance is very important for the chimps--they'll kind of wrestle and play fight to see who's stronger. And the difference in humans is, once children start to talk, a lot of their mental attention and probably brain matter is now largely devoted to this amazing tool which is called language. 
And also, children at age two or three or four will shift over to what's called group play. Little kids just play by themselves: you get a bunch of kids in a room, and they're all sitting and playing completely independently, like little assembly line workers. When they start to talk, then they can start to play much more interesting games that involve talking to one another and bluffing and things like that. But the chimps never advance to that next stage. So, in a way, the chimps get a lot more practice in their playtime in games that may require a certain kind of working memory, like hide and seek. "Where did that person run off to? I'm going to go look for them there." Or, "Where did somebody hide last time? I'm going to switch to a different location so that they'll go to the old location and not the new one." 

And so, [INAUDIBLE 00:42:41] hypothesis is that the chimps get this kind of endless childhood of practice in games that involve working memory and hide and seek. And so, we actually did some experiments with chimpanzees where they don't actually play hide and seek, but they see a little computer screen. It's basically an iPad with gorilla glass, or chimpanzee glass, so they can't smash it, and a little light comes up and you either press on the left or the right. And there are two chimpanzees actually next to each other in a glass cubicle, and for various reasons, we used mother and offspring pairs--so it's like a mother and a little son, another mother and a little son, and one mother and a little girl. And one of the chimpanzees is the hider, which means they want to pick left when the other one picks right. And so, they both see two separate screens, and they're picking at the same time. The hider gets a food reward if they mismatch: "If I hide, I pick left, you pick right. Ha, you didn't catch me." The seeker gets a food reward if they match. You know, so if they both choose left, food reward for the seeker; the hider gets nothing. And, when they play this game hundreds and hundreds of times for food, two things happen which are interesting. One is that they seem to do a better job of keeping track of what the other chimpanzee has done in the past and then responding to that. So, if you're a seeker and you see the other guy has picked left, left, left, you switch to left more quickly. They're kind of learning and they're recognizing patterns. And the other thing shows up when you plot the percentage of times they choose left and right. Remember the mixed strategies from rock paper scissors: in these games, you can alter how much food they get for different combinations of choices. Like, if I'm a seeker and I choose left and you choose left--ha, now I get three apple cubes. If I choose right and you choose right, I still get food, but I only get one apple cube. 
If you move around how much food comes from these different configurations of choices, you can change the mathematical predictions of Nash equilibrium game theory. And it turns out that if you make a graph, the chimpanzees as a group--if you average across the six different chimps; there are three pairs, one chimp playing the hider and one playing the seeker--are incredibly close to theory. I mean, I claim... I know a lot about this, but maybe not everything. I'm sure not everything, and there are always new studies coming along. But I've said this to several game theory audiences, and no one has ever said, "I've found an interesting exception to your claim." The chimpanzees, as a group of just the six chimps, come about as close to these predictions of the Nash equilibrium--the balance of left and right play--as any group we've ever seen. And it might be just a fluke, because there are only six. It might be that they're trained a lot; they do this hundreds of times and they're very motivated. They do it when they're a little bit hungry, so they're motivated to eat. Or maybe they have this special skill--maybe the chimps are actually a little better than us at this special type of game that involves hiding and seeking and, most importantly, keeping track of what your opponent has done the last few times.
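The "move the food around and the prediction moves" point can be made concrete with a small solver. The payoffs below are hypothetical, built from the "three apple cubes vs. one" example in the conversation; the indifference condition used to solve for the mixed-strategy Nash equilibrium is standard:

```python
from fractions import Fraction

def indifference_mix(payoffs):
    """payoffs[i][j] is a player's reward for their action i when the
    opponent plays action j. Returns the opponent's probability of
    playing action 0 that leaves this player indifferent between
    their two actions -- the mixed-equilibrium condition."""
    (a, b), (c, d) = payoffs
    return Fraction(d - b, (a - b) - (c - d))

# Hypothetical hide-and-seek payoffs: seeker earns 3 cubes for a
# match on Left, 1 cube for a match on Right, 0 on a mismatch;
# hider earns 1 cube on any mismatch.
seeker = [[3, 0], [0, 1]]  # rows: seeker L/R; cols: hider L/R
hider = [[0, 1], [1, 0]]   # rows: hider L/R; cols: seeker L/R

p_hider_left = indifference_mix(seeker)   # hider plays Left 1/4 of the time
q_seeker_left = indifference_mix(hider)   # seeker plays Left 1/2 of the time
print(p_hider_left, q_seeker_left)
```

Note the counterintuitive prediction: raising the seeker's prize for matching on Left shifts the hider's mix (Left only 1/4 of the time), not the seeker's. This payoff-dependent shift is the kind of prediction the chimpanzees' observed play tracked so closely.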

Matt:	So, in that study, you also had some human groups either compete against them or just had their play measured, and they were further away from the game-theoretic Nash equilibrium than the chimps?

Dr. Colin Camerer:	That's correct. And, in fact, for robustness, we did it with a group of people in Japan, and they actually used the exact same interface. So, they used the same type of iPad and pressing. It's not that we gave them instructions that were a little bit different--the chimps, we don't really tell them anything verbally; they just have to learn it by trial and error. But we also had a group of African people who worked at a chimpanzee reserve in West Africa, and the difference with them was, well, first of all, we didn't use the computers there. We didn't have them. But we had them play with bottle caps, and they could play with the bottle cap up or down, and that represented kind of like left or right, and one of them wanted to match the other person's bottle cap and one of them wanted to mismatch. And the advantage of Africa was that people are poor, and so we could pay them what was a typical amount of money for Americans, but in terms of purchasing power, it's a lot of money. Sometimes with these experiments, we prefer that whoever's participating in an experiment is motivated by money, so that they're paying attention and they continue to think. And so, the Africans, in half an hour or 45 minutes of playing a couple hundred times with each other, made the equivalent in U.S. purchasing power of maybe $150. And you could tell by watching them, they were kind of really into this. This is sort of pretty important. But even then, their patterns and their data looked very much like the Japanese people's, even though the literacy levels are different, they're from two different continents, their genetic material's probably a little bit different, and their incentives were quite different. 
But, if you plot the human groups, the two human groups, Japanese and Africans look quite similar, and then the chimps are just off in this land of their own, within 1% of where the mathematical prediction says they should be.
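The "mathematical prediction" Dr. Camerer refers to is the mixed-strategy Nash equilibrium of an asymmetric matching-pennies game like the hide-and-seek task: each player randomizes so the opponent is indifferent between Left and Right. Here is a minimal sketch, with made-up payoffs rather than the study's actual ones:

```python
def nash_mixing(a, d, b, c):
    """Mixed-strategy Nash equilibrium of asymmetric matching pennies.

    The matcher earns a if both choose Left, d if both choose Right,
    and 0 on a mismatch. The mismatcher earns b on (Left, Right) and
    c on (Right, Left), and 0 on a match. Payoffs are illustrative.
    Returns (p, q): each player's probability of choosing Left.
    """
    # The mismatcher randomizes so the matcher is indifferent:
    #   q * a = (1 - q) * d  =>  q = d / (a + d)
    q = d / (a + d)
    # The matcher randomizes so the mismatcher is indifferent:
    #   p * b = (1 - p) * c  =>  p = c / (b + c)
    p = c / (b + c)
    return p, q

# Symmetric payoffs: both players should mix 50/50.
print(nash_mixing(1, 1, 1, 1))   # (0.5, 0.5)

# Double the matcher's reward for matching on Left: the prediction is
# that the *opponent* shifts play, not the matcher. Moving payoffs
# around changes the predicted mix, which is what the chimps tracked.
print(nash_mixing(2, 1, 1, 1))
```

Note the counterintuitive prediction in the second call: raising one player's payoff changes the other player's equilibrium mixing, which is exactly the kind of "moving the configuration of choices" the interview describes.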

Matt:	So, for listeners who want to dig into not only that, but just game theory more generally, and some of the things we've talked about today, what resources would you recommend that they check out? Books, websites, etc.

Dr. Colin Camerer:	I think one that's sometimes used as an undergraduate text, so it's not too technical and it's well-written, is by Avinash Dixit -- D-I-X-I-T. He actually has a book with Barry Nalebuff, so I'll just give you his last name, since it's easier to spell. Remember, it's Avinash Dixit -- D-I-X-I-T. So, he wrote a kind of popular book, and he also has a textbook, which is often used to teach undergraduates. You can teach game theory, as you might imagine, and it's sometimes taught this way in economics and even computer science and engineering, in an extremely mathematical way, but it's really a sort of storytelling about human behavior with some mathematical structure on it. So, the Dixit book with Nalebuff is kind of a chatty, fun introduction with lots of examples. And he has another book. I believe it's with Skeath--S-K-E-A-T-H--that's more like a textbook you would use in a class, but not too mathematical. There are lots of very mathematical books, one by Roger Myerson, who is a Nobel laureate. And I have a book called Behavioral Game Theory, which, again, is not meant for a popular audience, but a lot of people have read it and told me they like parts of it. It's aimed at, say, advanced undergrads who know a little bit about game theory but are mostly interested in how people, and sometimes children or chimpanzees, actually play these games, and in other principles besides equilibrium thinking, like level-k thinking: what are the different mathematical ways we approach this? My book, unfortunately, is not a trade book. It's a university press book, so it's not very cheap, but there are probably used copies on Amazon that are not as highly priced as textbooks usually are. And, again, it's not written...
I didn't make a big effort, like with Dixit's books, to reach a big audience, but I hope at least some of your listeners who are willing to put up with a little bit more math would find it interesting. Anyway, there are a bunch of books, although there isn't... Unlike Daniel Kahneman's book, Thinking, Fast and Slow, there hasn't been a really great, fun game theory book written with lots of cool stories. Maybe I'll write one someday, or somebody else will. But so far, Avinash Dixit's book, I think, is the best one.

Matt:	And what is one piece of homework that you would give listeners?

Dr. Colin Camerer:	Well, I think, you know... Abraham Lincoln, I think, said, "Think twice as much about the other fellow as about yourself." And so, the usual kind of mistake people make is to think about what they can get out of something and not to think enough about what motivates the other person. What are they likely to do? If I'm very tough in a negotiation, will they walk away or not? If I'm really easy in a negotiation, what could happen? And so, the level-zero players that we're talking about, by definition, are not doing any strategic thinking. They're not saying, "Why is somebody doing this? What is their motive? What do they know that I don't know?" And so, often, a little bit of analysis like that goes a pretty long way.

Matt:	Where can people find you online?

Dr. Colin Camerer:	On Twitter, my handle is CFCamerer. C-F-C-A-M-E-R-E-R. And I do have a website, although it's not particularly up to date, and I haven't written... I'm on Facebook, but I don't post very regularly. On Twitter, I usually comment on certain things, and I also try to... If I come across a recent research paper, sometimes they're quite technical and sometimes there's a fun, really interesting instant takeaway. I'll use it to advertise, sometimes, my own research and other papers that I think people who are interested in science at the level of your listeners might find fun to read.

Matt:	Well, Colin, this has been a fascinating conversation, and I just wanted to say thank you so much for being on The Science of Success.

Dr. Colin Camerer:	My pleasure. Thanks for having me.

 

 

September 22, 2016 /Lace Gilger
Decision Making

Master Your Mental Game Like a World Champion with Performance Coach Jared Tendler

August 31, 2016 by Lace Gilger in High Performance, Decision Making

In this episode we explore the mental game of world champion performers, examine the emotional issues preventing you from achieving what you want to achieve, how those issues happen in predictable patterns that you can discover and solve, look at why people choke under pressure, and discuss how to build mental toughness with mental game coach Jared Tendler.

Jared is an internationally recognized mental game coach. His clients include world champion poker players, the #1 ranked pool player in the world, professional golfers and financial traders. He is the author of two highly acclaimed books, The Mental Game of Poker 1 & 2, and host of the popular podcast “The Mental Game.”

We discuss:
-The emotional issues preventing you from achieving what you want are happening in predictable patterns, and you can discover them! 
-Why people choke (and what to do about it)
-How to cultivate mental toughness over time
-Why the typical sports psychology advice doesn’t work
-Lessons on the mental game from 500+ of the best poker players in the world
-How high expectations create self-sabotage
-Why emotions are the messengers and not the root cause of performance issues
-Why mistakes are an inevitable and important part of the learning process
-The yin and yang of performance and learning
-The characteristics of peak mental performers
-How to deal with “tilt” in poker and the different kinds of “tilt”
-How to use confidence intervals to deal with uncertainty
-And much more!

If you want to improve your mental game - listen to this episode!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

SHOW NOTES, LINKS, & RESEARCH

  • [Book] The Mental Game of Poker by Jared Tendler and Barry Carter

  • [Book] The Mental Game of Poker 2 by Jared Tendler and Barry Carter

  • [Book] The Power of Habit by Charles Duhigg

  • [Book] Deep Work by Cal Newport

  • [Book] The Feeling of What Happens by Antonio Damasio

  • [Book] Fooled by Randomness by Nassim Nicholas Taleb

EPISODE TRANSCRIPT

Matt:	In this episode, we explore the mental game of world champion performers; examine the emotional issues preventing you from achieving what you want to achieve; see how those issues happen in predictable patterns that you can discover and solve; look at why people choke under pressure; and discuss how to build mental toughness with mental game coach, Jared Tendler. In our previous episode, we explored one of the biggest things disrupting your sleep; examined strategies for getting a better night’s rest; dug into sleep cycles; talked about the 30-day no alcohol challenge; and broke down how to read books more effectively with James Swanwick. If you want to sleep better and be more productive, listen to that episode. Today we have another amazing guest on the show, Jared Tendler. Jared is an internationally recognized mental game coach. His clients include world champion poker players, the number one ranked pool player in the world, professional golfers, and financial traders. He’s the author of two highly acclaimed books, The Mental Game of Poker 1 and 2, and host of the popular podcast, The Mental Game. Jared, welcome to the show.

Jared:	Thanks, man. Good to be here.

Matt:	So, for listeners who may not be familiar with you, tell us a little bit about kind of your story and your background.

Jared:	I was an aspiring professional golfer as a kid. I kind of got a little bit of a later start; around 13 or 14 is when I started really taking it seriously. This is kind of pre-Tiger in his heyday. I grew up maybe three to four years behind him in terms of amateur golf, so I’m 38 now. I’m saying that, in part, because if you got started as an aspiring golfer at 13 years old right now, you’d be severely behind the eight ball. The game has just become so, so highly competitive. So, I was behind the eight ball 25 years ago, and today it would be even worse. But I got to college and was able to become a three-time All-American. I played some big national events, in particular the US Open qualifier, and found myself choking. So, I was having a lot of success in the smaller, more regional events, but when I got to the big stage, I was choking. And, you know, I was really on the cusp of being able to break through, but it was my mental and emotional issues that were blocking me. So, rather than become a professional golfer (I’m not one to try something just for the sake of trying it; I needed to feel like I actually had a chance of being successful), I went to get a master’s degree in counseling psychology, and then subsequently got licensed as a traditional therapist. Really, to better understand the reasons why I was choking, and the reasons why I think a lot of athletes, golfers in particular, don’t perform under that kind of pressure as well as they’d like. And the reason I did that is because what I felt was the predominant mode of sports psychology at the time was very, sort of, surface-level. It was, “You’re not focused, you’re losing confidence, you're getting too anxious. We’re going to teach you how to focus, how to be confident, how to relax in those environments.” It didn't really get at the “why.” Why was I not confident? Why was my focus elsewhere?
Why was I thinking about the future or the past? And to me, that was the essential question to ask in order to find the real cause of the problem, so that sustainable improvements could be made. So, I made a lot of improvements using the typical sports psychology advice. My game got better; I was certainly performing better as a senior than as a freshman. But the essential pattern of really breaking down under that big-time stress hadn’t changed. And I felt like there was something deeper that had to be found. And so, after I got my master’s degree and felt like I understood the problem-solving methodology of a therapist, I flew to Arizona and started up my golf psychology practice, cold-calling and knocking on country club doors, trying to find some swing instructors to partner with. I felt like, if I could have a strong relationship with another instructor, the two of us could create a well-rounded team for, especially, professional golfers, but even really serious amateurs or junior players. That's what I did, and so I was working with golfers for about three and a half years before poker came about, which, you know, has kind of defined my career for the last eight years.

Matt:	So, how did you get into the world of poker?

Jared:	So, poker was somewhat spontaneous. I had actually begun playing some professional golf myself. It felt like I had solved a lot of the issues that I'd needed to, and I was playing some of the best golf of my life. I got hooked up with a group of guys, one of whom was a former professional golfer, and he, unfortunately, had to stop playing golf because he had a heart attack at 22. It was not drug-induced; it was some genetic mutation that caused the arteries in his heart to spasm. And so, he ended up going into professional online poker. And it was an interesting transition. The guy was an incredibly hard worker with his golf. Growing up, he was the guy who spent hours and hours hitting balls, kind of the equivalent of a gym rat in golf. He actually broke Tiger Woods’ record for most tournament victories in the state of California in one summer; I think he won 35 events. So he had a lot of competence as a worker and obviously as a player, and then he found online poker back in 2004 or 2005, which is when he started. This was during the online poker boom, prior to when the government stepped in. There was a lot of money to be made, and he was making around $20,000 to $30,000 per month when he and I met. The reason he ended up seeking my advice psychologically was that he was getting so angry that he was literally taking his desktop computer and ripping it out of the wall and smashing it, and breaking monitors and mice and keyboards. And in poker, there’s a lot of short-term luck. Imagine a golfer hitting a perfect drive down the middle of the fairway, and it hits a sprinkler head and goes straight out of bounds. And then that happens five times in a row. You’re in a professional golf tournament, and you make a 15 on a hole, 9 or 10 over par, and you don’t even hit a bad shot. In poker, that happens every single day. The better players lose a lot because of the short-term luck.
And that’s important as a professional poker player, because that’s where a lot of their money is made: not necessarily just the differential in skill, but the differential in the perception of skill. Bad players need to win sometimes in order to think that they’re good, and to keep playing against better players. It’s the equivalent of a 15-handicap golfer playing up against a PGA Tour player and not getting any strokes to even out the match. There’s never a scenario where that PGA Tour player is going to lose to that golfer. Or, imagine the New York Yankees playing against a high school baseball team. There’s never a scenario where the Yankees are losing. But in poker, that dynamic happens every single day. The best players in the world lose to some of the worst players in the world, and that’s a reality. So, for him, dealing with that reality was incredibly difficult, especially coming from golf, where he had a lot more control over his results. So, our interactions began with me doing a typical dissection of my clients: I have them fill out a very detailed questionnaire to try to understand what their issues are, and then we get to work. Within a few months, the results were almost too obvious to note. He went from, as I said, making $20,000 to $30,000 a month to making $150,000 to $200,000 a month. And yes, there certainly can be some good luck involved in that, but for the most part, being able to remain calm, remain focused, and be in the zone more was a big part of his success. He happened to be part owner in an online training site that taught people how to play poker, which was a new phenomenon at the time. And because it was new and there wasn’t really anybody doing sports psychology in poker, it gave me a big avenue for my work.
You know, as I said, I had started playing some professional golf, and so it became a difficult choice point. Do I pursue my dream? Or do I take on this seemingly risky thing and just hop into poker? I calculated that it was going to cost about $250,000 over two or three years to try to make it as a professional golfer. And I was getting older at this point; I was 27. So, it was a risk. I decided that poker was the safer bet, and I would just dive into it, continue to play some tournaments, and see where it went. And it just sort of took off. I had a large influx of clients very quickly, and really just saw a huge opportunity within that field. It gave me a chance to work with players longer term. The golfers, seemingly, were a lot more fickle. They wanted results quickly. They’re the people who buy new clubs regularly, thinking that’s the solution. Even the professionals wanted things faster than the process would allow for. But for some reason, the poker players, maybe because the money was happening every day, it was like working with an employee, or somebody who’s running a business. Golfers don’t play tournaments every single day; the poker players just seemed to be committed to it. Really, it was a lot of fun for me to work with a lot of people who were committed to doing that kind of work. That was eight years ago, 2007 to 2008, when I got started with that website. At this point, I’ve worked with well over 500 poker players, some of the best players in the world, and, as you mentioned, written the books. It’s been a very enjoyable ride going through poker.

Matt:	So, I definitely want to dig into smashing computers and dealing with guilt and all of that, but before we do, tell me: why do people choke?

Jared:	There are lots of reasons. One reason can be that their expectations are too high relative to their actual capacity. There can sometimes be traumatic experiences, and those traumatic experiences then continue to get replayed. The mind has the ability to imprint a memory. So, in a physical capacity, that motor pattern gets replayed, gets triggered, when the circumstances cause a lot of stress. From a decision-making standpoint, the mind... Or the brain, I should say. The brain has the ability to shut down higher brain function. People often are familiar with what’s called the fight-or-flight mechanism. So, if you are in a blind rage, that is the equivalent of choking, except we’re talking about the difference between anger and pressure. Both circumstances are caused by the same tripping of the wiring in your brain, where higher brain function gets shut off. If you’re feeling euphoric on your wedding day, or when your child is born, there’s this rush of emotion and it shuts down higher brain function. My daughter is two years old now; if I’d been told right after she was born that I had to make some very complex calculations, or I had to help somebody with a very severe problem, there’s no way that I could have done that. The emotions are too intense. And that mechanism goes back to the primitive processes in the brain, and I’m sure you’ve talked a lot about this on your podcast. The key, in my mind, is that we have to understand what creates that tripping. What’s causing that excessive emotion in more normal circumstances, marriage and baby aside? When we’re able to understand what that is, then we can decrease the neurological activity in the emotional center, so that the higher brain functions can actually click back in and you’re able to make decisions, or as an athlete you’re able to think through and see and perceive the environment around you to know what to do.
As a golfer, you need your senses to be able to perceive the environment so your body can react to that particular shot. The same is true for a lot of athletes, right? If you lose that perception, then your capacity as an athlete is severely diminished. But what often remains are those expectations that you should be able to perform at the level that would be possible without that severe emotion present. And that differential is what causes, or is a big cause of, people choking: in their minds, not being able to reconcile that difference. It’s basically like... If I were to put you on the edge of a cliff, and the gap was, let’s say, 30 feet wide, and I said, “Matt, I want you to jump across that,” you should choke at attempting to do that. You should not do it, because it’s an impossible thing to do. But when players are faced with a similar kind of chasm, they don’t realize how big the gap is between what they’re normally expecting of themselves and what they’re actually capable of in that moment. And that causes predictable paralysis, and causes people to choke.

Matt:	What creates the tripping or kind of trips the wire of excessive emotion? I know there may be many different causes, but have you seen some commonalities among what triggers that in people? 

Jared:	Yeah. So, the tripping, I would call a trigger. I think that comes from cognitive psychology, or cognitive behavioral psychology and therapy, so it’s not a new term. But these triggers, these things that spark the emotion... There’s almost an infinite number of things it could be. The commonalities would be: losing, making mistakes, seeing somebody else be successful (that might spark judgment, or some jealousy). Winning can actually cause excessive emotion, too. But, you know, the dynamics of the game are varied, right? So, we sort of extrapolate within poker, within golf, within trading: What does winning and losing look like? What do mistakes look like? Those are going to be, by and large, a lot of the things that people are going to be triggered by. The reaction that they have is going to vary, right? Some people are going to feel like losing causes a sense of injustice. Some people are going to feel like they deserve not to get bad luck, or they deserve to win. Some people are going to feel like their sense of competitive balance is off, and they’re going to feel like they’re fighting for their goals, and so they’re going to be triggered in that way. Other people are going to wish that they could win more. They’re going to lose some confidence and have difficulty not being able to control the outcome, or believe that when they win, that means they should always win. There are a lot of reactions that happen that can cause more of the chaotic array of emotional issues that come about, but I think that’s a lot of it.

Matt:	And what do you advise people to do to, kind of in the moment, decrease that neurological activity that is caused by excess emotions?

Jared:	There are a few things. Number one, you have to understand the cause of that excessive emotional activity. So, the things that I’ve mentioned so far may or may not necessarily get to the root of it, right? If you don’t have a sense of the root cause, then your attempts in the moment to control the emotion, which is really all you can do, are minimized. So, for example, take somebody who has a sense of entitlement. That sense of entitlement causes them to get angry at situations where they think the outcome should be different, and they get very pissed off, right? A sense of entitlement often comes as a result of a weakness in confidence, and some overconfidence. Well, that overconfidence may be caused by an illusion of control: they believe they’re in more control of the outcome than is real. So, the reaction that is entitlement, that in-the-moment frustration that they’re not getting the results that they want, requires a reminder that speaks to that illusion of control. So, you might have a statement that says something like, “I can’t control all of the results.” You know, no one can. There’s short-term luck, there are short-term things that I can’t control, like the actions of other players or competitors, and so all I can control are X, Y, Z; all I can control is how well I am focused, how well I’m prepared, how well I’m playing. Whatever might be specific to that person. And they’re using that statement as a way of correcting that deeper flaw, which is critical to long-term resolution of the issue. And in the short term, it creates some control so that they’re able to decrease a little of that emotion and actually continue to make good decisions, or perform well. But the process I use requires several steps to get to that point. Number one is recognition early on.
The longer it takes for you to recognize that your emotions are rising, the harder it is for you to use that logic, to use that statement, to gain control of the emotion. And that should make sense, right? The bigger the emotion, the more strength is required to control it. The faster you can identify it, when it’s small, the more of an effect it will have, because that same dynamic is at play: when the emotions rise too high, they shut down higher brain function in proportion to the size of the emotion. So, the bigger the emotion is, the weaker your mind is, and the weaker the effect that statement will have as you say it in those moments. And I actually think this is one of the biggest mistakes that cognitive behavioral therapists have made. Cognitive behavioral therapy is one of the most effective treatments for a whole range of issues, both clinical and personal, as well as within the sphere of performance and sports and whatnot. But they made a big mistake in not emphasizing this point that I’m making now, which is that you have to use that cognition, use that thought process, at a time when your thoughts are the most powerful, which is when the emotions are small. So, what I advise people to do is create very detailed mappings of the escalation of their emotions. In business, in sports, in poker, in trading, the issues that we experience happen in very predictable patterns. And it’s our job to become aware of that pattern so that we can apply corrections at times when the mind can actually receive them. So, it takes a little bit of studying, and I advise my clients to spend a week or two taking detailed notes on the situations in which they’re looking for control. One of the interesting things about online poker is that there’s a high frequency of emotional reactions, so a bad reaction to losing may happen five times within a particular day. In certain businesses, you may not be faced with those situations so often.
They might happen only several times a year, but when they do happen, your reaction is so severe that it really impairs your functioning as an employee or as a business owner. So, you’ve got to do your best, in those situations, to go back into your memory bank and think about how you’ve reacted in similar situations in the past. But you don’t want to do that just once. You don’t want to spend one day or one hour thinking about it. You want to spend 15 to 20 minutes, five times a week, for several weeks, really thinking about it. Make it a habit where you’re trying to uncover and articulate this pattern. It is such an important principle that I can’t overstate it. That recognition is the X factor. If you can’t recognize the emotion prior to the point where it’s going to shut down your brain function, you have little to no chance of actually gaining control. And, in fact, people with very high expectations really just go completely mental in spots where they’re expecting to be in control, but the emotions are too high. Very often, when the emotions are high, it doesn’t mean your brain is completely gone. You still have the ability to think. And you might even know the logic statement; you might know what is logical to correct that emotion. But you’re applying it at a time when emotions are so high that it doesn’t have an impact. The emotions are so powerful and so strong that what cognition you have is very weak. But if you have the expectation that what little cognition you have should be able to control that emotion, then your mind is just going to boil up. You’re going to become so angry, like my friend Dusty, my first poker player, who was ripping his desktop computer out of the wall. So, again, the first step is to recognize, and then in the moment, once you’ve recognized it and it’s small, you take a couple of deep breaths. Very quickly. Well, I say quickly, but the point is really efficiency.
You don’t have to take long, drawn-out, deep breaths like a meditative kind of thing. The purpose is more about creating separation between the reaction and the correction, which is the third step. The deep breath is the equivalent of stepping out of the room when you’re having a heated argument with a friend or a spouse. If you just keep fighting, or you keep arguing, there isn’t any chance of coming to some conclusion or some reconciliation of the issue, right? When you both step out of the room, cooler heads are able to prevail, you’re able to get some perspective, and that’s the idea. The deep breaths give you some space, some separation from that reaction, to then be able to apply the logic. Now, if you’re in an environment where your decision making allows you the opportunity to take some longer, deeper breaths to calm down, then take that opportunity; it’s not going to hurt you. But if you’re a poker player, a day trader, or a golfer, you may not have the time or the luxury to spend a minute actually doing some deep breathing to prepare yourself for the logic. That third step is injecting the logic, right? The cognitive behavioral strategy of having that correction to the root flaw. Then the fourth step is what I call a strategic reminder. The reason this is important is because, just because we’ve stabilized or controlled our emotions in that moment, it doesn’t automatically mean that our performance is going to be as high as we want it to be. Poker players are reminding themselves of the common mistakes that they might make. They’re thinking about their decision-making process and filling in some of the holes that might typically be there when they’re upset. So, they’re forcing their attention to correct those mistakes. A golfer might focus on a particular part of their technique, or a particular part of their decision making.
They might forget to calculate the impact of the wind, and so they’ve got to force themselves to consider that. Because just because they’re calm again doesn’t mean they’re going to automatically think about that part of their decision making, or their performance. So, while you’re competing, you’ve got to go through that cycle of those four steps over and over again. And that, to me, is really how you build mental strength. It’s the force that is required to apply these corrections in these moments, and repeating them time and time again as they happen throughout your day, throughout your performance. And it’s a bit like going to the gym and working out, right? That’s where the strength comes from: pushing yourself at a time that’s very difficult. And this is, you know, less so for athletes that are competing in time-dependent scenarios. You don’t want to keep pushing yourself beyond the point where you need to quit, right? You can’t lift a certain amount of weight at the gym a hundred times when you can only do it ten times. You want to push yourself to be able to do twelve, not a hundred. A hundred is not doable. So, quitting, taking breaks, resting, is very, very important to the strengthening of the mind, much like it is for the body. Quitting at an appropriate time, where you don’t risk re-injury, is an important part of the overall whole. We’re creating containment, and then day after day, that containment ought to get stronger and stronger, if you’re allowing your mind to recover.
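The four-step routine Jared describes (recognize early, breathe, inject logic, strategic reminder), together with his caveat that the logic only lands while the emotion is still small, can be sketched roughly as follows. The cutoff level and the wording of the steps are illustrative assumptions, not taken from his books:

```python
def in_the_moment_routine(intensity, logic_statement, reminder,
                          shutdown_level=8):
    """Return the actions for an emotional flare-up rated 1-10.

    Above shutdown_level (an illustrative cutoff), higher brain
    function is assumed too impaired for a correction to land,
    so the only move left is to stop and study the pattern later.
    """
    if intensity >= shutdown_level:
        return ["quit or take a break; study the pattern afterwards"]
    return [
        "1. recognize the rising emotion while it is still small",
        "2. take a quick deep breath to separate reaction from correction",
        f"3. inject logic: {logic_statement}",
        f"4. strategic reminder: {reminder}",
    ]

# Example: a mild flare-up (3/10) still allows the full correction.
steps = in_the_moment_routine(
    3,
    "I can't control short-term luck, only my preparation and focus.",
    "Recheck the wind before committing to the shot.",
)
```

The point of the threshold branch is the proportionality Jared emphasizes: the bigger the emotion, the weaker the cognition, so past a certain intensity the routine degrades to quitting and reviewing.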

Matt:	So, what are some strategies to boost recognition and train people to more effectively recognize the beginning of an emotional reaction?

Jared:	The first thing is to start with what’s obvious, right? Even if it’s at the point past where the emotions have kind of shut down your thinking, you just start writing it down. There’s a very simple framework that I use called the spectrum of emotion, and you just scale it 1 to 10 or 10 to 1, however you want to describe it: one being when the emotion is at its lowest, ten being when it’s at its highest. And you just start to take notes in each of those ten spaces about what it’s like when your emotional reaction is at its lowest point, at its highest point, or somewhere in between, and around where your emotional system is shutting down higher brain function. You’re also paying attention to the changes in your decision making, the changes in your tactical performance, and so you’re trying to create a map. This is the map. What does the pattern look like, right? So, when it’s very small, the anger issue might appear as some minor irritation, like some kind of extra noise in your head where you’re like, “Agghhh!” Or you sigh deeply, or maybe even pound the desk a little bit. Not that serious, but you’re like, “Goddammit!” And so you’re writing down the physical changes, you’re writing down the specific thoughts that you have in your head, like, “I can’t believe I was such an idiot!” if you’re reacting to a mistake. So, it’s physical reactions, emotional signs, the specific thoughts that you have or the things that you say out loud, and any of the technical changes, specific to your area of performance, at each of those different levels. So, your reaction to a mistake might begin with just some tension in your head, or you’re like, “Dammit, I can’t believe I did that.” But when it’s at a ten, you’re just in a blind rage about the mistake that you’ve made, or you can’t possibly even think. You feel like you’re just the dumbest person in the world, and can’t comprehend how you’ve made such a bone-headed, obvious mistake. And whatever is going on in your mind at the time is what you’re writing down.

Matt:	What do you do if you’re in the heat of the moment and you apply, or try to apply, a correction and it doesn’t work?

Jared:	In that particular moment, it depends on the scenario. If you’re a golfer, a poker player, a trader whose performance is so time dependent that you don’t really have the ability to take a bigger step backwards, then there’s not much you can do. The only thing you really can do, and this is true for sort of other people as well, is to better understand the pattern. If control at that point is gone, then your option is to better understand the pattern. It is going to happen again, and the reason it happened this time is because you didn’t understand the pattern to begin with. Or, at least understand the cause of it. So, let’s assume that you knew the pattern well but you couldn’t gain control of it. It means that your injecting logic didn’t work. It means that your understanding of the pattern was not strong enough. Or it means that there is an accumulation of emotion that is rapidly overwhelming your mind. It is possible for people in a particular moment to get triggered by something so severely, their emotions rise so high so quickly, that it bypasses our ability to have any option to inject logic or to inject some cognitive correction. In which case, we’re dealing with a much deeper issue, a much more long-lasting issue that is not going to be corrected in that moment, and you have to do some real, much, much deeper work to uncover the cause of that and start to break apart that accumulated emotion, and give yourself the option to have some mental control.

Matt:	So, the creation of the map of this pattern, is that the primary tool that you recommend for, let’s say, off the felt or when you’re not actually in the heat of the moment, building that understanding of the root cause?

Jared:	It’s a building of an understanding of what’s going on, but it’s only sort of the beginnings of being able to understand the root cause. So the pattern that you’re writing about is really like the symptom pattern, and then the root cause is the cause of that symptom. So, me thinking I’m an idiot would be the symptom of, let’s say, low confidence caused by high expectations. This is a common phenomenon around a lot of the people that I work with. Perhaps a lot of people that listen to this podcast, who believe that high expectations are a good thing. I’m not saying they’re a bad thing; high expectations have led to a lot of successes. But what happens is that they can often also add to a reduced sense of confidence. Because an expectation implies a guarantee, and goals imply the learning and development required to achieve the same end outcome. So you might think that your expectations are goals, but if you think what you’re aiming for is, in essence, guaranteed, then it’s still an expectation, even if you don’t necessarily have the capacity right now to reach that goal and are just assuming that you will. What that does is it makes the learning process more chaotic. You might still end up achieving the same goal, but you’re going to have a feeling like you’re an idiot sometimes, rather than seeing that the mistakes you’re making today are way, way less severe than the mistakes you made five years ago. So, how could you really be an idiot if you are already that much more capable, you know? You’re not an idiot, it’s just that you’re overreacting to a mistake because you believe you shouldn’t make them, and so the root cause right here is the flaw in mistaking goals for expectations. So, we take this sort of symptom pattern and then we drill down and figure out what is at the root of it, then you start correcting the root. Over time, that symptom pattern starts to dissipate and disappear, and that is true resolution. That is when you’ve actually defused the bomb. 
You’ve taken the trigger and made it... It no longer is going to spark, so now I can make mistakes. And I’m not saying I’m happy about it, but I’m at least dealing with the mistake in a much more objective, rational way towards reaching my end goals, which is ultimately... Solving this mistake is an essential part of that. 

Matt:	So, how do we drill down and really kind of get to and understand what that root cause is?

Jared:	That is the most complex part of the whole process. I think at this point probably what is my greatest expertise as a coach is being able to kind of work with my clients to deduce what’s going on behind the scenes. This is the unearthing of the unconscious processes behind our emotional reactions. There’s a process I use, and it’s in the first book; actually, it’s in both books, now that I think of it. It helps players to break down their symptoms, their issues, to try to identify that root cause. And these are the steps: The first step is to describe the problem in as much detail as you can. So, you can certainly build off of that map, that spectrum of emotion, to create and articulate the description of the problem. The second step is to describe why it makes sense that you would think, feel, or react this way. Now, this is, I think, one of the most important steps for many, many people. Because they often think that their emotional reactions are illogical, or irrational, and so if you think that your emotions are irrational, then there’s really no way to solve it. The fundamental flaw is the emotion itself; the anger is the problem. In my opinion, the anger, the fear, the loss of confidence, the loss of motivation, the boredom, the distraction: all of those are symptoms, they’re never the actual problem. They’re sort of like the messenger trying to highlight what’s going on beneath the surface. So, you have to change your mentality about problem solving by acknowledging the reality that everything that is occurring is very logical and predictable. I just don’t know the reason yet. It appears, to me, to be irrational, because I don’t know why it is. So, rationality is that second step. I’m not saying that step is without flaw, I’m not saying it’s correct long-term, but there is a reason why you’re thinking that way. So, my step one description might be: I have very, very strong reactions to mistakes. I really hate making mistakes. 
Well, why does it make sense that I would feel that way? It makes sense because I have high expectations of myself, because I hold myself to a really high standard and I really want to avoid these mistakes. I think that they shouldn’t be happening. Step three: Why is that logic flawed? And this is where we start to get to the root cause. In the example that I gave before, it’s my high expectations. I’m assuming the learning process, the process of accomplishing my goals, can occur without making mistakes. So, my expectations are just excessive. They’re not realistic. So, what is the correction? The correction is: I need to be aggressive in my pursuit of my goal, and I need to look at mistakes as the opportunities to grow and improve, and really as the essential steps to accomplishing my goals. Because, and this is something I tell a lot of my clients, if you are pursuing a goal where you’re not going to make mistakes, then it’s not really something that’s worth chasing. It’s too basic. You’re not really pushing yourself. You’re not really trying. Anything that you’ve got to try and really push yourself to accomplish, you have to make mistakes. It’s inevitable. So, that step four, what is the correction, oftentimes becomes the injecting logic statement. Step five is: Why is that correction correct? And this just sort of looks to get at a little more of the theory behind it. It’s correct because the learning process isn’t predictable. I can’t always know the mistakes I’m going to make. That would require me to be a psychic, and I’m not psychic, so I have to make these mistakes. That theory becomes extra footing helping to root the correction in our minds, because I kind of envision the root system of a bush or a tree as being like the interactions or the intricacies of the neurons in our mind. It kind of has a visual that is similar; there’s a lot of these off-shoots. 
It’s not just about implanting this very simple idea that mistakes are predictable, it’s about the complex idea that you’re trying to firmly root, which will then automatically change how you react to them in the future.

Matt:	I love the concept that emotions are the messenger, and not the root cause of performance issues. 

Jared:	It’s the only thing that seems logical to me. I mean, I think, in large measure they’ve been downgraded for a long time, but they have particular messaging when you pay attention to it. Anger is the emotion of conflict, right? That conflict can exist between people, that conflict can exist within ourselves. Fear or anxiety is the messenger for uncertainty. There’s a lot of uncertainty in the world, certainly in business; if you’re making an investment where there is 100% certainty, well, then there’s probably not much reward for that investment. You’re buying US Treasury bonds that are paying next to nothing. The more uncertainty that exists, the greater the reward is. The greater the investment will pay off, and that’s true with poker players, with golfers, with athletes as well. Confidence, the emotion of confidence — I think that’s an important distinction because I think people very often are not thinking about confidence as an emotion. Confidence is a reflection of skill and competence, but more importantly, it’s our perception of our skill and our competence. So, it’s not just a pure reflection of our skill. If that were the case, my God, poker would not be profitable. The world would be a much more simple place. But we have our own biases, our own perceptions of our skill and competence that play into our feelings of confidence. So, when you’re looking at dissecting what the messenger of confidence is saying, it’s a measurement of your perception of skill, and a measurement of your actual skill. Motivation is a byproduct of your goals, and so it’s going to reflect conflict between goals. It’s going to reflect inconsistencies, or goals that are too high or too low, and your motivation is going to be affected based on those flaws.

Matt:	So, let’s flip this on its head a little bit. I’m curious: What are some common traits you see among people who have incredible mental strength, or really peak mental performers?

Jared:	They have, I think, an almost intuitive or innate understanding of the learning process. The learning process is something that many people get wrong, and they don’t realize how much emotional chaos gets created as a result of it. My example of mistakes is a perfect example of that. So, they have a very intuitive or innate process for understanding learning. They have a great ability to be objective with themselves, so that their performance is evaluated without as much emotionality towards it. It doesn’t mean they’re any less driven to excel; it means that when they fall short, or when they excel, they’re equally as objective, and it’s a form of feedback. When you go and compete, it’s a test. And being able to grade that test is essential, good or bad, because then it helps to guide the next steps. So, they’re also... They’re long-term thinkers. They’re long-term performers; they’re not just seeing today in isolation, they’re seeing today in the bigger picture. Again, that doesn’t take away from their desire to excel today, because they know that when they excel today, they’re going to also be learning at a very high level. This is a relationship that I talk a lot about in my second book: that performance and learning are intimately tied. They’re kind of like yin and yang. So when you’re performing at a very high level, you’re also learning at a very high level. So they’re driven to excel because of what it allows them to accomplish today, and what it’s also going to lead towards tomorrow. They’re constantly seeking the advice and counsel of other people. They understand their own biases or their own limitations in their thinking, and they’re looking for other people to shed light on their weak spots. To shed light on their blind spots, but they’re also not going to do so blindly. They have a sense of their skill set, and so when there are things that are brought to their attention that seem irrelevant, they’re not going to give it a second thought. 
Maybe down the line they will again, but that relevancy for them is very temporal. It’s relevant today, they’re not going to say, they’re not going to focus too much on the thing that’s going to be very relevant two years from now. They might note it so they don’t forget it, but they’re not going to over-emphasize it today. I think those are a lot of the big ones. Mental toughness and having the right temperament and the right personality... Those are things I think that are very personal. I try not to get into the personal characteristics or dynamics that make up the ideal, because I think there’s a lot of ways to accomplish it, and if you have some of the more basic essential elements, however your personality allows you to materialize it is kind of the fun of it. Kind of the diversity of it. 

Matt:	I think one of the most critical things you’ve mentioned is the importance of feedback and actively seeking out your weaknesses and your flaws, but also in a way that you’re aware of... You have to be very cognizant of what is the source of the feedback, and is this particular piece of advice or whatever it might be, relevant to where I am now and what I’m trying to do.

Jared:	Yeah. It’s very easy... I’ll say it this way. It’s easy for people to get caught up in taking advice from many, many different people. But when that happens, it’s evidence of a weakness in confidence. And that weakness in confidence might be because you don’t understand your skill set well enough. So, there is a perceptual weakness, not an actual weakness. So, the perception gets strengthened when you have a more clear understanding of what your skills actually are. Then you get to take that understanding and match it with the feedback that you’re getting, rather than getting pulled in many, many different directions because you’re allowing it to happen, because you don’t have that centering, that grounding that comes from being the one who is in control of your performance. As the athlete, you’re the one that has to do it. There’s no one who can actually do it for you. The people around you are supporting your ability to do that, and if you’re getting pulled in many directions, it means that there’s just some inner knowledge that’s lacking.

Matt:	Long time listeners will know that I’m an avid poker player. I’d love to dig in a little bit to some poker-specific stuff. I’m sure we’ve touched on some of the conceptual framework behind this, but let’s get back into smashing computers and ripping mice from the wall. What are some strategies specifically for things like tilt control? And for those who may not know, would you briefly explain what tilt is?

Jared:	Yeah. So, tilt... I’ll actually say it in two ways. Tilt, before I came into poker, was a poker player’s way of saying that any reason they would play less than their best would be called “tilt”. Tilt, as I define it, is about anger. When I studied poker players for years—and I’m not really a very good poker player myself; I’m kind of the outsider that came in and observed what was going on—in well over 80% of the conversations that players were having, the descriptions they were giving about tilt meant that they got angry, and they were doing stupid stuff, and they were losing. Very rarely are players tilting and winning. They’re usually tilting because they’re losing, and/or their tilt is causing them to lose. So, the strategies for correcting tilt are identical to the things we’ve already mapped out in terms of the framework. What I’ve done in my first book is to map out seven different types of tilt that I’ve observed. To date (my first book came out over five years ago), no one has yet been able to come up with another type of tilt that could explain a situation at the poker table where someone would get pissed off. So, I continue to have that challenge out there and certainly welcome anybody that can find another one. And the reason is because each of these seven types of tilt is focused on a root cause. There are hundreds of reasons why poker players tilt. The triggers that we were talking about earlier. Hundreds of reasons why players have their tilt triggered. But there are only a handful of them when you dig down beneath the surface and see what’s going on. So, when we’re talking about solving tilt, you’ve got to understand what’s causing it, and by mapping these out in seven types, I think that’s helped a lot of players be able to narrow down their focus so they could actually solve their tilt problem. 
The first one is called “running bad tilt.” Running bad tilt in poker means that you’re losing a lot in short succession; a bad run of cards basically means you’re just getting a lot of bad luck in short succession. So, if you were flipping coins, the math says that half the time you’re going to flip heads, half the time you’re going to flip tails. But what about when you flip a coin and ten times in a row it comes up tails, and you’re betting on heads, right? So now you’ve had a bad run. That’s a very simple example for those who don’t play poker to understand that there’s a lot of math involved in poker, and you get into situations where the bad luck is just against you. There’s literally nothing you can do other than to continue to play a very strong, strategically sound long-term game. But obviously that’s not what happens to a lot of players. They handle that bad run by getting angry and then play worse. They try to recapture their money, they try to force the action, they try to be more aggressive and make more money. Of course, the good players are waiting for that to happen, because that’s what bad players do. So, a good player can turn into a bad player very quickly when they’re on tilt. So, running bad tilt is one. The second one is called injustice tilt. The name should imply it, right? This is a feeling like what’s happening is unfair, unjust, as if the poker gods are against them. Entitlement tilt is the next one. Entitlement tilt and injustice tilt are very similar in terms of the language, but with entitlement tilt, it’s more of a sense of deserving. It’s a more personal feeling; as I mentioned earlier, it’s overconfidence. Injustice is kind of outward. It’s more about what the poker gods, or poker itself, is not giving to you; you’re not getting what you deserve. Whereas with entitlement, it’s a feeling of superiority over other players, right? 
You’re better than this player, so you deserve to win; it’s not like you’re getting bad cards and feeling a sense of injustice. Hate losing tilt, otherwise known as competitive tilt, is next. These are the highly competitive people who just hate losing, and that losing causes a lot of anger. Mistake tilt is the next one; we talked about that already. Revenge tilt, always one of my favorites, just because players get so crazy and they start attacking others. It’s amusing for me. Desperation tilt is the last one, and desperation tilt is not necessarily a unique type of tilt (any of the other types of tilt that I’ve mentioned can cause desperation tilt), but I specifically carve out desperation tilt because it is the line between a poker player who is successful and profitable but having a very, very difficult time controlling themselves, and a player who actually has a gambling problem. Desperation tilt is a performance issue; a gambling problem is somebody who can’t handle the losses, doesn’t have actual skill in the game, and needs clinical help. I am trained as a therapist, but I’m not practicing as one. I am a coach working in performance, and yes, I do get into personal issues, because inevitably they’re part of a player’s performance. But that’s not my primary area of focus, and I refer anybody that I believe has a gambling problem to therapists who specialize in that. So, desperation tilt, you know, oftentimes includes players jumping up in stakes. So, they start playing for a lot more money than their bankroll can support. They’re basically playing for all of their money, right? As a poker player, you have to have the ability to tolerate a lot of losses. And if you don’t have the cash to support the fluctuations in profitability, then you can go bust, and that’s what ends up happening to a lot of poker players. They end up playing for all of their bankroll. 
They’ve got $20,000, and they really should only be playing for $200 or $400 at a time, and they go play against a very skilled player for 20 grand. Most likely they’re going to lose it. Of course they can get lucky in that spot, but that’s not going to solve their desperation tilt problem. 
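[Editor’s note: the coin-flip intuition above is worth making concrete. The short simulation below is a sketch for illustration only, not something from the episode; the 1,000-flip session length and trial counts are arbitrary choices.]

```python
import random

# A specific run of ten tails has probability (1/2)^10, about 1 in 1,024.
p_ten_tails = 0.5 ** 10

def has_tail_run(flips, length=10):
    """True if `flips` (0 = tails, 1 = heads) contains `length` tails in a row."""
    run = 0
    for f in flips:
        run = run + 1 if f == 0 else 0
        if run >= length:
            return True
    return False

# Over a long session, though, such a "bad run" is no longer rare: estimate
# the chance of at least one ten-tail run somewhere in 1,000 fair flips.
random.seed(7)
trials = 2000
hits = sum(has_tail_run([random.randint(0, 1) for _ in range(1000)])
           for _ in range(trials))
print(f"one specific ten-tail run: {p_ten_tails:.4%}")
print(f"at least one ten-tail run in 1,000 flips: ~{hits / trials:.0%}")
```

The simulated figure comes out to roughly 40%, which mirrors Jared’s point: a bad run is not evidence that anything is broken; over enough hands it is close to a statistical certainty.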

Matt:	The funny thing about a lot of these forms of tilt, especially things like injustice tilt, entitlement tilt, mistake tilt, is that you see this same exact thing sabotaging many people in all kinds of different areas in life. So, if somebody listening thinks these mistakes only apply to poker players, I think you’re sorely mistaken. 

Jared:	I completely agree.

Matt:	One other concept I wanted to dig into, and we touched on this earlier, is the concept of the idea of, specifically in poker and I think in many areas in life like trading, investing, a lot of business decisions, there’s a huge gap between making the correct decision and seeing the results that you would like. How do you help people cope with that? 

Jared:	We’re talking about uncertainty. And so, in all of those fields, we’re trying to narrow in on this idea of what happened. You hit a poor golf shot, you make an investment that doesn’t pay off, you open up a business that doesn’t work out, and you want to know why. And very often, you can’t get an answer that satisfies you to 100%. But, as it turns out, psychological research doesn’t have that standard. And I’m saying that particularly because in statistics there’s what’s called a confidence interval. In psychological research, the research that gets published has over 95% reliability that the data is representing the effect that they’re seeing. So, what you can do is start to create confidence intervals, right? I’m 30% sure, I’m 50% sure, I’m 70% sure that what happened was X, and what that does is it keeps you open-minded. So as you go and make other investments, open other businesses, talk to other people who have opened businesses, or, you know, hit other golf shots, play more poker, you can start to gather more information that’s going to raise your confidence interval, to the point that you might eventually know what happened two years ago, but it might take you two years to know for sure. But you’re not stopping everything to find out what happened to 100%, because you might have to go and continue to play the game, whatever game it is that you’re playing, in order to have that confidence interval rise. And I think that’s a mistake that a lot of people make. They end up getting paralyzed after some big things happen, and that paralysis makes them a little bit gun-shy to take additional steps, and they want to be more right. They want to avoid having another misstep. I think, to a degree, that can be evidence of a confidence problem. 
At a deeper level, they don’t have the confidence to be able to learn from it, to be able to absorb it. Their expectations might be too high. They might think that they ought to be in more control of the outcome. They might think that the success they had early on meant that they were guaranteed to have success, so they got a little bit lazy about staying sharp and reevaluating the investment; maybe had they re-looked at it three months before things went belly-up, the writing was on the wall, but they were kind of blinded to it. Same thing with a business, same thing as a golfer. Golfers who get on a good run, when things are going really well, might not be taking care of their bodies as well, so they start not sleeping as much, and their performance can start to dissipate as a result of that. So, the point is, you’re trying to gain information that will help you to become certain, but you’re not doing so by just staying on the sidelines. You have to keep getting back in the game and gaining more information, because that’s generally the only place you can do that.
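[Editor’s note: Jared’s “I’m 30% sure, I’m 70% sure” framing is essentially belief updating. The sketch below illustrates the idea with a simple Bayesian update; the specific probabilities (a 30% starting belief, evidence three times more likely if the explanation is true) are made-up assumptions, not figures from the episode.]

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update of confidence in an explanation,
    given one new observation consistent with it."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# Start 30% sure that "X caused the loss". Each later session where the
# evidence fits X (assumed 3x more likely if X is true) nudges belief up.
belief = 0.30
for _ in range(5):
    belief = update(belief, 0.6, 0.2)
print(f"confidence after 5 consistent observations: {belief:.0%}")  # 99%
```

The point matches the transcript: confidence rises toward certainty only by continuing to play and gathering observations, never by stopping to demand 100% up front.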

Matt:	So what is one piece of homework that you would give people listening to this podcast?

Jared:	Map your problems, like I spoke a lot about early on. They happen in predictable patterns, and very often people are blind to them. They happen, and sometimes when they happen, it’s like, “Eh, it was a one-off, that’s so unlike me, I’ll never do that again.” You know, two days later it happens again. A month later, it happens again. So, you kind of have to take away the irrationality of it, you have to take away the unpredictability of it, and assume that all of the emotional issues that are getting in the way of you performing or succeeding at the level that you want are happening in very predictable patterns, and your job is to uncover that prediction. The data is there, and like a lot of things, as you pay more attention to it, as you learn more, you develop more skill. And in this particular case, you actually create vision for yourself. It’s like you’re wearing a very dark pair of glasses, and then over time, as you gain greater clarity and recognition, those glasses become less dark and become clear. You see the pattern, and it’s not enough to be able to see the pattern off the felt, out of the action; you have to be able to see it in real time. So, if right now you can see the pattern, but in the moment you can’t, then it’s about training. Or it’s about recognizing the accumulated emotion that’s rapidly overwhelming your ability to see. But yeah, mapping is the number-one priority. That’s why I have all of my clients fill out a very detailed questionnaire before we even get started. Because that helps them and me to gain a sense of what is going on, and, you know, when I come across players... There’s been a handful of times where I’ve attempted to sell my services to people who weren’t ready. And when that happens, it fails. I’ve had almost zero success selling myself to somebody who wasn’t ready, and at this point I’ve stopped trying. And in large measure it’s because they don’t see it. I can’t force them to see something that they’re not ready to see. 
So, if you are ready to see, start doing the mapping and paying very close attention to what’s getting in your way, because you can’t get it out of your way, you can’t solve it until you can see it. 

Matt:	What are some resources that you would recommend for listeners who want to do more research on some of the stuff we’ve talked about today?

Jared:	That’s a good question. Obviously my books are helpful resources. They’re written in the language of poker, and there may be very few poker players listening, which I understand. I think The Power of Habit is a great book. I guess I’m giving more sort of general resources, not necessarily particular to what we’re discussing here. Deep Work by Cal Newport, I think, is a fantastic book. The Feeling of What Happens by Antonio Damasio has been around for, I think, 10 to 15 years now, but it’s a great book as well. Fooled by Randomness, I think, is a must-read for most people. You don’t necessarily have to read the entire thing to get the basic premises of it. Those are the big ones that come to mind.

Matt:	And where can people find you online?

Jared:	JaredTendler.com, JaredTendlerPoker.com. They can also follow me on Twitter — @JaredTendler. 

Matt:	Awesome. Well, Jared, thank you so much. This has been incredibly insightful.

Jared:	Happy to hear that, Matt. Thanks for having me.

August 31, 2016 /Lace Gilger
High Performance, Decision Making

How a Judge Literally Rolling Dice Could Get You Double The Jail Time - The Anchoring Effect

July 20, 2016 by Lace Gilger in Mind Expansion, Decision Making

In this episode we are going to talk about how random dice rolls can influence judges to give people longer jail sentences, how so-called experts are massively influenced by completely random numbers – even when they explicitly deny it – and how you can better understand this crazy phenomenon – the Anchoring Effect.

As Nobel Laureate Daniel Kahneman puts it in his book Thinking Fast and Slow: 
"The main moral of priming research is that our thoughts and our behavior are influenced, much more than we know or want, by the environment of the moment."

Arbitrary numbers and anchors can have huge implications for your decisions without you even realizing it and this all operates at a subconscious level beyond your conscious experience.
 
This episode is going to focus on drilling down and understanding a specific cognitive bias – a mental model – to help you start building a toolkit of mental models that will enable you to better understand reality.
 
Anchoring bias – along with Priming and Framing, which we have covered in previous episodes – are all cognitive biases that you want to know, understand, and be aware of – so that you can add them to your mental toolbox and make better decisions.

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

SHOW NOTES, LINKS, & RESEARCH

The specific research studies we cite are located within the book Thinking Fast & Slow.

  • [Book] Thinking, Fast and Slow Paperback by Daniel Kahneman (see here).

  • [Book] Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin (see here).

  • [Science of Success Episode] How This Simple Change In Wording Made 50% of Doctors Choose a More Dangerous Medical Procedure (see here).

  • [Science of Success Episode] This Powerful Factor Controls Your Decisions And 86% of People Have No Idea It Exists (see here).

EPISODE TRANSCRIPT

In this episode, we’re going to drill down and understand a specific cognitive bias to help you start building a mental toolkit. Remember that concept we talked about in the interviews with Shane Parrish of Farnam Street and the author and global financial strategist Michael Mauboussin? Both of them are experts in human thinking and decision making, and they both shared the same concept, the same idea: that what we should focus on to become smarter, to build better minds and make better decisions, is to build a toolkit of mental models. Of models of reality that we can use to understand ourselves, understand our thinking, and understand the world around us. If you want to dig around more in that concept, check out those two interviews. They’re great interviews - tons of great information in there. But today we’re going to focus on a specific mental model. A specific cognitive bias. The anchoring bias. 

Along with priming and framing, which we’ve covered in previous episodes. These are all ways in which the environment can substantially shape your decision making at a subconscious level. It’s a cognitive bias that you want to be aware of to know, to understand, so that you can add it to your mental toolbox so that you can make better decisions and so that you don’t fall prey, like so many people do, to these dangerous cognitive biases. 

I wanted to open up with a quote from the book Thinking Fast and Slow by Daniel Kahneman. We’ve cited this in a number of other episodes, it’s an amazing book, highly recommend getting into it. But, before you do, there’s other, better books to start with because it’s such a dense book. Amazing information in there. But we talked about in the priming episode some other books that are better to start with if you really want to kind of begin to get a grasp of psychology and how it controls and rules the world around us. Anyway, here’s the quote. 

QUOTE: “The phenomenon we were studying is so common and so important in the everyday world that you should know its name. It is an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psychology: the estimates stay close to the number that people considered, hence the image of an anchor. If you were asked whether Gandhi was more than 114 years old when he died, you will end up with a much higher estimate of his age at death than you would if the anchoring question referred to his death at age 35.” End quote. 

Let’s dig into that a little bit.

Anchoring is the phenomenon where totally random or arbitrary numbers can substantially impact our decision making - can substantially change the values that we assign to things, and the numbers that we select. Kahneman cites the example of Gandhi, from an actual research study. The researchers asked some people whether Gandhi was more than 114 years old when he died, and asked others whether he was younger than 35 when he died. When you hear a question like that - and you’re probably already doing this yourself - you take that number, which is called the anchor, and then you start adjusting away from it toward something that is more reasonable. We all know that Gandhi was not 114. We also know that he was older than 35 when he died. But here’s the crux of the anchoring bias: people typically move away from the anchor until they get to a point of uncertainty - a point where they’re not sure if they should keep adjusting any further. And that’s where they stop. That’s where they place their guess. But typically they don’t go far enough. So the anchor has a substantial impact on their guess, on the number, on the value they associate with the thing. And we’ll get into some real world implications of this.

To give you another illustration of the anchoring effect: Amos Tversky conducted a study using a rigged Wheel of Fortune. It had zero to 100 on it, but it was rigged to only ever stop at one of two numbers: 10 or 65. The researchers would stand in front of a small group of people and ask them to write down the number when the wheel stopped - again, the number would either be 10 or 65. Then they asked two specific questions. Is the percentage of African nations among UN members larger or smaller than the number you just wrote? And: what is your best guess of the percentage of African nations in the United Nations? Now, as they point out, spinning that Wheel of Fortune has no bearing on the number of African nations in the United Nations; it provides no valuable information. But it had a substantial impact on how respondents thought about the second question. Specifically, the average estimate of those who saw the number 10 was that 25% of United Nations members were African nations. Those who saw the number 65 estimated that 45% of the United Nations was comprised of African nations. The key point here is that this totally innocuous, totally random number created a substantial difference in the way that people perceived and tried to understand this question. We’re going to look at some other examples of how the anchoring bias can dramatically shape our decisions. 

But before we dig into that, I wanted to talk about a couple of other features of the anchoring bias - a couple of other ways to think about and understand how the anchoring bias functions. A study conducted by Nick Epley and Tom Gilovich found that when they exposed people to an anchor and had them shake their heads, those people were less likely to be influenced by the anchor. It was almost as if, at a subconscious level, they were rejecting it. So they moved further away from the anchor and made better, more accurate estimates than either people who did nothing or people who nodded their heads in agreement - the nodders actually showed an enhanced anchoring effect. But the more fascinating finding of the Epley and Gilovich study is that they confirmed that adjusting away from an anchor is an effortful process. It’s something that depletes our mental resources. And we’ve talked about this before. We’ve talked about willpower, we’ve talked about decision fatigue. And we go in-depth on that in our interview with Peter Shallard about success predictors. It’s a great episode if you haven’t listened to it - I would highly recommend it, because we really talk a lot about how willpower replenishes, how decision fatigue functions, and much more. But one of the fascinating things is that conscious adjustments away from an anchor take willpower and take decision-making power. So, if we’re in a state of mental fatigue, we’re more likely to be influenced by anchors. They’re more likely to shape our decisions and make us make poor decisions.

The next fascinating thing about the anchoring bias is that it can actually be measured. Unlike many psychological phenomena, the anchoring bias, because it deals with numbers, has a measurable effect and can often be quite literally quantified. As Kahneman puts it, QUOTE: “Many psychological phenomena can be demonstrated experimentally, but few can actually be measured. The effect of anchors is an exception. Anchoring can be measured, and it is an impressively large effect.” End quote. And there’s a really good study that demonstrates how the anchoring effect is measured. It also shows how even experts can be influenced substantially by anchors, and how anchors can influence us at a subconscious level, even when we’re not aware of them - even to the point where experts will literally deny that the anchor had any impact on their decision making. It was an experiment conducted with real estate agents. The agents were given an opportunity to assess the value of a house that was actually on the market. They visited the house and studied a comprehensive packet of information that included an asking price. The trick here is that half of the agents saw an asking price that was substantially higher than the actual list price. The other half saw an asking price that was substantially lower. Each agent was asked to give an opinion about a reasonable buying price for the house, and the lowest price at which they’d be willing to sell the house if they owned it. And remember, anchoring is a measurable effect. Agents who had been shown the high asking price gave substantially higher valuations than agents who had been shown the low one, and the difference between the two groups’ estimates, as a fraction of the difference between the two anchors, was 41%. That was the measured anchoring effect: 41%. The interesting thing is that the agents denied that the asking price had any impact on their judgement. The vast majority of them took pride in their ability to ignore the list price and determine the value of the home based on other factors. 

So, not only was there a substantial anchoring effect for these experts, but they were consciously unaware of the impact that anchoring had on them. The researchers then conducted a follow-up study with business school students where they did the same thing. The fascinating outcome was that the business school students showed a 48% anchoring effect. Think about that: the anchor influenced the experts’ decisions by a 41% margin, versus total laymen at 48%. Those are pretty close together. The detailed expertise that these agents had was not enough to overcome the anchoring bias - and they claimed it had no impact on their decisions, despite the fact that a group of people with no real estate training had almost the same margin of error as the real estate agents. The only difference between the two studies was that the business school students conceded that the anchor price had substantially impacted their decision making. 

So, in many ways, expertise was more dangerous in this context because the business school students, knowing they were not experts, were willing to admit that the anchor had influenced their pricing. But the experts themselves were not willing to admit that. And it’s not even that they were trying to hide that fact. They were not consciously aware of the fact that the anchor had influenced them. That’s why anchoring can be so dangerous. It’s something that we’re often not aware of at a conscious level. It’s just like the priming effect. It’s just like the framing effect. These cognitive biases take place subconsciously. We have to try really hard - we have to focus in. We have to understand them deeply. We have to understand our own thinking and be aware of all of them so that we can catch ourselves, and so that we can stop having things like anchoring influence our decision making. 

Another fascinating component of the anchoring bias is that totally random anchors can have a substantial impact on people’s perceptions. We talked about that when we talked about the number of African nations in the United Nations. But this is even more staggering. There’s a study about judges sentencing people. And I’m going to quote from Kahneman here, because he perfectly describes this experiment. 

QUOTE: “The power of random anchors has been demonstrated in some unsettling ways. German judges with an average of more than 15 years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a three or a nine. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who rolled a nine said they would sentence her to eight months. Those who rolled a three said they would sentence her to five months. The anchoring effect was 50%.” 

Think about that. Judges with more than 15 years’ experience, on average, were influenced by something as trivial as a dice roll in determining how long somebody would be sent to prison. There was a 50% anchoring effect on these highly trained, highly experienced experts - people who we think of as totally unbiased. We’ve talked before, in a number of the “Weapons of Influence” episodes on the podcast, about how other factors can substantially influence judges in their decision making. But it’s really scary sometimes when you think about the fact that our judicial system can be influenced by such random and arbitrary things. It further underscores the importance of the anchoring effect, and of understanding it - really grasping it - so that we can become better decision makers, so that we don’t fall prey to these same mistakes. Because in your life, when you see a random number, it can impact your decisions. The date, the time, your social security number - all of these things can change your decision making, can change the way you value things, can change the way you make quantitative decisions. So it’s something we have to be very aware of, something we have to constantly cultivate an awareness of, so that we don’t fall prey to this, so that we don’t get trapped, so that we don’t make bad decisions.
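If you want to see the arithmetic behind that 50% figure, here is a minimal sketch in Python. The function name and structure are my own for illustration, not from the episode or the book; the numbers are the ones from the judges study above.

```python
def anchoring_index(low_anchor, high_anchor, low_estimate, high_estimate):
    """The shift in people's estimates as a fraction of the shift in
    the anchors: 100% means estimates track the anchors exactly,
    0% means the anchors had no effect at all."""
    return (high_estimate - low_estimate) / (high_anchor - low_anchor)

# German judges study: dice anchors of 3 vs. 9 months produced
# average sentences of 5 vs. 8 months.
index = anchoring_index(low_anchor=3, high_anchor=9,
                        low_estimate=5, high_estimate=8)
print(f"{index:.0%}")  # prints "50%"
```

The 41% figure for the real estate agents and the 48% figure for the business school students are the same ratio, computed from their respective estimate gaps.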

Kahneman has a phenomenal quote about the anchoring bias that I think sums this up really nicely. This is from, again, Thinking Fast and Slow. 

QUOTE: “The main moral of priming research is that our thoughts and our behavior are influenced, much more than we know or want, by the environment of the moment. Many people find the priming results unbelievable, because they do not correspond to subjective experience. Many others find the results upsetting, because they threaten the subjective sense of agency and autonomy. If the content of a screensaver on an irrelevant computer can affect your willingness to help strangers without your being aware of it, how free are you? Anchoring effects are threatening in a similar way. You are always aware of the anchor and even pay attention to it, but you do not know how it guides and constrains your thinking, because you cannot imagine how you would have thought if the anchor had been different or absent. However, you should assume that any number that is on the table has had an anchoring effect on you, and if the stakes are high you should mobilize your System Two to combat the effect.” End quote.

He talks about a couple of different things in there. One, he touched on priming, and I wanted to loop priming back into this because, if you haven’t yet listened to the priming episode or the episode about framing, all three of these are environmental effects - ways your environment can massively shape your decision making at a subconscious level, even if you’re totally unaware of it. So all of these effects are interrelated in many ways, and the ways that you combat them, the ways you think about them, are all interrelated. He also mentioned a study that we haven’t talked about, where a screensaver impacted people’s willingness to help strangers. That’s a study he talks about and digs into in Thinking Fast and Slow. 

Again, there’s a lot more research behind every single one of these topics. I tried to cherry-pick a few stark and powerful examples for the podcast to really drive the point home, but there are dozens more research studies that show all of these findings. The last thing to touch on briefly is System Two. We’ve touched on this in some of the other episodes, but System One and System Two are the names Kahneman uses in Thinking Fast and Slow for two different modes of thinking. System Two is essentially your willful processing power - willful, conscious attention. System One is how you read, how you process language, how you process images, and how you have emotional reactions. System Two is how you do things like long division. Kahneman digs much more deeply into both of those in the book, but suffice it to say, for the purposes of this quote, mobilize your conscious attention. Become aware of it. That’s how you combat things like the anchoring effect. That’s how you combat things like the priming effect and the framing effect.

All three of these are very influential phenomena - things that you want to be aware of, mental models that you want to have in your mental toolkit. So, whenever you see a number thrown out there, understand that it could be influencing your decision making, especially if you’re making quantitative decisions. This has a ton of implications, whether you’re buying a house, negotiating in business, talking about the value of something, or buying a car. People will try to use the anchoring bias on you all the time in your life. Sometimes it’ll happen by accident, sometimes it’ll happen consciously. But it’s something you want to really press pause on, think about, and be aware of.

On the flip side, you can also harness anchoring to your benefit when you’re presenting something and you want to frame it in a certain way. Remember, in the previous episode we talked a ton about how important simple turns of phrase are in shaping the way that things are framed, and in shaping people’s emotional reactions and decisions. So if you haven’t yet listened to the framing episode, I highly recommend checking that out. But if you want to influence people’s decision making - to get people to make the decisions that you think are the best possible decisions - anchoring can be another tool in that toolbox that can help you shape those decisions in a more proactive and effective way. 

July 20, 2016 /Lace Gilger
Mind Expansion, Decision Making

How This Simple Change In Wording Made 50% of Doctors Choose a More Dangerous Medical Procedure

July 13, 2016 by Lace Gilger in Decision Making

Do you think that your doctor makes their decisions based on data or on trivial factors such as how a sentence is worded?

Do you think that your decisions are typically rational and based on the facts?

In this episode we discuss how a twist of phrase made 50% of doctors choose a more dangerous medical procedure, what explains an 88% difference in organ donations in two similar countries, and how experts can make vastly different choices based on the same exact data as we explore the Framing Bias.

As Nobel Laureate Daniel Kahneman puts it in his book Thinking Fast and Slow: 
"It is somewhat worrying that the officials who make decisions that affect everyone’s health can be swayed by such a superficial manipulation."

The way things are presented can have huge implications for your decisions without you even realizing it and this all operates at a subconscious level beyond your conscious experience.
 
Behavioral economist Richard Thaler explains it this way: “The false assumption is that almost all people, almost all of the time, make choices that are in their best interest."

This episode is going to focus on drilling down and understanding a specific cognitive bias – a mental model – to help you start building a toolkit of mental models that will enable you to better understand reality.
 
Framing bias – along with Priming, which we covered in the last episode, and Anchoring – which we will cover in a future episode – are all cognitive biases that you want to know, understand, and be aware of – so that you can add them to your mental toolbox.

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

SHOW NOTES, LINKS, & RESEARCH

The specific research studies we cite are located within the book Thinking Fast & Slow (Cancer Treatment, Asian Disease Problem, and Organ Donation Problem).

  • [Book] Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein (see here)

  • [Book] Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin (see here).

  • [Book] Thinking, Fast and Slow by Daniel Kahneman (see here).

EPISODE TRANSCRIPT

Today, we’re going to explore how the way things are presented can have huge implications for our decisions, without us ever realizing it. How a simple change of wording can dramatically influence multiple different medical outcomes. What accounts for an 88% difference in organ donation rates, and how much of this operates at a subconscious level beyond our conscious experience. This episode is going to focus on drilling down and understanding a specific cognitive bias. A mental model. To help you start building that mental toolkit that we’ve talked about in previous episodes. When we did the interview with Shane Parrish, and when we did the interview with Michael Mauboussin, both of those episodes dig down and drill in and explain the concept of making better decisions by building a toolkit of mental models - of different ways of understanding the world, ways of understanding reality. And if you want to drill down and get to the fundamentals of why you should build that toolkit, and why it’s important, I highly recommend checking out both of those interviews. The mental model that we’re going to focus on today is framing bias. Framing bias, along with priming, which we covered in the last episode, and anchoring, which we’re going to cover in the next episode, are all cognitive biases that you want to know, understand, and be aware of, so that you can add them to your mental toolbox, so that you can be a more effective decision maker, and so that you can understand reality more effectively. I wanted to start out with a quote from the book Nudge by Richard Thaler. Great book, very focused on framing and describing framing and its implications. 

QUOTE: “The false assumption is that almost all people, almost all of the time, make choices that are in their best interest, or at the very least are better than the choices that would be made by someone else.” End quote.

One of the things we’re going to discover about the framing bias is that often, when we make choices, we think that we’re making choices based on logic, based on morality, based on rationality. But in many cases, the entire basis for why we made the decision is the frame. And by the frame, I mean that the entire basis for the decision is simply the way the question was worded. The framing effect, or the framing bias, is a cognitive bias in which people react to a particular choice in different ways, depending on how that choice is presented. There are three particular books that I really like that talk about framing, explain it, and drill down into it. The first is Nudge by Thaler and Sunstein, which I quoted from a moment ago. The second is a book that we’ve talked about in the past, Thinking Fast and Slow by Daniel Kahneman. Again, that book is very dense, very technical, but also incredibly rich in information. Not the best starter book if you want to go down this path and learn about a lot of these topics, but unquestionably a book you must read if you ever want a deep understanding of how some of these biases work. Lastly, Think Twice by Michael Mauboussin. Again, a previous podcast guest, someone we’ve talked about. If you want a view of how Michael thinks about the world, definitely listen to the interview that we did with him. But Think Twice is an amazing book that covers a number of different cognitive biases and especially drills down and explains the framing bias very effectively.

So, we’re going to look at a few different examples of how the framing bias can shape and impact our decision making - or shape and impact the decision making of people that we often consider experts. And remember, in past podcast episodes we talked about the authority bias, when we went through the “Weapons of Influence” series. With the authority bias, many times we think that people in authority have a special view on the world - that they know more than we do, that they make better choices than we do. In reality, authority often doesn’t matter. It doesn’t make that big of a difference. Authority gives us a sense of confidence, a sense of certainty, but it’s often falsely placed confidence, falsely placed certainty. And you’ll see that in a number of these examples. Let’s drill into the first example.

The first example was a study conducted by Kahneman and Tversky in conjunction with Harvard Medical School. We’re talking about serious experts here. This study was a classic example of the concept of emotional framing. The participants in this study were physicians - not students, but practicing doctors. They were given statistics about two different treatments for lung cancer. One option was surgery, the other was radiation. The statistics they were given on five-year survival rates clearly favored surgery. But there was a bit of a twist: surgery is slightly riskier than radiation in the short term. The actual statistic was that the one-month survival rate for surgery is 90%. Or, to look at it another way, there is a 10% mortality rate in the first month after surgery. But remember, the data the doctors were given clearly showed that surgery was the better option long-term, for all of the patients. The results: 84% of the physicians chose surgery when they were told that the one-month survival rate for surgery was 90%. When the physicians were told instead that surgery has a 10% mortality rate in the first month - again, these are two sides of the same coin, right? A 90% survival rate obviously implies a 10% mortality rate. But the doctors were only told one of those two sentences. Of the doctors who were told not that surgery had a 90% survival rate but rather that surgery has a 10% mortality rate in the first month, only 50% chose surgery. A 34 percentage point difference in the outcome. Surgery was clearly the optimal procedure, clearly the best choice in all instances. 
But just a slight tweak of the frame, a slight tweak of the wording, meant that among the doctors in the second case - the doctors who were presented with the fact that surgery has a 10% mortality rate in the first month - 34 percentage points fewer made the recommendation of surgery. 

So, from 84% down to 50%. That’s a massive change in something that seems so obvious, right? If there’s a 90% survival rate, clearly that means there’s a 10% mortality rate. But the way that our brains are wired, the way that the human mind is structured, means that presenting something - or, as we would say, framing something - in a different way changes how we react, even though the two framings are logically equivalent, exactly the same thing. “This procedure has a 90% survival rate” versus “this procedure has a 10% mortality rate” - you know, the first one even sounds better, sounds safer. I’d rather have a procedure with a 90% survival rate. But they’re the same thing. And these doctors at Harvard Medical School were influenced simply by that framing. Only 50% chose the optimal procedure when they were told it had a 10% mortality rate, whereas 84% chose it when they were told it had a 90% survival rate. As Kahneman says, QUOTE: “Medical training is evidently no defense against the power of framing.” Unquote. The scary implication here is that most of us passively accept the way that problems are framed, and therefore we don’t often have the opportunity to discover that our decisions and our preferences are what Kahneman and Tversky call frame-bound rather than reality-bound. I.e., the way the question is framed and presented changes the way we feel about it - changes the ultimate decision we make. 

This is not the only example of framing having a major impact on the way that experts feel and think about life and death outcomes. Another example is what Kahneman and Tversky call the Asian disease problem. In this study, Kahneman and Tversky had respondents look at an imaginary disease outbreak which was expected to kill 600 people. Respondents were presented with two alternative programs, which were phrased slightly differently. And this gets into one of the core tenets of something called prospect theory, which we’ll talk about in detail in a future episode on the podcast - something that Kahneman and Tversky created and discovered, and in many ways one of the things they’re best known for. So: there’s a disease that threatens the lives of 600 people. The first frame presented a choice between A and B. If program A is adopted, 200 people’s lives will be saved. If program B is adopted, there’s a 1/3 probability that all 600 people will be saved and a 2/3 probability that no one will be saved. Okay. So program A guarantees saving 200 lives. Program B gives a 1/3 chance of saving all 600 lives, and a 2/3 chance of saving no one. A substantial majority of respondents chose program A. They chose the certainty of saving 200 lives. Now, it’s important to note that statistically, those outcomes are identical - the expected value is identical between the two, since 200 is 1/3 of 600. So, really, we’re looking at whether people prefer the safe choice or the gamble, right? 

And this will come into play when we look at the second frame. The second way the same decision was proposed: if program A is adopted, 400 people will die. If program B is adopted, there’s a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. If you think about it, program A in the second frame is identical to program A in the first, and the same is true of the two versions of program B. But in the second frame, a large majority of people chose to gamble. They chose program B. This ties back to the same concept, the same idea of framing, but it gets at something else. When the outcome is framed as a good outcome, people prefer the sure thing over the gamble - this is also known as being risk averse. That’s why, when the frame is presented as saving 200 lives for certain versus gambling to save 600, people prefer the sure thing. They’re risk averse; they want to lock in the 200 lives they can save. However, when outcomes are framed as negative, people are risk-seeking. They tend to reject the sure thing and accept the gamble. When the same exact question is phrased as option A, 400 people die, versus option B, a 1/3 chance that nobody dies and a 2/3 probability of all 600 dying, people vastly prefer the gamble. 
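As a quick arithmetic check that the two frames really do describe identical gambles, here is a short Python sketch. The variable names are my own; the numbers are the ones stated in the problem above.

```python
# Frame 1 (lives saved): program A saves 200 for certain;
# program B saves all 600 with probability 1/3, and no one with probability 2/3.
expected_saved_a = 200
expected_saved_b = (1 / 3) * 600 + (2 / 3) * 0

# Frame 2 (deaths): program A means 400 die for certain;
# program B means nobody dies with probability 1/3, all 600 die with probability 2/3.
expected_deaths_a = 400
expected_deaths_b = (1 / 3) * 0 + (2 / 3) * 600

print(expected_saved_a, expected_saved_b)    # both equal 200
print(expected_deaths_a, expected_deaths_b)  # both equal 400
```

Every option has the same expected outcome - 200 saved, 400 dead - so the only thing separating the choices is certainty versus gamble, and the frame in which that gamble is described.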

Previously, these same conclusions had been reached in a number of different contexts looking at money - looking at how people behave in the financial markets. This is tied to the concept of loss aversion, which we touched on in the interview with Michael Mauboussin. The fascinating thing is that the same tendency shows up when we’re talking about health outcomes, when we’re talking about people’s lives. As Kahneman says, QUOTE: “It is somewhat worrying that the officials who make decisions that affect everyone’s health can be swayed by such a superficial manipulation, but we must get used to the idea that even important decisions are influenced, if not governed, by System One.” End quote. Again, we talked about System One in the last episode, but roughly speaking - this isn’t a perfect description - think of System One as your subconscious, rapid decision-making mind. So the Asian disease problem is a great example of how the same exact outcome can be framed in two separate ways. It almost seems silly talking about it, because logically it’s so obvious that if you save 200, the other 400 will die. Or think about the experiment with the Harvard Medical School: a 90% survival rate is the exact same thing as a 10% mortality rate. But explaining it in a different way, changing the frame, substantially changes the way that people act. And it’s very important to remember that when people face good outcomes, they’d rather be risk-averse. They’d rather lock in the sure thing - save those 200 people. But when it’s framed as a negative outcome, even when it’s the same situation - when it’s framed as condemning 400 people to die - they prefer the gamble of trying to save everyone. So, in both of those scenarios, the situations were actually identical. 
But changing the frame changed the way a substantial majority of respondents selected the outcome that they preferred.

Now we’re going to look at another example - one you may have heard of. It’s a very often-cited, very common example of how framing can have a substantial impact on another medical outcome. 

A study that was originally published in 2003 looked at the rates of organ donation in a number of different countries. The researchers tried to compare countries that were demographically and culturally similar, to see why they had these massive gaps. The two pairs they looked at specifically were Austria and Germany, and Sweden and Denmark. The organ donation rate in Austria is nearly 100%, but the organ donation rate in neighboring Germany was only 12%. What factor could explain the 88% gap between those two outcomes - an 88% gap in organ donation rates between two countries that, by and large, are very similar, whose inhabitants behave very similarly, live very similar lives, and have very similar traditions, morals, standards, cultural practices, and so on? Similarly, Sweden had an 86% organ donation rate. Denmark’s? 4%. These are massive gaps - and these are life-changing outcomes. Imagine an entire population of organ donors, versus a population where only 4% donate their organs. This is a life-and-death matter for many, many people - people who are waiting for organ donations. And the thing causing this was so, so simple. It was a framing effect. Again. These enormous differences are caused simply by the fact that in Austria and Sweden, the countries with extremely high organ donation rates, everyone is opted in to organ donation by default. To opt out, all you have to do is check a box that says “I no longer want to be an organ donor.” Conversely, in Germany and Denmark, you have to opt in to being an organ donor. That’s it. That’s the only difference. A simple checkbox - whether people are opted in by default to donating their organs or not. 

As Kahneman puts it in Thinking Fast and Slow, QUOTE: “that is all. The single best predictor of whether or not people will donate their organs is the designation of the default option that will be adopted without having to check a box.” End quote. 

It’s that simple. That’s the crazy thing about the framing bias. In situations that are totally obvious and totally transparent if you think about them logically, people make wildly different decisions, and whole societies arrive at vastly different outcomes, based on something as simple as taking two seconds to check a box. These outcomes have huge, dramatic consequences for the societies they occur in. Or, if you’re thinking about medical outcomes: simply the way something is phrased can change the way somebody makes a decision that materially impacts their life. That’s why framing is so dangerous sometimes, because we often don’t understand how the frame is shaping the way we think about the problem. Here is another great quote from Thinking Fast and Slow where Kahneman really sums this up nicely. 

QUOTE: “Your moral feelings are attached to frames, to descriptions of reality rather than to reality itself. The message about the nature of framing is stark. Framing should not be viewed as an intervention that masks or distorts an underlying preference. At least in this instance, and also in the problems of the Asian disease and of surgery versus radiation for lung cancer, there is no underlying preference that is masked or distorted by the frame. Our preferences are about framed problems, and our moral intuitions are about descriptions, not about substance.” End quote.

The way he’s phrased that is very important: our moral intuitions are about descriptions, not about substance. The way we viscerally feel about the option of saving 200 lives versus condemning 400 people, despite the fact that they’re the same thing, shows that our emotional and moral preferences are about the frames themselves, as opposed to the underlying reality. Thinking about the ways this might impact our lives on a day-to-day basis, Thaler, in the book Nudge, has another great quote.

QUOTE: “The thesis is that seemingly small features of social institutions can have massive effects on people’s behavior. Nudges are everywhere, even if we do not see them. Choice architecture, both good and bad, is pervasive and unavoidable, and it greatly affects our decisions.”

He uses a few phrases in there that we haven’t touched on before. “Nudges” is the term Thaler and Sunstein use in the book Nudge to describe some of these frames, and to describe what they call choice architecture. The interesting thing is that you can structure choice architecture in your own life in a way that helps you make better decisions. You can think about, and be consciously aware of, the frame. The sooner you become aware of it, and the sooner you boil it down to the logic behind it, the sooner you can see through the illusion of the frame. You can see through the false choices the frame creates and make much more effective decisions. Similarly, there are many, many ways you can think about framing things more effectively to achieve what you want to achieve. If you’re presenting information to people, or trying to convince someone to do something, think very carefully about how you have framed the situation, because the frame itself, just the wording of the situation, can have a dramatic impact on how people react to it, on the decisions they make, and on the way they’re going to feel about making those decisions.

Think back to the example of the Harvard Medical School study. Just a simple twist of phrase: “I think this project has an 80% chance of making it,” or “there’s a 20% chance this project is going to end in failure.” If you’re sending an e-mail to your boss, proposing something, pitching investors, or teaching students, whatever it may be, think very carefully about the frames you’re using, because the frames can have a serious impact on how people react and the decision they ultimately make down the road. 
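To make the equivalence of the two frames concrete, here’s a tiny sketch (a hypothetical illustration, not something from the episode) checking that the frames describe one and the same underlying reality:

```python
# Two framings of the same project outcome. The emphasis differs,
# but they describe a single probability distribution.
p_success = 0.80   # frame 1: "an 80% chance of making it"
p_failure = 0.20   # frame 2: "a 20% chance this ends in failure"

# Success and failure are complements, so the frames are logically identical.
assert abs((1 - p_failure) - p_success) < 1e-9

# The same equivalence underlies the "lives saved" framing:
# saving 200 of 600 people is the same outcome as condemning 400 of 600.
total, saved = 600, 200
condemned = total - saved
print(condemned)  # 400: "200 saved" and "400 condemned" are one outcome
```

The arithmetic is trivial, which is exactly the point: the frames differ only in wording, never in substance.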


 

July 13, 2016 /Lace Gilger
Decision Making

This Powerful Factor Controls Your Decisions And 86% of People Have No Idea It Exists

July 06, 2016 by Lace Gilger in Decision Making

Do you think you’re in control of your thoughts and actions?
 
What if things totally out of your conscious experience of reality actually controlled your decisions?
 
What if random phenomena – like the music you just heard, or the words on a billboard – changed the way you thought, the way you moved, and the decisions you made?
 
The power of your subconscious mind is much greater than you realize.
 
As Nobel Laureate Daniel Kahneman puts it in his book Thinking Fast and Slow: “You cannot know this from conscious experience, of course, but you must accept the alien idea that your actions and your emotions can be primed by events of which you are not even aware.”
 
In this episode of the Science of Success Podcast we dig deep into the Priming Effect – the way that your environment can shape your decisions, actions, and thoughts without you ever even realizing it.
 
We discuss:
-The powerful factor shaping people’s decisions that 86% of people were totally unaware of
-What caused ordinary voters to care more about funding the school system than the children’s own parents did
- How, “like ripples on a pond,” priming effects can shape and define our behavior in huge ways
-How the word Florida makes people behave like the elderly
-Another mental model to add to your tool-kit
-And much more!
 
Do you want to get smarter and make better decisions? Listen to this episode!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

SHOW NOTES, LINKS, & RESEARCH

  • Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. (see here)

  • Contextual priming: Where people vote affects how they vote (see here)

  • The influence of in-store music on wine selections. (see here)

  • [Book] Thinking, Fast and Slow by Daniel Kahneman (see here)

  • [Science of Success Episode] The Psychology Behind Making Better Decisions with Global Financial Strategist Michael J. Mauboussin (see here)

  • [Science of Success Episode] How To Stop Living Your Life On Autopilot, Take Control, and Build a Toolbox of Mental Models to Understand Reality with Farnam Street’s Shane Parrish (see here)

EPISODE TRANSCRIPT

Today, we're going to explore how our environment can shape our decisions without us ever even realizing it, how a change of music can dramatically shift your buying preferences, how the smells around you can change your behavior, and how this all operates at a subconscious level beyond your conscious experience. This episode is going to focus on drilling down and understanding a specific cognitive bias, a mental model to help you start building the mental toolkit that we talked about in previous interviews with Shane Parrish and with Michael Mauboussin. Both of them are fantastic thinkers and experts in human decision making, and in both of those episodes they recommended building a toolkit of mental models so that we can better understand reality.

This episode is one of those tools that you're going to put in your toolkit. This episode focuses on the specific cognitive bias known as priming, or the Priming Effect. Along with framing and anchoring, which we're going to cover in upcoming episodes of the podcast, priming is a strong, subconscious tendency whereby your environment can shape your decisions and your behavior without you ever being conscious of it happening. Priming is a phenomenon that can have a major impact on our actions, on the way we perceive the world, and on the things we do, but what many people don't realize about priming is that it often takes place at a completely subconscious level. I wanted to share a quote from Daniel Kahneman in his book, Thinking Fast and Slow, which I've recommended before on the podcast. Thinking Fast and Slow is a phenomenal book, very, very dense and very, very information-rich. If you're new to this topic, I would not recommend starting with Thinking Fast and Slow. I would say start with Influence by Robert Cialdini, or start with some of our episodes. The entire Weapons of Influence series that we've done on the Science of Success is a great place to dig in. But I wanted to share this quote with you from Daniel Kahneman in Thinking Fast and Slow about priming.

Quote: "Primed ideas have some ability to prime other ideas, like ripples on a pond. Activation spreads through a small part of the vast network of associated ideas. The mapping of these ripples is now one of the most exciting pursuits in psychological research. Another major advance in our understanding of memory was the discovery that priming is not restricted to concepts and words. You cannot know this from conscious experience, of course, but you must accept the alien idea that your actions and your emotions can be primed by events of which you are not even aware." End quote. That last part is essential: the understanding that events of which you are not even consciously aware can prime and change your behavior. It's one of the ways that your environment, the things around you that you don't control, can transform, shape, or change your behavior, and we're going to look at a few different examples of that.

The first example is something known as the Florida Effect. This is a classic experimental psychology study. A psychologist named John Bargh and his colleagues conducted an experiment at New York University. They took a group of 18- to 22-year-olds and had them assemble four-word sentences from a set of five words. They split the students into two separate groups. One group was given neutral words -- just random words, you know, table, apple, things that had no association with what they were testing for. The other half of the group received words associated with the elderly: words such as Florida, forgetful, bald, gray, wrinkle, et cetera. The key point is that at no point was the word "old" or the word "elderly" actually mentioned in this word scramble. After the students finished the exercise, they had them walk down a hallway to another room, and this is actually where the experiment really took place. The students who had been given words indirectly associated with old age walked down the hallway 13% slower than the students who had been given neutral words. In the next room, they asked the students if they had noticed a common theme among the words. None of the respondents said that there was any commonality, anything connecting the words. So, they were consciously unaware of the impact of the words, yet their subconscious picked up on the fact that these words were associated with the elderly. And, again, going back to the quote from Kahneman a moment ago, it's like ripples in a pond. These words, like "Florida" or "wrinkle", were associated with old age -- and what else is associated with old age? Walking slowly, moving slowly. 
At a completely subconscious level, these students walked 13% slower than their comparison group, simply because they had been subconsciously primed by words related to the elderly--again, the words "slow", "old", and "elderly" were never used; the words were things like "Florida"--and their walking speed slowed through an indirect association they were never conscious of.

The key thing that you want to understand and take away from the Florida Effect study is that they were consciously unaware, and that the thing that they were primed to do, to walk more slowly, was an indirect association of something that was never mentioned. So, again, priming effects can have a number of chain reactions, ideas connecting to other ideas like ripples in a pond, that can impact and change your behavior in a way which you're never conscious of. 

Another example of the Priming Effect is in school voting patterns. In Arizona in 2000, researchers looked at a number of different propositions to increase school funding. What they found was that when the polling station was located inside a school, voters were substantially more likely to vote in favor of the proposition increasing school funding. The funny thing about that: the effect of simply locating the polling station inside a school was greater than the difference between average voters and parents. So, the priming effect of just changing the surroundings where people vote, changing their environment, had a bigger influence on support for a proposition increasing school funding than whether or not the voters were themselves parents. 

Another great example is a study about music and music's subconscious influence on you. A 1997 study published in the journal Nature examined the impact of music on people's purchasing choices. Specifically, the researchers set up an experiment in a wine store. They put bottles of French wine and bottles of German wine next to each other on a shelf. Over the next two weeks, they alternated playing French music and German music. What they found was that when French music was playing, French wines represented 77% of sales. When German music was playing, German wine represented 73% of sales. Now, that finding alone is pretty fascinating: the notion that just by playing a certain kind of music you can produce that dramatic a shift in consumer preference, that dramatic an impact on people's buying behavior. But the most fascinating finding of the music study came when the customers were asked about their purchase choices. What do you think people said when they were asked, after they had purchased, "Did the music have an impact on your purchase decision?" 86% of people denied that the music had any influence over their purchase decision. 86%. Let that sink in for a second. Just like the Florida Effect, these priming effects take place at a subconscious level. Many of the people may not have even noticed what music was playing, but it clearly had a powerful impact. When French music was playing, 77% of the sales were French wine. When German music was playing, 73% of the sales were German wine. And yet when they were asked, 86% of people said that the music had no influence on their purchasing decisions. 

The real takeaway from this: The environment can prime you to make certain decisions, can change the behavior of your body at a subconscious level, and, in almost every instance, you're totally unaware of it. We simply don't realize that it's happening. And the reason it's so hard to understand this, the reason it's so hard to see these priming effects, is that they take place at a subconscious level. It's not something that's part of our conscious experience. It's not something that we see and understand every day. Our conscious experience is often one of the illusion of control, the illusion that we're making logical, rational choices, that the reasons we do things are based in thoughtful decision-making, and that we have control over our environment. The reality is that, oftentimes, our subconscious makes a decision that we're never consciously aware of, and we create justifications or reasons why we made that decision, or we're not even aware of it at all. In the example of the Florida Effect, the participants were not even conscious of the fact that they had been walking more slowly, and they were not even conscious of the fact that the words were associated with the elderly to begin with.

Priming effects can also be triggered by a number of different phenomena. Priming can be triggered by music, by smell, by sight, by words, by images. Another experiment, conducted in 2005 and published in the journal Psychological Science, explored the impact of smell and how smells can create priming effects. The researchers exposed people to the scent of an all-purpose cleaner and had them eat a crumbly biscuit. What they discovered was that participants who had been exposed to the scent of the all-purpose cleaner were substantially less messy. The people who had been exposed to the all-purpose cleaner kept their area neater, tidied up more, and generally made less of a mess. Again, they were never consciously aware that they had even been exposed to this smell. It's something that subconsciously changed and impacted their behavior. 

There are lots of influences throughout your life, things in your environment, things that happen to you, around you -- music, smells, images that impact your behavior, impact your thinking, impact your thoughts at a subconscious level. I wanted to share another quote from Daniel Kahneman's book Thinking Fast and Slow that sums this up very nicely.

Quote: "The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you. You do not believe that these results apply to you because they correspond to nothing in your subjective experience, but your subjective experience consists largely of the story that your System 2 tells itself about what's going on. Priming phenomena arise in System 1, and you have no conscious access to them." End quote. 

And Kahneman uses some terminology there. He uses the phrases "System 1" and "System 2". That's a concept he talks about and discusses throughout Thinking Fast and Slow. For the purposes of understanding this, essentially, System 1 is your subconscious processing power. It's the automatic, subconscious portion of the mind that does things like read words, process images, hear sounds, and draw conclusions. System 2 is your conscious effort, that deliberate focus on something. System 2 is what you use when you want to do long division. System 2 is what you use when you're planning and thinking deeply. He dives very deep into that topic in his book, and that's a subject, and a rabbit hole, for a future episode of the podcast. But, just putting that quote into context, the crazy part about priming effects is that you never experience them consciously. You don't have any memory or any examples of how priming has impacted your behavior, because it takes place at a subconscious level. But, as Kahneman notes, this impacts you. It impacts your behavior. It changes your decisions. It's a cognitive bias that you have to be aware of and have to understand, because once you understand it, you can start to leverage it and use it to shape your behavior in positive ways, and you can start to combat it. Remember: Awareness is the first step to uncovering and understanding a lot of these cognitive biases. 

And we've actually talked about the Priming Effect in previous episodes of the podcast. When we interviewed Scott Halford, the author of Activate Your Brain, and Josh Davis, the author of Two Awesome Hours, in both of those podcast episodes we talked about ways to harness priming to your benefit. We talked about and dug into how you can leverage the Priming Effect, the power of music and the power of your environment, to become more productive, more creative, and more effective, and to accomplish whatever it is that you want to accomplish. So, it's very possible to harness the Priming Effect to your benefit, but you have to be aware of it first. You have to understand its influences. Both of those episodes are great to go back and listen to now that you're aware of priming, if you want to think about and understand ways to positively use the Priming Effect to change your behavior for the better.

On the flip side, being aware of the Priming Effect helps you combat your environment priming you, changing your behavior, beliefs, and actions without your conscious input and awareness. And if you want to be more aware of priming effects, another amazing tool for doing that is meditation, which we've also talked about in a previous episode, where we share a great framework for meditation that's simple and easy and that you can implement tomorrow in 15 minutes.

That concludes our discussion of the Priming Effect. It's something that operates at a completely subconscious level, that we're often not aware of, but that can have substantial impacts on our lives. It can change our behavior; it can change the way we think, feel, and act in the world; and it's something that you need to be aware of. It's one of those cognitive biases that you need to have on your list. It's one of those mental models--remember, we talked about that--that you want to have in your toolkit. In the episodes with Shane Parrish and Michael Mauboussin, previous episodes of the podcast, both of them phenomenal thinkers about how to make better decisions, they both harped on the concept of building a toolbox of mental models so that you can more effectively understand reality. Both of those episodes are great if you haven't listened to them, and this episode is all about one of those specific tools: the Priming Effect, how to understand it, how to leverage it to your benefit, and how to be aware of it so that it doesn't trip you up and cause you to make bad decisions.

 

July 06, 2016 /Lace Gilger
Decision Making

The Psychology Behind Making Better Decisions with Global Financial Strategist Michael J. Mauboussin

June 15, 2016 by Lace Gilger in Best Of, Decision Making, Money & Finance

Do you want to improve your decision-making and build a better mental toolkit? In this episode we explore the psychology behind making better decisions with Michael J. Mauboussin. 

Michael is the Head of Global Financial Strategies at Credit Suisse. He is the author of three books, including More Than You Know: Finding Financial Wisdom in Unconventional Places, named in The 100 Best Business Books of All Time. Michael also serves as an adjunct professor of finance at Columbia Business School.

We discuss the following topics:

  • The interconnectedness of knowledge across many different disciplines

  • How to switch to the “outside view” to make better predictions and decisions

  • How to improve your results without being any smarter or better trained

  • A fascinating psychology study that demonstrates how we deceive ourselves

  • The biggest biases that cause investors to make bad decisions (and how to combat them)

  • Why the right tools aren’t enough to make you a successful investor

  • Concrete steps to start down the path of better decision-making

  • How to understand the difference between luck and skill in complex fields like business, investing, and entrepreneurship

  • How to become “numerate” and understand the physics and mathematics of misjudgment

  • What statistical base-rates are and how they can improve your decisions

  • How reversion to the mean really works and why you’ve been misunderstanding it

  • The power of checklists and other decision-making tools

  • And much more!

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

SHOW NOTES, LINKS, & RESEARCH

  • [Book] Peak: Secrets from the New Science of Expertise by Anders Ericsson (see here).

  • [Book] Tales from Both Sides of the Brain: A Life in Neuroscience by Michael S. Gazzaniga (see here).

  • [Book] A Curious Mind: The Secret to a Bigger Life by Brian Grazer and Charles Fishman (see here).

  • Michael Mauboussin Articles on Value Walk (see here)

EPISODE TRANSCRIPT

Today we have another awesome guest on the show: Michael Mauboussin. Michael is the head of Global Financial Strategies at Credit Suisse. He is one of my favorite authors and the author of three books, including, More Than You Know: Finding Financial Wisdom in Unconventional Places, which was named one of the 100 best business books of all time. Michael also serves as an adjunct professor of finance at Columbia Business School and is an expert in decision making, behavioral psychology, and all of those fields applied to the financial markets, especially. Michael, welcome to The Science of Success.

Michael:	Thanks, Matt. Great to be with you today.

Matt:	We are super excited to have you on here. So, to kind of kick things off and get started, tell us a little bit about... For listeners who might not be familiar with some of your books, tell us a little bit about your background, and how did you become so fascinated with the psychological aspects of human decision making, specifically within the context of investing, which you're obviously an expert at, but also, you know, even more broadly.

Michael:	You know, Matt, I think part of it is, as you mention, my association with Columbia Business School. I started teaching there in the early 1990s, and I was thinking a lot about what I was talking about with the students, effectively giving them tools to try to make them successful investors, and I sort of had this growing feeling that what made for great investing had less to do with the tools--you know, accounting and financial statement analysis and valuation, although those things are obviously really important--and much more to do with decision making and temperament, especially under stressful situations. So, probably in the mid-1990s, I started to just open up my reading quite a bit. A lot more science, a lot more in the world of psychology, and being exposed to this world was a lightning bolt of recognition that probably what makes for greatness... not just great investors, but really great in any field, is awareness of a lot of these psychological factors that improve the quality of decisions. So, it sort of changed my whole tenor, recognizing that a lot of things we teach, for example, in business schools or actually any kind of school, are just the ante to the game, but the real success has to do with this whole other field of decision making. So, that was sort of my epiphany, that recognition of where value comes from. The other thing I'll just mention is I was reading widely... You know, I was one of those guys who would read something and be like, oh, here's a connection to this, or here's a connection to that, and just sort of this recognition that we live in an extremely rich world, and that there are a lot of interesting connections between different things that may not be superficially obvious, but that I think could be some really fascinating, and really helpful, connections to allow people to think about the world more effectively.

Matt:	And that's essentially the concept of the idea of multi-disciplinary thinking, that Charlie Munger is a huge proponent of, and I know you're a huge proponent of, and something actually we touched on a little bit with one of our previous guests, Shane Parrish of Farnam Street. Can you explain a little bit more about, and maybe even provide some examples of, how different disciplines can impact each other or how maybe psychology can underpin finance, or something like that?

Michael:	Yeah, absolutely. The way I like to think about this is that it's like a toolbox, the metaphor of a toolbox, right. You might have the best hammer or the best screwdriver of anybody, but what you really want to do when you're thinking about the world is to have the right tool to apply to the right problem. And so, I think the Munger approach... And I do. I give huge credit for my thinking to Charlie Munger, who I think is the most articulate proponent of this way of thinking. I'd also mention another book, which many of your listeners may be aware of, by E.O. Wilson, called Consilience, and these ideas that many of the vexing problems in our world are at the intersection of disciplines and we need a sort of full toolbox to try to tackle them. So, to me, this is the way to think about the world. The other thing I'll just say, another quick comment, is that we've made huge strides in science over the last, let's say, 400, 500 years through reductionism, which is to say basically breaking things down into their fundamental components, and it's been extraordinary, and I think a lot of the things we take for granted in life, advancements, are the result of that amazing work. But I think increasingly, we're bumping into areas where we're dealing with systems that are complex, where reductionism really doesn't work, where, in a very real sense, the whole is greater than the sum of the parts. And that requires a very different way of thinking about the world. Now, if you think about academia in general, you get paid for specialization. You get paid for being narrow. But a lot of the problems in the world are kind of going the opposite direction, where it's important to think about things from different perspectives. So, one example I would give you, which I think is also a very powerful mental model in and of itself, and for me was another big eye-opening moment, is just thinking about markets as complex, adaptive systems. The stock market, right. 
So, if you say to an academic or a really traditional economist, "How should we think about how people behave?" they'll typically say, "Well, we've got these models of agents who are rational and they understand their different... They have information that comes in, they understand their preferences, they have utility functions, and then they make decisions on the basis of this." You know, we've known for a long time that empirically, that's not how the world works. So, if you try to extrapolate that into a model of markets, it just doesn't fit the facts all that well. Complex adaptive systems, by contrast, come at the world by thinking about the interaction of heterogeneous parts or agents, right, and you can think about other examples, like ants in an ant colony, right? Absolutely fascinating, because the colony itself is almost an organism. It has a life cycle and is sometimes aggressive, sometimes passive, but every individual ant is really basically clueless. They're sort of bumbling little agents within this whole. So, I think that's a much, much richer way... And by the way, your consciousness, for example, arises from neurons in your brain; you can think about example after example: people who live in New York City are components of a complex system. And when we take that set of tools and that way of thinking to the world of markets, it just opens up new ways of thinking about things. It gives you good reason to understand why markets are generally hard to beat, but it also gives you some insight as to why markets periodically go haywire. So, to me, this whole mental models thing is just a really, really powerful way to think about the world. Now, let's talk about the pros and cons. The pro is, I think, that if you do understand big concepts from various disciplines, it gives you a huge leg up in life. The con is it requires constant--basically--reading and thinking and learning. 
So, if you're going to get into this world, it ends up being sort of a commitment to perpetual learning. Now, that's not everybody's cup of tea, but if it is, I just think it's a really fun, exciting, and ultimately great way to find success.

Matt:	I love the idea that the traditional education or business school or whatever it might be is sort of the ante to get into the game, but if you really want to win, if you really want to compete at the highest level, you need to have a much richer and much deeper toolkit to really understand reality.

Michael:	Yeah, and I really think that's the case. The other thing I'll just say is that's certainly true, but I also think that there are gaps now in our education, especially, for example, for high school and college students. I'll give you one example, and this is sort of a negative example, but I don't mean to be too negative. One of my sons went to a really terrific high school and they decided to develop a leadership center for the kids, which is great, right. So, they were working on things like communication, cultural awareness, a lot of things you would say are really important. But what struck me as fascinating about it is that there was actually no segment or module on decision making or on psychology. So, I went to the guy who ran the program and I said, "This is really interesting, because at the end of the day, our future leaders are really people that need to be equipped in understanding how to make decisions, understanding being [INAUDIBLE 00:10:39], or understanding the scientific method and what science tells us. These are actually very essential elements in the future, and we're just basically not teaching those things." So, that, to me, is another area that we should be spending... And by the way, I'm about to go back to one of my college reunions, and when I went to college, the kinds of things, the decision making courses--they're now much more common--didn't exist at all. So, if you're someone of my age, in your forties or fifties, chances are you didn't have any access to this in school. There's more of it now, but certainly not enough of it, in my opinion. So, yeah, I think you have to supplement a lot of what your curriculum has been in order to become a more well-rounded individual.

Matt:	So, if you're somebody that's listening to this podcast, what are some easy steps or maybe some first steps they could take on the path towards starting to build this toolkit or starting to maybe understand human decision making more effectively, or make better decisions?

Michael:	Yeah, Matt, and I think that you know my answer, which is probably to start, whether you read or listen to audiobooks or something, but there are a handful of books that'll probably get you off and running. One book that I always loved, and I'm sure you're a fan of as well, is Bob Cialdini's book Influence: The Psychology of Persuasion. It's an easy book to read. It's got six big models about how you could influence people and their decision making, or you can also see or reflect on how those things influence you and your decision making. So, that's a great starting point. Another great one, of course, is Danny Kahneman's book, Thinking, Fast and Slow. It's probably a little bit more of a challenge, but so rich in terms of its content. So, that would be another thing I would say, is people reading that and, really, the degree to which you're willing to wade into, for example, the psychology literature is fantastic. So, that's one set of things. The second set of things is, if you have an appetite to do so, it's really great to try to hang out with people who are different than you. And that might be, if you're a finance person, hang out with artists or people who are into literature. You know, there was a very famous essay many years ago about the two cultures, sort of the literary culture and the scientific culture, and the argument was these cultures really didn't meld with one another, and I think those people who really try to reach out, to understand different points of view, to have diverse thoughts--I think that really forces you into being actively open-minded about the world and, I think, really gives you a leg up in a lot of circumstances.
So, I don't know if that's a gentle entry in, but probably the first thing I would say is to start to read some of these things and think about, be introspective about how they're influencing you or how your decision making processes work, and then just make an effort to reach out to people who are different. You know, is Brian Grazer the guy who wrote a book on creativity recently? Do you know that book?

Matt:	I do not.

Michael:	The Hollywood guy.

Matt:	We'll put it in the show notes.

Michael:	[Laughs] Yeah, exactly. So, we'll track down the exact book, but I think it's just called Creativity. And he had this sort of extraordinary story, which I absolutely love. He just made a point that when he read an article about somebody... He's a pretty famous producer now, but when he'd read an article about somebody, he would just say, "I want to meet that person," and he would call them up out of nowhere and say, "I'd love to have a cup of coffee with you. Can we make that happen?" And he'd reach out to people where it'd take six months, 12 months, 18 months to schedule something, but he was just reaching out, going all over the place. One week he'd be talking to an elite athlete. Next week he'd be talking to an astronaut. Then he'd be talking to a Navy SEAL. Then he'd talk to a police commissioner. I mean, this incredible, fascinating array of people, and he just made it part of what he was about, and I think he argues that really helped stoke his own personal creativity and mindset.

Matt:	That's fascinating. And that makes me think of two kind of quick notes for people who are listening. One is we actually did a whole... We did a six part series called Weapons of Influence where we basically... On the podcast, where we basically broke down each of the major pillars of influence and kind of dove deep into the research studies and the findings behind it. So, for people who want to kind of take that first step that Michael's recommending, that's a great way to get started. And the other thing, briefly, we also did a really cool episode recently on creativity, so, to kind of drill into some of this neuroscience behind that and how to spark your own creativity, for people who are listening.

Michael:	Super cool. Super cool.

Matt:	So, one of the things you touched on briefly was the idea of being numerate, and another way that I think Peter Bevelin called that in the book Seeking Wisdom is the physics and mathematics of misjudgment, and I know Munger did an amazing job in his speech about human misjudgment, kind of nailing all the different psychological factors. But two of the things I think that you've done an incredible job of really studying and explaining, Michael, are the concept of base rates and the concept of reversion to the mean, and I'd love to drill into talking about both of those, and I know there's a lot to unpack in each one of those, but in a way that we could kind of explain them to a layperson that's never heard of either of those concepts why they're important and what they are.

Michael:	Yeah. So, great. Great question. The base rate, it really comes from the work of Kahneman and Tversky, so Danny Kahneman, Amos Tversky. They were examining how people... Well, actually, the ideas precede that by many decades, but they sort of codified this to some degree. And the idea is that there are two ways of making forecasts of the world, what they called the inside versus the outside view. So, the inside view--and Matt, this is how you and I typically operate, right. You know, if I give you a problem, you give me a problem, our classic way to solve it is to gather a bunch of information, right, combine it with our own inputs, and then project into the future, right. So, if you go to a college student and you say, "Hey, when will you be done with your term paper?" they sort of think about what their calendar looks like, how hard the paper is, and so forth, and they make some sort of projection. So, that's the natural way to think. The outside view, by contrast--what we're calling the base rate--says, you know what? I'm going to think about my problem as an instance of a larger reference class. Basically, in plain words, I'm going to ask the question, what happened when other people were in this situation? Right, and it's a very unnatural way to think for two reasons. Number one is you have to leave aside your own information, this cherished information that you have, and the second is you have to find and ultimately appeal to this base rate. So, for example, in our term paper example, instead of saying, "Hey, when will you finish your term paper?" and the student thinking about their own schedule and the difficulty of the paper, you basically ask the question of all the students who had a term paper due a certain day: when did they actually complete it? It's a very different question, and it turns out that what we see in the decision making literature is the introduction of base rates actually massively sharpens the quality of forecasts.
So, we've applied it very specifically, for example, in the world of business to things like sales growth rates for companies. So, you might say, you know, hey, here's a company that has 10 billion dollars in sales. What's the sales growth rate going to be for the next three years or five years or ten years? So, you could model it. Again, bottom up. Sort of say, "Here's what they do. Here's how many new units they'll sell," and so forth. Or you can ask the question of companies of that same size over time, "What's the distribution of growth rate?" So, they're not mutually exclusive. Both of them go together, but that's the idea of base rates. And so, once you start to think about base rates, you start to see them, they're basically everywhere. But certainly realms like sports, realms like business, we have really good data on base rates and I think they can be really, really helpful. Reversion to the mean is another concept that is really important, and I think very, actually, quite tricky. So, reversion to the mean formally says that outcomes that are far from average will be followed by outcomes with an expected value closer to the average. So, the classic example of that is heights of people, right. Heights of fathers and sons, for example, specifically. So, what we know is that very tall fathers have tall sons, but the heights of the sons are closer to the average of all the sons. And likewise, short fathers have short sons, but again, the heights of the sons are closer to the average. So, there's sort of a squishing back toward the middle. So, that's an effect that happens, right, and it's just a statistical artifact. By the way, on the height thing, for instance, that sort of has to be true, if you think about it for a second, because otherwise there'd be people walking around who are 20 feet tall and two feet tall. That doesn't happen, right. So, here's an interesting way to think about the reversion to the mean, how powerful the force will be. 
So, if the correlation from one event to the next event is basically zero, then you should expect very, very rapid reversion to the mean. Let me give you one really concrete example from the markets. It turns out if you look at the Standard & Poor's 500--it's the most popular index in the U.S.--and you look at the results from year to year. So, on the X axis you take t=0, like what it did last year, and then on the Y axis, t plus one, what it does in the subsequent year, and you plot that going back to the 1920s. The correlation is basically zero. In other words, what happened last year tells you absolutely nothing about what's going to happen the subsequent year. So, as a result, the best estimate of what's going to happen next is some measure of the average, right. Reversion to the mean. And so, your best estimate for the market is basically the historical average. On the other extreme, if the correlation is perfect, very high, you expect no reversion to the mean at all. So, Matt, if you and I ran a sprint against Usain Bolt, he's going to win, right. And when we run again, he's going to win again. It's going to be perfectly correlated that he's going to win every single time, and there is no reversion to the mean. So, how we finished in prior races or how he finished in prior races doesn't really make a difference. He's going to win every single time. So, with this idea of reversion to the mean, you can think about how correlated outcomes are over time. That also gives you an idea of how rapidly reversion to the mean takes effect. So, super powerful, super important, and often really overlooked. Even people who do this for a living--for example, sports executives--somehow get tripped up and don't fully take into account reversion to the mean.

Matt:	One of the things that I really struggled with--and I've read your chapters in The Success Equation five or six times, and a bunch of Kahneman's stuff over and over again, trying to really drill that concept into my head--is the relationship between correlation and reversion to the mean. And also, you know, going back to the simplest example, flipping a coin: when people think about reversion to the mean, sometimes if a coin comes up heads four times in a row, people think, oh, I'm due a tails, right. But that's actually a completely incorrect way to think about and really understand how reversion to the mean actually functions.

Michael:	Yeah, exactly, and I think that... Look, one of the reasons it's so challenging is because we have intuitions about how all this stuff works, but if we want to be slightly more formal, it's exactly what you said. So, when correlations are low, reversion to the mean is very, very powerful, and that's my stock market example. When correlation is very high, reversion to the mean is not a powerful force. In other words, what had happened before is, for the most part, a pretty good estimate of what's going to happen next. And, by the way, that little heuristic, that's one of the tools in our toolbox. That's a mental model. It's an incredibly powerful mental model and, remarkably, very few people get it. The other thing, you know, Kahneman talks about this, but one of the other reasons that reversion to the mean is difficult is because our minds are wired to seek causality. If I give you an effect, some sort of an outcome, your mind is going to try to come up with a cause to explain it. And reversion to the mean is a concept that really has no cause and effect. And I'll give you an example that I always find to be fascinating. I mentioned before that with the heights of fathers and sons, tall fathers have tall sons, but the heights are closer to the average of all the sons. But it turns out, and this is somewhat counter-intuitive, that if you plot the heights of the sons, very tall sons have tall fathers, but the heights of the fathers are closer to the average of all the fathers. And we know that sons don't cause fathers, right. So, it gives you pause. In other words, reversion to the mean has no arrow of time, and the notion of causality really doesn't apply. It just applies any time you have two series that are not perfectly correlated with one another. And by the way, for the heights of fathers and sons, the correlation's almost exactly .5.
So, in other words, if you're six inches above average, the best estimate of your son's height would be three inches above average, half the distance between your height and the height of everybody else. So interesting, right. So, I applaud you for going back to the concept. I did the same thing many, many times, going back to it, and there are some other people besides Kahneman who talked about it effectively. I just think it's a really hard concept to get your head wrapped around and it also is worthy of a lot of study.
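Michael's rule of thumb here--shrink the observed deviation from the average by the correlation between the two series--can be sketched in a few lines. The numbers below are the illustrative ones from the conversation, not real data:

```python
def predict_next(mean, observed, correlation):
    """Best linear estimate of the next outcome: the observed deviation
    from the mean, shrunk by the correlation between the two series."""
    return mean + correlation * (observed - mean)

# Heights: father-son correlation is roughly 0.5, so a father 6 inches
# above average predicts a son about 3 inches above average.
son_deviation = predict_next(mean=0, observed=6, correlation=0.5)    # 3.0

# Markets: year-to-year index returns are roughly uncorrelated, so even
# a banner 35% year still predicts just the long-run average next year.
next_year = predict_next(mean=0.10, observed=0.35, correlation=0.0)  # 0.10

# Usain Bolt: outcomes nearly perfectly correlated, so no reversion at all.
bolt = predict_next(mean=0, observed=1, correlation=1.0)             # 1.0
```

The single `correlation` parameter is what controls the strength of reversion: at zero, the forecast collapses to the mean; at one, the past repeats exactly.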

Matt:	I think the trickiest part is the very counter-intuitive notion that there's no cause and effect. People think that it means there's some kind of cause that's going to make something happen, when in reality there's no arrow of time, there's no causality at all.

Michael:	Yeah. So, I would say, Matt, to be a little bit more careful about it, it doesn't mean the causality isn't part of it. It just doesn't require causality, right.

Matt:	Yeah, that's definitely a better way to say it.

Michael:	So, the example I give that also... Well, I'll give you a quick story on this. I was presenting at... it was actually an academic conference, and it was on behavioral strategy. Super interesting. So, these are professors of strategy, corporate strategy, who have a behavioral bent. Super interesting topic. So, I was doing a presentation a little bit on luck and skill stuff, and I showed them a very classic, well-known picture where you take, say, 100--I'm just making this up--take 100 companies and you rank them in quintiles, so from top to bottom, specifically from the highest returns on capital to the lowest returns on capital, and you follow those cohorts over time. What you'll see is the high return on capital cohorts go down and the low ones go up, which is exactly what reversion to the mean would indicate. So, I show that slide, and everyone's sort of, you know, amen-ing and high-fiving, and they all get that, right. But then I flipped the data and started with 2014 and went backwards. So, I went from 2014 back to 2005. And again, what you do is you rank the companies on 2014 return on capital, again, highest to lowest, and then you follow those cohorts back in time. And what you find is the same picture.

Matt:	That's wow.

Michael:	So, you say, why would returns on capital go down over time? And the classic answer in economics is competition, right. So, if you're earning very high returns, maybe I'll come in and try to take part of your business away. That makes total sense. But clearly, competition can't work backward, right. So, that's what's flummoxing, because competition is such a satisfying answer as to why returns go down, but it doesn't really explain what we're after. It only partially explains what we're after. It's a really interesting point.

Matt:	And I think that the mind invents reasons why it's happening, when often it's just a statistical artifact.

Michael:	Yeah. And that's the work... And that's another thing I would recommend. I find this to be almost infinitely fascinating, but it's the work by Michael Gazzaniga, who is famous for his work on split brain patients. These are typically people who have suffered from severe epilepsy and, to address those seizures, surgeons sever the corpus callosum, the bundle of nerves between the two hemispheres of the brain. And what that opened up for Gazzaniga, and Roger Sperry before him, was this opportunity to study modularity in the brain, and what Gazzaniga found was that in the left hemisphere, where language resides for most people, there's a module they've now dubbed the interpreter, and the primary job of the interpreter is to find causes for every effect. So, it's a sort of cause-and-effect closing machine. And to your point, often in life, cause and effect are clear. You throw the rock at the window and it smashes. That's cause and effect, right. But the point is that if there's randomness, there's luck--going back to your coin tossing example--there's some sort of stochastic process, your mind is just going to make up a cause. It's fabricated, right, because it wants to close the cause and effect loop, and what Gazzaniga was able to show so brilliantly and so poignantly is that, with these experiments with these split brain patients, they could really isolate where this is happening and come up with these really fascinating results. And Gazzaniga wrote a book last year where he makes this quite powerful claim that that module, that cause and effect connection, is the thing that distinguishes humans from other species most fundamentally, which is really interesting if it's true. So, I think that's a really important thing to keep in mind, too, is that our minds are constantly closing the cause and effect loops and it's not above any of us.
We all do it and we just have to be very, very mindful of the stories that we're telling ourselves, because sometimes they're true and sometimes they're not.

Matt:	And I don't know the specifics of those studies, but essentially, what they were doing, they had them open a door or something, right, and then the other hemisphere of the brain would invent a reason why they had done it or something, right?

Michael:	Yeah, totally. Exactly. So, I mean, there are lots of different examples. They would show pictures or whatever it is, but one simple example, yeah, would be something just like that. They would flash some words to the left visual field, which goes to the right hemisphere. Something that will say to the patient sitting down, "Stand up." So, the left visual field sees it, the right hemisphere processes it, and the patient stands up. So, it's interesting. Of course, the person knows that they're standing up, but the left hemisphere has no access to that cue. Now the researcher will say, you know, "Patient, why are you standing up?" And the research is almost humorous, because these people would fabricate these sort of elaborate, crazy stories. You know, my left knee is sore and I want to stretch, or something like that, right. They would fabricate something that would sort of hold the whole thing together. But obviously, it was completely contrived. So, again, you get these chuckles as you see the things these people are saying, but the more serious and fundamental point is that we're all doing it all the time and we're just not mindful of it. So, this is just shining a spotlight on something that we're all doing all the time. It's a really hard thing to do, but it takes discipline to say, am I fabricating a narrative here, or is this a luck-laden activity or a luck-laden field? Am I simply just capturing luck here and making up a story to try to make for a cohesive world?

Matt:	I think that's the critical point, is that just because... It's happening in the research, but the reality is it's happening every single day to everyone who's listened to this podcast, and both of us.

Michael:	Precisely. Absolutely.

Matt:	Well, I think that's a good segue into the idea of cognitive biases, and I know that's something you're very knowledgeable about. What are some of the most insidious or even some of the most common cognitive biases that you see people suffering from? And maybe specifically in the context of investing, or even broadly?

Michael:	Yeah. So, there are really two things that I would mention in investing. There are many more. One of them, which is extremely difficult to sidestep, is confirmation bias. This is the idea that even if you struggle to make a decision--let's say buy an investment, buy a stock or what have you--even if you sort of struggle to come to that conclusion, once you've made a decision, we all have a natural tendency to seek information that confirms our point of view and to dismiss or disavow or discount disconfirming points of view. And one of the things we've learned, you know, certainly, and I think a lot of what we've been seeing in computer science the last 25 or 30 years has been strongly reinforcing, is this idea of updating as new information comes in. So, that's Bayes' theorem. So, you have a prior... you have a point of view of how the world works. New information comes in and, really, if you're doing your job properly, you should be updating your view, updating your prior, given this new information. And, unfortunately, the confirmation bias is this sort of huge brick wall that prevents new information from finding its way into your mind or finding its way into your decision making. So, that's the first one that's a really big one. The second one is probably overconfidence, and this is very easy to demonstrate if you get a group of people. People tend to be very overconfident about topics that are a little bit away from their own bailiwick. So, if I give you questions that you know a lot about, you'll do fine, but on things that are just a little bit on the margin from that, you'll tend to be overconfident. And the way that tends to manifest in an investing setting, for sure, is people tend to project ranges of outcomes that are too narrow.
In other words, they think they understand the future better than they actually do, and they fail to consider possibilities, whether they're really good possibilities or really bad possibilities, and that's, I think, the more pernicious component of overconfidence. So, those are two that come to mind, but boy, you know, things like... We could go on and on. Loss aversion. So, we suffer losses more than we enjoy comparable-sized gains. That's a really big one that looms large in a lot of our decisions. So, there's a long list of them, but those two probably, confirmation bias and overconfidence, are probably the one-two that I would list first.
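The Bayesian updating Michael describes can be made concrete with a minimal sketch. The numbers here are hypothetical, chosen only for illustration: a 60% prior confidence in an investment thesis, and a piece of disconfirming news that is twice as likely to appear in a world where the thesis is wrong:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical numbers: you hold a thesis with 60% confidence, and news
# arrives that is twice as likely if the thesis is wrong (0.40 vs 0.20).
posterior = bayes_update(prior=0.60, p_e_given_h=0.20, p_e_given_not_h=0.40)
# The math says confidence should fall from 60% to about 43%; confirmation
# bias is the brick wall that keeps that revision from happening.
```

The point of the exercise is not the exact numbers but the direction and magnitude of the revision, which is what Michael says most people fail to apply.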

Matt:	And what do you think are some ways that people can combat each of those?

Michael:	So, for confirmation bias, really, the key is to be as open-minded as possible. Jonathan Baron at the University of Pennsylvania's got this beautiful phrase. He called it actively open-minded, and it's this idea of really, truly trying to be as open as you can to new information or new input. And the second thing--I think very few people are going to be formal about doing something like Bayes' theorem, but understand the idea behind Bayes' theorem, which is: you have a point of view, new information comes in, and are you revising your view, both in the correct direction and by the correct magnitude? So, those would be some ways to try to do that. For overconfidence, the key is to just... and we can go back to our discussion a few moments ago about base rates, is just to continue to compel yourself to think about alternatives, right. I'll give you one example that's a very simple one. I joke with my students at Columbia Business School, often when there are stock recommendations, you know, you see someone on CNBC or something, or they recommend a stock for purchase, they'll often say, "Well, the upside is 30% and the downside is 10%." Something like that, so it sounds like three to one. Pretty good, right? But think about it, just statistically, for a moment: the standard deviation of the stock market, right, so how fat the bell shape of the distribution of returns is. It's been about a 20% standard deviation over the last 85 years or so. But that's a diversified portfolio, of course. So, the standard deviation of an individual stock is going to be higher than that. Let me just pick 30% to make the numbers easy. So, the average stock, let's say roughly speaking, would be up about 10%--mean return, average--with a 30% standard deviation. So, just translate that into statistics. That would say that about 68% of the time, it's going to be between up 40%, right, 10% mean plus 30% standard deviation, and down 20%, 10% mean minus 30%. So, 40 to -20.
So, I just joked about this 30% upside, 10% downside. You know, just one standard deviation is wider than most analysts are willing to accept, and certainly going out two standard deviations, it's vastly wider. So, impose this discipline on yourself to understand what the underlying distributions look like and try to think about ranges of the future that are wide enough. And then there are other techniques, which we could talk about, and I think you probably have covered some of these in some of your prior podcasts, but things like pre-mortems. So, these sort of structured ways to get people to think about different points of view are also some nice techniques to allow you to do that.
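Michael's back-of-the-envelope can be written out directly. These are his round illustrative numbers (a 10% mean return, an assumed 30% standard deviation for an individual stock), not real market data, together with the rule of thumb that roughly 68% of a normal distribution falls within one standard deviation of the mean:

```python
# Round numbers from the conversation, not real data.
mean_return = 0.10   # assumed average annual return of an individual stock
std_dev = 0.30       # assumed standard deviation of that stock's returns

# Roughly 68% of outcomes fall within one standard deviation of the mean.
one_sigma_low = mean_return - std_dev    # about -0.20, i.e. down 20%
one_sigma_high = mean_return + std_dev   # about  0.40, i.e. up 40%

# A typical analyst call of "up 30%, down 10%" spans only 40 points,
# narrower than even the one-sigma band of 60 points.
analyst_span = 0.30 - (-0.10)                      # 0.40
one_sigma_span = one_sigma_high - one_sigma_low    # 0.60
```

The comparison of the two spans is the whole argument: the analyst's stated range is narrower than one standard deviation of the underlying distribution, which is what overconfidence looks like in numbers.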

Matt:	You know, we actually use pre-mortems in our business, but it's not something that I've talked about at all on the podcast. I'd love for you to kind of extrapolate on that concept.

Michael:	Sure. I mean, so most people know about post-mortems, right? In other words, the patient has died or something adverse has happened to the patient, and we sit around as a medical community and say, given the facts that we had at the time and our technology, what could we or should we have done differently to get to a better outcome? And we're also very familiar with scenario forecasting. So, we sit here in the present. We peer into the future and say, "Here are the possibilities we should consider as we make a decision." A pre-mortem, as you've already gathered from the name, is a very different exercise. It effectively launches you into the future, and you look back to the present. So, now it's, for example, June 2017 and we look back to today, June 2016. This was developed by a psychologist named Gary Klein, so just to give props to him--he's the guy that developed this. And so, we can tie together two ideas here. So, here's the classic way to do this. You say, "Let's sit down. We'll meet in our conference room." I suspect this is what you guys do in your business. And you say, "We're going to think about making a particular decision." Let's say it's an investment decision, or a business decision to expand, or what have you. And what we're going to imagine, then, each of us, is that this decision turned out to be a fiasco. Total disaster. We're all embarrassed about it. But now it's June 2017, so it's a year from now. So, each of us is going to write a little narrative, a little 200-word essay, about why this decision turned south. And it's very important to do it independently, and it's very important to do it from the point of view of the future looking back to today, right. And then you combine the different inputs, and it turns out that that exercise tends to generate substantially more alternatives or scenarios than simply standing in the present looking to the future.
And by the way, is that consistent, Matt, with your own experience in your own company?

Matt:	Oh, yeah. Absolutely.

Michael:	Yeah. And so, let's tie this back to the idea of the interpreter. You might say, "Well, hey, I'm looking at scenarios. I'm thinking about this already. Why is a pre-mortem adding value?" And the answer, I believe, is that by launching yourself into the future and assuming that this particular outcome has occurred, what you do is wake up your interpreter, right. This little module in your brain--you've now given it a fact and you're saying, "Hey, interpreter, why did this go bad?" And the interpreter's like, "I'm up to this task," and starts generating particular causes for it, right? So, in a sense, with scenario planning, standing in the present and looking to the future, the thing isn't done yet, so you're not really thinking about causes in a very rich sense. With the second, the pre-mortem, you're basically recruiting your interpreter, in a sense, to help you understand scenarios more richly. Isn't that cool? So, I think that's part of the psychological reason why pre-mortems can be more effective than simple scenarios. And, you know, my experience is very consistent with yours, that organizations that have adopted and embraced pre-mortems tend to report that they have much richer discussions, much more heated debates, and ultimately probably make better decisions as a consequence of going through the exercise.

Matt:	Another related concept that we've used a number of times is something from the military called a Red Team. Have you ever heard of that?

Michael:	Yep, absolutely. So, we wrote a piece about decision making where we talked about different things, including Red Team, Blue Team very specifically. And, you know, you may have mentioned this before, but the red team typically is the attacker, and the blue team is the defender. It comes from military strategy, of course, but today one great, very relevant example is cybersecurity. So, you might say, "Hey, chief technology officer, are we protected from cyber-threats?" And he or she may say yes, but you might hire a hacker to be your red team, to challenge yourself to see where your vulnerabilities lie. And so, red team... And, by the way, this was my prior job. If we had a particular investment that wasn't working out well, or a thesis that didn't seem to be unfolding, we actually would do this: you'd assign some people to go off and develop the counter case, the devil's advocate case. You'd have people defending the point of view of the firm, and we'd just let people sit across from each other in a conference room, and everybody else would be judge and jury and we'd let them go at it, which was great. I'll tell you the one thing that I learned. A couple of things that I would just add onto that. One is that in Red Team, Blue Team, I think it's really important to distinguish between facts and opinion, and I think in a lot of our discussions in general, by the way, we tend not to distinguish as carefully as we should or could between facts and opinion. So, there's a really interesting exercise I'd recommend all the listeners do, if they have a few minutes, which is to pull out an article. For example, something you either really agree with or something you really disagree with, right. So, something that's really polarizing for you.
And then take two different color highlighters, say blue and yellow, and with one color, highlight what you would deem to be facts, and with the other color what you would deem to be opinion, and then simply step back from the document, and whether you agree with it or disagree with it, try to have a balanced assessment as to whether you're being persuaded or not persuaded by fact or by opinion. That's super cool. The second thing I'll mention, which was a new thing for me, is that Adam Grant's a great professor at the University of Pennsylvania, and he wrote a book called Originals. I don't know if you guys talked about that. There's some stuff on creativity in there, as well.

Matt:	Have not.

Michael:	But Adam talked about Red Team, Blue Team, and he actually made a point that I didn't appreciate fully until I read it. And he said, "If you're assigning red team responsibility in your organization, what you want to find is someone who really doesn't believe in the thesis." You don't want to just say, "Hey, can you be the devil's advocate?" You want someone who actually doesn't believe in the thesis, someone who really is the devil's advocate, and he just says that enriches the dialogue greatly, versus having someone that's sort of an innocent bystander, grab them by the collar and say, "Go tell us why you're against this." So, that was another little wrinkle that I just learned about, which I think could add a little value in the process. 

Matt:	And another tool that I know you're a big advocate of is checklists. Can you talk a little bit about that, how important they are and how they can improve decision making?

Michael:	Yeah, absolutely. You know, I was really inspired, and I think many others, originally, by Atul Gawande's article in The New Yorker, which ended up being a book, The Checklist Manifesto. But the protagonist of that original New Yorker article, and to a large degree, the book, is a guy named Peter Pronovost, who's a doctor at Johns Hopkins. And, actually, we had a conference a number of years ago where we invited Pronovost to come in. And the story's nothing less than astounding, where Pronovost basically... And by the way, he had lost his father to a medical error, so it was very real and very personal for him. Where Pronovost basically introduced a very simple five-step checklist for putting tubes in, intravenous tubes, and found that they could massively reduce infection rates, saved lots and lots of lives, and I think Gawande in the book argues that Pronovost may have saved more lives in the United States than any other person in the last ten years or so. So, this sort of informs us that... By the way, doctors, if you ask them what they need to do before putting a tube in, they know what to do. It's not that they lack knowledge. It's really a lack of execution. And so, I think the point that Gawande makes in the book that I think is so powerful is that in every field where this has been studied, be it aviation, medicine, construction, a faithful... First of all, coming up with a good checklist and a faithful use of the checklist has led to better results, and this is without making the underlying users any smarter or any better trained. So, it's just hewing to the process more accurately, which is really fascinating. So, I think a lot about this in the context of investing. Now, investing is a little bit of art and a little bit of science, and I think where the checklists really do apply very effectively is in a lot of the process-oriented stuff. So, how to do certain types of calculations. 
Basically, it's sort of the fundamental components of investing analysis. Now, the art part comes into some other elements of interpretation, but I would just say if you have components of whatever job you do, and I think almost all of us do have components that are somewhat algorithmic, where consistency and accuracy are really, really helpful, you should be thinking about, if you're not doing it already, developing and applying checklists. Gawande's book is fantastic. Pronovost, by the way, himself, wrote a book about this topic, and maybe the last thing I'll say that came out of Pronovost's book, which I think is very important, is that he said one of the keys to checklists succeeding is actually gathering and analyzing data. In other words, being scientific about this, not sort of just a nice idea of having a checklist, and I think that was one of the keys to Pronovost's original wild success at Johns Hopkins, was not just that they developed a proper checklist but they figured out ways to get the doctors to use it, and then they really kept track of it and gave the doctors feedback. And so, this idea of data collection and feedback is also a really, really key element to this whole thing.

Matt:	Changing directions a little bit, I'd love to dig into some of the stuff you talked about in The Success Equation, kind of untangling luck from skill and the concept of the luck-skill continuum. One of the tools or mental models that you use to describe that phenomenon was the two jars model, which I found to be extremely helpful. I'd love for you to kind of explain that a little bit.

Michael:	Sure. So, you know, and by the way, luck, skill, the whole topic of The Success Equation, it had been sort of lurking in the shadows for me for many, many years. I played sports in college and high school and I'm a sports fan. Clearly a big deal in the world of investing, and also if you look at corporate performance, it's almost everywhere you look, this idea of luck was sort of there, but hard to pin down. And I read Fooled by Randomness by Nassim Taleb in 2001. That certainly got me thinking more about that, and I think Taleb does an incredibly effective job in that book of sort of underscoring the role of luck, but didn't really do much to help us quantify a lot of this. So, the cornerstone of the book, as you point out, is called the luck-skill continuum, and the way to think about this is that you just draw a line and on the far left you put activities that are pure luck, right. So, roulette wheels or lotteries, where really, there's no skill whatsoever. And on the far right, you might put pure skill activities. So, things like running races, or chess is probably over there. And then, just thinking about arraying activities between those two extremes. So, where does a basketball game fit on that? Where does bowling? Whatever it is, right. So, that in and of itself, the methodological approaches to trying to do that was really, really interesting. But, as I got into this, as you point out, I was trying to conceptualize the so-called two-jar model. So, the idea is that your outcome for whatever activity is going to be the result of drawing a number from a jar filled with numbers for skill, and then drawing a number from a jar that's got luck. Right, so you're going to pull two numbers out, add them together, and that'll be your outcome. Now, if you're on the pure luck side of the continuum, for example, you'll have a luck distribution. You can envision it as a bell-shaped distribution. 
And your skill jar is filled with zeroes, right. So, only luck will make a difference. If it's on the pure skill side, you know, you have a skill distribution and you're drawing zeroes from luck, so only skill matters, but almost everything in life is sort of these two rich distributions colliding with one another. And the question is, how much is each contributing? So, I just think that's... And by the way, one of the really nice things about the two-jar model is it allows us to understand to some degree things like reversion to the mean, which we spoke about before. It allows us to appreciate the fact that great outliers--for example, streaks in sports of consecutive hits in a baseball game or consecutive shots made by a basketball player--are always, and almost by definition, going to combine great skill and great luck. Because, if you think about it for a second, that has to be true, right. Not all skillful players have the streaks in sports, but all the streaks are held by skillful players, right, because skill is the prerequisite and luck comes on top. So, to me, it's just a very, very vibrant way to think about a lot of things in life, and the key point of The Success Equation is not just thinking about these topics, but hopefully providing some people with some ways to think about, concretely, how they have to deal with the world differently as a consequence of understanding the role of luck.
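[Editor's note: the two-jar model described above is easy to sketch as a quick simulation. This is a minimal illustration; the specific jar contents are made up for the example, not taken from the book.]

```python
import random

def two_jar_outcome(skill_jar, luck_jar, rng):
    """Draw one number from each jar and sum them: outcome = skill + luck."""
    return rng.choice(skill_jar) + rng.choice(luck_jar)

rng = random.Random(42)

# Pure-luck activity (e.g., roulette): the skill jar holds only zeroes,
# so only the luck draw makes a difference.
roulette = [two_jar_outcome([0], [-3, -1, 0, 1, 3], rng) for _ in range(1000)]

# Pure-skill activity (e.g., a running race): the luck jar holds only zeroes.
race = [two_jar_outcome([-2, 0, 2], [0], rng) for _ in range(1000)]

# Most activities: both jars contribute. Note that an extreme outcome (a
# "streak") requires a high draw from BOTH jars -- great skill plus great luck.
mixed = [two_jar_outcome([-2, 0, 2], [-3, -1, 0, 1, 3], rng) for _ in range(1000)]
best_possible = max([-2, 0, 2]) + max([-3, -1, 0, 1, 3])  # 2 + 3 = 5
```

Reversion to the mean falls out of the same picture: an extreme result likely included an extreme luck draw, and the next luck draw will probably be closer to average even if skill is unchanged.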

Matt:	And one of the things that I'm really fascinated with is the concept of deliberate practice, and you touch on that and how it relates to and applies more specifically in skill-dominated systems. But I'm curious, you know, how would you think about applying something like deliberate practice, or maybe the core lessons behind deliberate practice, to a field like investing or business or entrepreneurship?

Michael:	Yeah. Super interesting. And so, deliberate... I don't know if you've... There's a brand new book by Anders Ericsson called Peak, on this...

Matt:	I have not heard of it. I'll have to read that.

Michael:	Okay, yeah. Check it out. So, Anders Ericsson just wrote a book called Peak, just as it sounds, which I just read a couple of weeks ago. So, that is his... you know, talking about deliberate practice, just to reiterate for all the listeners, deliberate practice is this idea of practicing that is at the cusp of your ability, so a little bit at or right beyond your ability, often where you have a teacher or coach, someone who can give you instruction, and you're getting quality feedback. So, you're improving at the cusp of your skill level. So, as he points out, a lot of us practice things, or we do things that are like practice. We do things over and over, or we even practice but we don't really satisfy the requirements of deliberate practice. It's usually not at or beyond the edge of our capability. We often don't have coaches. We often don't get the quality feedback. And, as Ericsson expresses it, deliberate practice is not a whole lot of fun, right. It's actually very tiring, because you're constantly pressing yourself. So, I wrote a piece about this actual topic of deliberate practice and 10,000 hours back in 2004. It came before Gladwell's book and so forth, and I've struggled since the moment of writing that piece about what deliberate practice means. What is this idea of working beyond our boundaries and getting feedback and so forth? So, I don't know that there's a perfectly good example of that, so maybe I can make two points. One is what I argued in The Success Equation, which is that skill improvement or skill development through deliberate practice is absolutely valid in fields where your output is an accurate reflection of your skill. So, what kinds of things would that be true of? It would be, you know, music, if you're a musician. Athletics, it would be true. Chess playing, it would be true. So, there's certain fields where the output is an accurate indicator. There's very little luck that's filtering the outcomes, right. 
So, that's where deliberate practice really is good. As you slide over to the luck side of the continuum, what happens is the connection between your skill and the outcome is colored greatly by luck. So, a trivial example I'd give, Matt, is: if you're a blackjack player and you enjoy playing blackjack and you go to Atlantic City, you may play properly with standard strategy and lose badly for a few hands, or you may play very foolishly and win for a few hands, right? So, this connection between your skill and the outcome is broken. And when that's the case, what I argue is you should focus almost exclusively on process. And process, it's got elements of deliberate practice, but process is going to have three components, as I would argue. One is an analytical component. That is both trying to find situations where you have an advantage and also how do you bet, given your advantage. I'm going to call the second component behavioral, and this covers a lot of what we've been talking about today, but are you aware of managing and mitigating the behavioral biases that we all fall prey to? And the third I'm going to call organizational, which is we all work for companies or are parts of organizations or parts of teams. None of them are perfect. Agency costs can be a very big deal. What are we all doing collectively to minimize those organizational drags, right. So, to me, it becomes very process-oriented, and I think if you look at the elite performers, whether it is in sports betting or even sports team management or investing, you get a very common thread, that those folks are almost always and almost exclusively focused on process, in the full faith that a good process leads to good outcomes over time.
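[Editor's note: the blackjack point, that luck breaks the short-run link between skill and outcome, can be shown with a toy simulation. The win probabilities below are invented for illustration, not real blackjack odds.]

```python
import random

def net_result(win_prob, n_hands, rng):
    """Net units won over n_hands, counting +1 for a win and -1 for a loss."""
    return sum(1 if rng.random() < win_prob else -1 for _ in range(n_hands))

rng = random.Random(7)
GOOD, BAD = 0.49, 0.40  # hypothetical: sound basic strategy vs. foolish play

# Over a few hands, luck dominates: the more skillful player can easily
# come out behind, so the outcome says little about the process.
few_good = net_result(GOOD, 5, rng)
few_bad = net_result(BAD, 5, rng)

# Over many hands, the luck averages out and the skill gap shows through.
many_good = net_result(GOOD, 100_000, rng)
many_bad = net_result(BAD, 100_000, rng)
```

This is the practical reason to judge a process rather than a handful of outcomes on the luck side of the continuum: only a large sample lets the skill difference emerge.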

Matt:	I think that's great advice and that's something that I've struggled with a lot, is kind of how to reconcile that or how to deal with the challenge of getting whether it's accurate feedback or whatever else it might be in systems where there's a very fuzzy relationship between skill and outcome.

Michael:	Exactly.

Matt:	So, you've touched on this a little bit, but if you had to kind of distill it, what would one piece of homework be that you would give to the listeners of this episode?

Michael:	Read. [Laughs] Read is probably the main thing, is to... And I actually say that I think working with people like you or following people like you is a great place to help curate some of this stuff, but I think it probably helps to have some thoughtful people. Shane Parrish, you mentioned, was fantastic.

Matt:	He's great.

Michael:	And Shane's another guy who can help you curate that stuff. But I think starting to just...making sure that you commit a substantial percent of your day to learning, continual learning, and, again, being diverse in what you're reading and thinking about; and forcing yourself, compelling yourself to have the stance of being actively open-minded, so making sure that you're considering different points of view, you're exposing yourself to different types of people. So, maybe that's a tall order, but, to me, that would be the first thing I would say. And, you know, I do find a lot of people struggle to find time--or at least they perceive they struggle to find time--to read, and the main thing I would just say is that life is about tradeoffs. So, the question is: Are there things that you're doing today in your moment to moment that you could trade off, that you could do less of, that would allow you to do more reading? Because I do think the return on investment is really, really... The return on time and the return on investment is really high for reading.

Matt:	You know, there's a really funny study that Zig Ziglar talks about in some of his old speeches. And I think the study was in the '50s or '60s, but they basically looked at...they looked at a factory and they started with everybody from the factory workers up to the line managers, up to the office managers, up to the president, and they looked at how many hours a week they each spent watching TV. And there was sort of a relationship where, you know, it's like the factory worker spent 20 hours a week watching TV, all the way up to where the president spent half an hour a week watching TV or something. So, that's a great point, is that there's always a way to find time to read if you make it a priority.

Michael:	That's right. Exactly. And I love that. And, again, it's maybe not everybody's cup of tea, but for people who are probably listening to this, it is going to be something that they'll find interesting and I would just jump in. And I would also encourage... Especially for young people it's a great thing to get going on. When you can work it into your habits when you're young, it's just a huge leg up through the years, for sure.

Matt:	I mean, obviously you're a very active reader. Do you have any kind of methodology that you use to keep track of all of your kind of book notes or to keep...to sort of categorize everything that you've read and all the knowledge that you've accumulated?

Michael:	[Laughs] So, Matt, I wish I had a good answer to this question. The answer is no, not so much. But I guess I...

Matt:	I struggle with that, too. That's why I'm asking -- for myself, in this case.

Michael:	[Laughs] But I benefit from a couple things, which are sort of offshoots of the way my career works. So, I have the fortune of being able to write a fair bit for my job and not just book stuff or just day-to-day stuff, and so that allows me to weave in a lot of the stuff that I read and implement it, and I think teaching and writing are two really powerful mechanisms to help consolidate thinking and consolidate ideas. So, that helps a lot. And, beyond that, it's just... Now, a lot of it is cumulative, right? So, it's just trying to make sure that whatever I'm reading clicks into place. I mentioned this Anders Ericsson book and, you know, I've been reading about... I think I have probably a half dozen books or more on expertise. Many of them were edited by Anders Ericsson. So, that was just adding onto something that I had a little bit of a foundation in. So, yeah, there's not much method to my madness, but I'm not sure that... Yeah, I'm not sure... I think just jumping in is probably the first and foremost thing to do.

Matt:	Where can people find you and some of your works online?

Michael:	So, probably the easiest thing to do is go to michaelmauboussin.com. So, that's a website that mostly highlights the books that you mentioned at the outset. The Success Equation, the luck-skill book, also has its own website, which is success-equation.com. Success-equation.com is also kind of fun because there are some interesting little simulations that you can play around with, including the two jar model you talked about. There's also some fun stuff on the Colonel Blotto game, which is a game theory model, and a little mind reader algorithm. So, there are some fun things to do there as well. And then it's harder... My professional writing is difficult to get access to through formal channels, but if you're handy with Google, you can tend to find a lot of the stuff on there. So, I would just google it. [Laughs]

Matt:	And I think valuewalk.com has a great list of a lot of your...a lot of your pieces.

Michael:	Yeah. So, ValueWalk's a good example. Yeah, exactly. And Hurricane Capital's done a great job. So, a couple of these sites, those guys do a nice job of recapturing a lot of the stuff we do.

Matt:	Well, Michael, thank you so much for being on The Science of Success. It's been great to have you and it's been an enlightening conversation.

Michael:	Matt, it's been my pleasure the whole time, so thank you for having me.

June 15, 2016 /Lace Gilger
Best Of, Decision Making, Money & Finance

How To Stop Living Your Life On Autopilot, Take Control, and Build a Toolbox of Mental Models to Understand Reality with Farnam Street’s Shane Parrish

June 07, 2016 by Lace Gilger in Best Of, Decision Making

Do you feel like your life is on auto-pilot? Do you want to take control and build a better and deeper understanding of reality? In this episode we discuss mental models, cognitive biases, go deep on decision-making and how to improve and build a smarter decision-making framework and we look at a number of key mental models that you can add to your mental toolbox.

If you want to dramatically improve your decision making with a few short steps - listen to this episode! 

Shane Parrish is the founder and author of the Farnam Street blog, which has been featured in Forbes, The Wall Street Journal, The Financial Times, and much more. It's one of my personal favorite blogs and an incredible resource dedicated to making you smarter every day by mastering the best of what others have already figured out.

We discuss the following topics:
-Why you should focus on mastering things that change slowly or don’t change at all
-Why reading “pop” books and news doesn’t make you smarter
-How to pattern interrupt yourself when you get focused on the wrong things
-What “mental models” are and how you can use them to your advantage
-Why you should focus on your “circle of competence”
-How to reduce your blindspots and make better decisions
-Simple steps you can take right now to improve your decision-making
-How to think about the world like Charlie Munger
-How you can avoid becoming “a man with a hammer”
-Why you should focus on avoiding stupidity instead of trying to be smart
-Why it's so important to keep a decision journal (and how to do it)
-And much more! 

Learn more and visit Shane at https://www.farnamstreetblog.com/

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

EPISODE TRANSCRIPT

Shane Parrish is the founder and author of the Farnam Street blog, which has been featured in Forbes, the Wall Street Journal, the Financial Times, and much more. It's one of my personal favorite blogs and an incredible resource dedicated to making you smarter every day by mastering the best of what others have already figured out. Shane, welcome to The Science of Success.

Shane:	Thanks for having me, Matt. I'm excited to be on.

Matt:	We're super excited to have you on here. So, for listeners who might not be familiar, tell us a little bit about what is Farnam Street and what do you talk about on the blog.

Shane:	Oh, there's so many ways to describe it, but a friend of mine put it best when they said it was an online intellectual hub for people who are rediscovering their curiosity and want to be better, in a non-self help-y way, but want to be better at solving problems, removing blind spots, exploring life. I think that about encapsulates the blog. We talk about everything from art and philosophy to the science of decision making to what it means to live a meaningful life to what it means to be a good friend, and how you can go about doing that and how you can learn from other people, and not only learn from other people but learn from their mistakes. I'm very open about some of the mistakes that I've made about being a good friend, and some of the decisions I've made have factored into how we think about decision making. So, I think that it's just an online intellectual resource for people who are consistently looking to gain an edge over somebody else.

Matt:	And how did you initially become interested in this subject?

Shane:	Oh, it started back with my MBA, and it wasn't really anything that I thought would turn into what it has become today. Originally, when I started my MBA, I was focused on doing my homework and passing and all of this stuff and getting good grades, and all of a sudden it became pretty apparent to me that a lot of the schools—and I won't mention names—have become check cashing institutes, where somebody, usually a corporate sponsor, sponsors an employee to go get an MBA, and the schools have a large incentive to allow those people to get MBAs. So, what happens in between is almost irrelevant, as long as those people get MBAs and the school gets a big check. The learning became secondary, and so I took it upon myself originally to start learning on my own, and then this is the manifestation of that. Like I said, it was never intended to be what it is today. It's a lot of luck, a lot of happenstance, a passionate group of 80,000 readers, and it's kind of taken off from there.

Matt:	So, kind of the tagline or subheading for the blog is Mastering the Best of What Other People Have Figured Out.

Shane:	Yeah. I mean, I'm not smart enough to figure out everything myself, so how do we learn? We learn a lot through reading. We learn a lot through experience. But there's only so many things that I can experience in life, so I want to try to learn from mistakes of others, the epiphanies of others, the insights of others, and that'll give me kind of a cumulative advantage over a long period of time, in terms of the knowledge that I can accumulate and how I apply that to problems.

Matt:	You know, that's an interesting... When you say cumulative knowledge, I've heard an analogy before that it's almost like compound interest. You know, when you start to read, you kind of build this knowledge base and this framework that you can continually sort of layer new knowledge into. It's like, someone can't just read two or three books that you read recently and catch up to where you are.

Shane:	Yeah, definitely, and it depends on what you're learning and what you're reading, right? I mean, all of that factors in. There's almost a half-life to knowledge, and you want to learn if you're going to apply yourself, and you have an opportunity cost to your time. You want to start learning things that either change slowly over time or don't change at all. Unless you're in a niche field where you have to keep up with the latest neuroscience or research in a particular field, it makes more sense to apply yourself broadly to things that change slowly over time, and then use those tools to reduce your blind spots when making decisions, when connecting new things for creativity and innovation and solving problems, and then also for how to live a meaningful life.

Matt:	That's a great point. The idea of mastering or focusing on things that change slowly or don't change at all. What would you say are some kind of types of knowledge that would fall into that category?

Shane:	Well, I mean, if you look back in history, we have this big bucket of time, right? We have psychology, which everybody thinks is this great knowledge to have, but it's fairly recent that we've discovered these heuristics and biases. But physics has been around for a long time and chemistry's been around for a long time, and these laws don't change much over time. I mean, our heuristics and biases are important to understand, but you also want to merge them with other ideas. And I think that where people go astray is when you go to the bookstore and you pick up the bestselling book, and we have every incentive to pick up... I call them pop psychology books, but the pop psychology book of the day, because we feel educated, we feel like we're learning something, we feel like we're moving forward, and it's on a subject that's usually topical, that's in the news, and then what happens inevitably over time is those books disappear and the study either gets disproven or there's contrary evidence. It doesn't end up being knowledge, so you end up spending your time, whether you believe it or not, you spend your time entertaining yourself. And I think it's great to entertain yourself. You just need to be aware of when you're reading for entertainment, when you're reading for knowledge, when you're reading for information, and the way that you approach those subjects should differ. And your goals, in terms of how you get better throughout your career or what you want to do is also... will lead you to different sources of information. 

Matt:	I love the idea of focusing on kind of going back to the hard sciences, and that's something that someone who I know you're a big fan of and I'm a big fan of, Charlie Munger, talks about a lot. Kind of, you know, focusing or thinking about biology, physics, really those core fundamentals, and then branching out more and more into kind of the things that are built on top of that.

Shane:	Yeah. Munger is the source of a lot of inspiration for me, in terms of just the way that he approaches problems, and when you think about the world, it is multi-disciplinary. So, if you don't understand the big ideas from other disciplines, how can you synthesize reality? How can you remove your blind spots and how can you gain an edge or make better decisions that other people miss if you don't understand those big ideas from different disciplines? And these ideas are understood at different levels and you hone them over time. It's not something that you just conceptually grab, where you read a chapter on physics one night and you understand gravity. It's something that you develop over a long period of time, and you hone those ideas. And I think that when you encounter new information, you start mapping it to what you already know, and this is where Munger's concept of the latticework of mental models comes in, where you start saying, "Oh..." You start seeing people make decision making errors and you can say, oh, that's confirmation bias, oh, that's anchoring bias. That's great. It gives you insight. But those are heuristics. Those are great. But it also gives you insight into, oh, well, they're operating outside their circle of competence. I'm operating in a complex adaptive system. There's supply and demand effects here, and then when you kind of go through this mental list of models that you have in your head from other disciplines, including ecology, investing, business, heuristics in terms of psychology and mathematics, statistics, chemistry, physics, you can usually gather in your mind mentally the variables that will control the situation. Right? Momentum is an incredible variable that people underestimate a lot of the time. That's a concept from physics. Statistics, in terms of sample size and distribution and mean and median, and understanding the difference between those things enables you to make better decisions, and it enables you... 
More importantly, it enables you to reduce your blind spots, which I guess, in the end, is how we make better decisions. We all have a certain aperture onto the world, and that aperture is not a 360 degree, almost holographic view of what the problem is. But by reducing our blind spots, we come to a more complete knowledge of the situation, and that knowledge enables us to make better decisions, avoid stupidity, which is also an important outcome, and then go forward. 

Matt:	So, backing up slightly, can you kind of define or dig in a little bit more on the concept of mental models? It's something that we've mentioned briefly on the podcast, but some listeners may not be familiar with it.

Shane:	So, in my mind, I mean, there's two types of mental models. There's the psychological mental models, which are how we deceive ourselves, and those would be kind of like the heuristics that are popular today. There's availability. There's confirmation bias. There's anchoring bias, hindsight, overconfidence, and so on and so forth. And then there's kind of like the time simulations, and these are also heuristics, which are important to understand in some senses, right, where there's gravity. If I drop a pen, I know what's going to happen, but I'm simulating time. So, understanding that and understanding feedback loops and redundancies and margin of safety and the prisoner's dilemma and understanding how these things play out over time enables us to fast forward through time and see the most probable outcome when we're making a decision. Doesn't mean it's a guaranteed outcome. I mean, there are some things that are pretty guaranteed, like gravity, but it gives us a better aperture into the problem that we're trying to solve and also enables us to recognize intuitively that there's other outcomes that are maybe less probable but still possible. 

Matt:	So, can you think of an example of applying some of these mental models in a challenge or problem that you've faced recently?

Shane:	Well, one of the mental models that we use a lot is circle of competence, and just knowing where you're competent and where you're incompetent enables you to make a better decision. I'll give you a kind of high level overview of how that works. If you're accurate in your circle of competence and you keep, say, a decision journal or something like that, you'll be able to hone that over time and you'll be like, well, when this type of decision comes up, like an investment decision in an airline company, I have a really high batting average. I would say that's within my circle of competence. But we all can't sit back like Charlie Munger or Warren Buffett and basically wait for the fat pitch that's within our circle of competence. Most of us have this pragmatic reality where we have to make decisions outside of our circle of competence. But if you recognize that you're outside of your circle of competence, you approach the decision in a different way. What I mean by that is now you start, instead of becoming overconfident, you start recognizing that other people's opinions may be valuable. Instead of thinking that you have all the information, you start seeking disconfirming evidence to the belief that you hold because you know you're not operating within the circle of competence. So, just a knowledge of a circle of competence and where you make good decisions and where that boundary is enables you to proceed in an area outside of your circle of competence and still make better decisions than you would have otherwise.

Matt:	And in that example, circle of competence is essentially one quote-unquote "mental model" in the toolbox, right? The goal is essentially to build a toolbox of tens, if not hundreds, of potential models that you have kind of deeply internalized in a way that it's almost intuitive, so that when you encounter a problem, you can naturally kind of pluck the four or five most appropriate models for understanding that particular situation.

Shane:	So, I think about it like you're a craftsman, right, and you show up to the job, and if you have a hammer, there's a limited set of problems you can solve. There's a limited amount of creativity that you can have with raw materials. The more tools you have, the better, and in the knowledge industry the tools sometimes happen to be mental models, and sometimes they're very niche. You don't always need to be a broad, generalist thinker. Oftentimes, the most rewarding professions, like neurosurgery or law, tend to be very niche in terms of how they think about the world and the problems that they try to solve. The rest of us have to operate in a lot of ambiguity, in the sense that we're solving problems that may not be as narrowly defined. We may not be in such a niche where we studied it for 15 or 16 years and we have to get on this treadmill to kind of keep up with it, but we're solving general business problems, and then the problem becomes, how do you solve those problems better? How do I become better at my job? How do I become more valuable as an employee, as a knowledge worker? And I think the answer to that is acquiring more tools to solve different problems, but, more importantly, by solving different problems, you're often avoiding different problems. We teach a course on productivity, and one of the biggest sources of productivity, which not a lot of people think about and is very counter-intuitive, is that the best way to be more productive is actually to make better decisions, because when you think about how most of us spend our days, we're spending so much time just fixing mistakes and solving problems that we've created by rushing our decisions, by not thinking about them, by not doing something that we could have done to change the outcome. So, the best way to get free time is to make better initial decisions. And when you think about that, it makes a lot of sense, but most people don't frame it that way. 
So, if you want to start making better decisions, one of the best ways to go about that is to understand the problem, and one of the best ways to understand the problem and understand reality is to be able to synthesize it. You want to be able to look at the problem from a three dimensional point of view. And if reality isn't multi-disciplinary, then I don't know what it is.

Matt:	And when you say reality is multi-disciplinary, can you elaborate on that so that listeners who might not be as familiar with Munger and his conception of worldly wisdom know what you're talking about?

Shane:	Yeah. I think you can't just look at the world through one background. Like, if you have a psychology degree, the world isn't only psychology, right? It's also physics. It's also math. It's biology. All of these things factor into most of the problems that we look at, and our goal, as a decision maker in an organization, is that not only do we want to make more effective decisions, we want to recognize when we're making decisions outside of our circle of competence, or decisions that multiple disciplines might factor into. Psychology's great in terms of corporate decision making, but it may underplay supply and demand. It may underplay switching costs. If you don't have a grasp of these concepts and you don't have an intuitive sense of how to handle them or how to structure them in your mind, then you become what Munger calls the one-legged man in an ass-kicking contest. You're handicapped in life. And then people will run circles around you, and that may be fine and that may not be fine, and that all depends on your makeup and what you kind of want to achieve and how you want to live your life.

Matt:	And I think in many ways, economics is a field that's often criticized for failing to understand or take into account the implications of other disciplines. The example I know... I think there's a behavioral psychology book where they talk about the difference between econs and people, meaning how an economist would say someone should behave versus how they actually behave in the real world.

Shane:	Yeah. I don't think I know enough about the discipline of economics on that level to comment on what the economists think. I think there are economists out there who think in a very multi-disciplinary manner. Greg Mankiw from Harvard, I think, would be one of those people who think that way, and Munger has pointed out that his textbook thinks about economic problems in a multi-disciplinary way. I think his criticism was that he doesn't actually point out that he's thinking about them in a multi-disciplinary way. And I think there's a lot of lessons that the rest of us, especially those of us who operate in mid- to large-sized corporations, can learn from economics, about the time value of money and investment returns and marginal costs, and most importantly, probably, opportunity costs, which is a lesson all of us can learn, in the sense that you live one life and you can trade time for money, and that's fine, and you can also trade money for time. Buffett has a great quote where he said... I forget the exact words, but the rich are always trading money for time, whereas the poor are trading time for money. And when we think about that, that comes down to opportunity cost. Say, for example, you live in the suburbs or you live somewhere where you have a long commute. Most of us view that as a cheaper way to live. But do we factor in the time it takes to commute? The important question is, do you think about the two, maybe two and a half, hours a day you're spending in the car, and how do you value that time? When you start factoring that in, it kind of changes the dynamics of what you're thinking about in terms of cost and value. Another example would be reading. If you're reading something, you're not reading something else. So, if you're reading Gawker, whatever, Buzzfeed... I don't even follow most of the media today, but if you're reading the latest news, that's great. 
It's keeping you up to date on current events, but it means you're not reading something that's enduring, that doesn't change. So, there is an opportunity cost to everything we do. If you go to lunch with a friend, maybe you value that a lot, which I do, personally, and if you sit and do nothing but read the newspaper, you value that, and it becomes just knowing what's valuable to you and knowing how it helps you achieve the goals that you're trying to achieve, or how it entertains you or gives you some sort of down time, which is also important. But there is an opportunity cost to everything, and I think people underestimate how important that concept is to grasp, right. While you're watching Netflix, you're not doing something else. And if somebody else is doing something else that makes them better, more valuable, or more knowledgeable, eventually, over time, you're going to lose the edge that you have. And I think that's important to realize.
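The commute arithmetic above can be made concrete. Here's a minimal sketch, where every number (the round-trip hours, the working days, the dollar value of an hour) is an illustrative assumption rather than a figure from the conversation:

```python
# Opportunity cost of a "cheap" long commute.
# All numbers below are illustrative assumptions.
commute_hours_per_day = 2.5   # round trip, as in the two-and-a-half-hour example
work_days_per_year = 230      # assumed commuting days per year
value_per_hour = 30.0         # assumed dollar value you place on an hour

hours_per_year = commute_hours_per_day * work_days_per_year
implied_cost = hours_per_year * value_per_hour

print(f"{hours_per_year:.0f} hours/year, roughly ${implied_cost:,.0f} of time")
```

With these assumed numbers, the "cheaper" commute works out to 575 hours and about $17,250 a year of time, which is exactly the kind of figure that changes the cost-and-value dynamics being described.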

Matt:	I definitely have the same sort of perspective about most news, most current events. I barely read any sort of news sources. Mostly what I read are blogs like Farnam Street or things that really talk much more deeply about, to use the phrase that you use, things that don't change over time, right? You know, you can fill your head with a bunch of news. Six months later, most of that stuff is irrelevant. Whereas if you fill your head with these mental models...

Shane:	When you think about how we consume information, most people, and I'm generalizing here, are consuming articles, like ten ways to get promoted at work or whatever the clickbait headline of the day is. And what's really funny is I've talked to some of my friends who are like this, and they love it. They do it for entertainment. That's great. But they're often, like, "You know what's really interesting? I click on the same article two days in a row and it's just got a different headline, but I don't really recognize that I'm reading the same article until the last paragraph, when something kind of jumps out at me." So, they're going through these 800 to 1,500 word articles and they're not actually remembering that they read them yesterday. So, what are they doing? I mean, that's just a form of entertainment at that point. And then, anybody who's promising the world is not going to deliver on it. There's no four steps you can take to guarantee your employment. There's no six ways to negotiate with your boss to get a raise. I mean, there's tips and there's tricks and there's probability involved, in terms of, well, if you employ this... And I know one person who teaches about how to get a raise at work, and one of the main things that he's giving people is the courage to ask for a raise. But he's not actually giving them a tool that they develop, right? He's basically saying you need to ask for a raise, and a lot of them get a raise when they go and ask for a raise. And that's fine, but what is he teaching them, long-term? Maybe it's self-sufficiency, and maybe it's that I can ask for things I want. We want to teach people things that don't change over time, that apply to a wide variety of problems, everything from innovation to decision making. These models factor into corporate mergers and acquisitions. We can see it in SEC filings. 
There's a whole bunch of stuff that we want Farnam Street to be, but it really boils down to giving you more tools that you can use over a year, over two years, over three years that enable you to be better at whatever it is you want to be better at, and part of that is just recognizing when you're reading things for entertainment or information and when you're reading things for knowledge, and when you're reading things for knowledge, you want to slow down. When you're reading things for entertainment, you might want to speed up. But it's not to say that one is better than the other. I don't think we're making that decision for people. We're just giving them an alternative. 

Matt:	You know, it's funny you mentioned the story about somebody reading the same article and not realizing it. One of the things that we talk a lot about on the podcast, and that I'm a big fan of, is meditation, and it may not be for everyone, but one of the beautiful things about meditation is that it kind of gives you that inner dialogue to sort of check your thoughts and be like, hey, what's happening, right? Sometimes I'll get sucked into a loop of reading a bunch of stuff on Reddit or something like that, and then my mind will kick in and be like, what are you doing? Pull out of this dopamine loop. And I'll pull out and be like, all right, I gotta stop doing that.

Shane:	Yeah, but that comes back to a feedback loop, which is also an important concept from engineering, right. So, the mental model is that you've created this either intentional or unintentional feedback loop that enables you, when you go astray or do something you're not wanting to be doing, to just check in and be present, right? We all make decisions. It's whether we make them consciously or unconsciously, and a lot of us just spend that time, I would say, unconsciously, which is fine. But you've enabled yourself to kind of be like, oh, is this how I want to spend my time? And that feedback loop enables you to make different decisions about consuming information. It might mean that you go back to Reddit and you start reading more, and it might mean that you're like, what the hell am I doing? I want to do something else and I want to spend my time differently. But just that in and of itself, that feedback loop, that mechanism to kind of switch from unconscious to conscious, is one of the most incredibly valuable things you can have. And I would say meditation probably is the foundation for much of what I do. I don't meditate every day, but I do meditate on a regular basis, and it enables me to structure my time better, and it enables me to clear my mind and have moments in my mind that are device-free, that are quiet, that are calm, that are soothing. It's made me respond to situations in a different manner than I would have in the past, where I might have had more anxiety or stress about a certain situation. It's enabled me, I would say, to become more stoic about it and just accept the world for the way that it is, instead of pushing back against things that I think are unfair or unjust. That pushing back is unproductive energy, and my mind would get clouded with stuff like that before I started meditating, before I started yoga. Now the path that works for me has become a lot more clear.

Matt:	It's funny that you mentioned stoicism, because we have a whole episode about the idea of accepting reality. The same concept of, it doesn't matter if it's fair, it doesn't matter if it's just. It's all about accepting things the way they are so that you can move beyond them. 

Shane:	Yeah. I mean, Joseph Tussman has this amazing quote, and I think it comes down to this. He says, "What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and then aligning with those realities. If we do not let the world teach us, it teaches us a lesson." And I think that's one of the most profound things I've come across in a long time, and I think that enables us to ask, am I confronting the world or am I accepting it? And if I'm accepting how it works, that's a bit of a feedback loop into checking what I think and checking my approach to life, and that feedback loop, over a long period of time, should compound and enable us to better align with reality. It's not something like... You don't go to bed Thursday night and wake up Friday morning and be like, I'm going to align myself with the world. You just start opening your eyes to how the world really works, how it operates, the different outcomes, and understanding that outcomes are not necessarily guaranteed, that they're a function of probability, and we all have periods of bad luck. Then, over time, you slowly learn to roll with the punches.

Matt:	It's amazing that once you've kind of gone down the road of internalizing and really starting to understand many of these different mental models, it's almost like, you know, I'm thinking about... I was in a meeting last week in kind of a sales meeting, and it's amazing how I can just immediately see it's like they're using this bias and they're doing this thing, and it's like you start to kind of build this framework where you can subconsciously just capture that stuff.

Shane:	Yeah, totally. And I mean, the flip side to that is biases are biases for a reason. I mean, they work most of the time. They're heuristics because they work 99% of the time. Our goal is to kind of recognize when they're leading us astray, which is why there are frameworks for decision making that enable you to check and balance that. One of the questions that you should ask yourself is, where am I leading myself astray? Where am I fooling myself? And that's when you kind of check your biases and your heuristics yourself and start thinking about, oh, well, that's a really small sample size. Should I be basing a $500 million merger on two years of track record from this other person? Just asking those questions usually generates a better outcome, but not always, right. I mean, you really have to think about this stuff. And when you think about how we structure our days, how we structure our time, most people don't take the time to make good decisions. What I mean by that is, they're not making a conscious choice to make bad decisions; they're just setting themselves up for failure. Think about the... Generalizing again. Think about the modern office worker. Let's say, for the sake of argument, they work nine to five. They show up. They've got to drop off the kids first. It's a hectic morning. They get in a little later than they want. It's 8:35. They open up their email. They have a nine o'clock meeting, but they've got to go through 30 emails before then, because some people have shown up earlier and they've redirected their time, and then they realize that it's 8:55 and they have a nine o'clock meeting, and they're supposed to make a decision on something, so they pull up the document that's the briefing on the decision they're supposed to make, and they have five minutes. So, what do they do? 
They read the executive summary and they go to the meeting and they base their decision on the executive summary, which most times will work. It's another kind of heuristic, right? But often it leads us astray, because we don't do the work behind the scenes to understand the decision, to understand the dynamics of the problem. So, one of the other ways that you can increase productivity, and I guess it leads into making better decisions, is to schedule time to think about the decision. I mean, that's very counter-intuitive. We mention it in our productivity course, which is bewaymoreproductive.com, but it's incredible to me the number of people who show up to work and just let email dictate their day. And they rely on, I guess, their wits or their spur of the moment judgment to make decisions. And, you know, 90% of the time that's going to work for you, but the 10% of the time it doesn't work for you is going to consume most of your time going forward.

Matt:	So, for somebody who's listening right now, what would you say are some concrete things they might be able to do to kind of immediately start improving their ability to make smarter decisions?

Shane:	Well, I think one of the things that you can do, if you're unsure of the path forward, is to invert the problem, right. To invert the problem means think about what you want to avoid, and if you're avoiding those outcomes, you've already come to a better conclusion than you probably would have otherwise. But that's not the best way to make better decisions. I mean, the best way to make decisions is really to understand the problem and understand the dynamics, and part of that is recognizing when you're operating within your circle of competence and when you're not. And if you're the head of an organization, then it's understanding how people learn from each other. Say you have 100 people in your organization. Somebody's got a circle of competence in X. Somebody's got a circle of competence in Y. Often, the way that we facilitate decision making is in a way that X doesn't learn from Y and Y doesn't learn from X. But eventually, X or Y quits or retires, and then the other has to make a decision. But they haven't learned. Even though they worked with the same person for ten or 15 years, they haven't actually learned how they structure decisions, how they think about the variables that govern the decision, what the range of outcomes could be, and how to hone that attention. This becomes really fascinating to me, because I know a lot of investors who, you know, read everything about a company, which I get. I mean, it makes a lot of sense. But when you really know the variables that you're looking for, you're able to filter the information a lot quicker. When you understand the situation, you know, a company could put out 6,000 pages of press releases and documents a year. You don't necessarily need to read every word of it. What you want to look for is the variables you know matter: what are they, what are they saying, are they indicating that we're on the right track? If yes, all things are probably good. 
And yeah, you want to see disconfirming evidence. Most of us consume media, and this is another interesting and possibly important point about how we consume it. We consume things that tend to reaffirm what we already think instead of consuming things that disconfirm what we think. If you go back to Charles Darwin, he had this amazing discovery, which is probably some degree of luck and some degree of him being able to disprove himself. One of the tools or tricks that he had in his toolkit was, every time something disagreed with him, instead of glossing over it, he paid attention to it. And think about the way that we consume media today. We don't consume media like Charles Darwin. We consume media like, oh, well, if I'm a pro-Trump supporter I'm going to read pro-Trump articles. If I'm a pro-Hillary supporter, I tend to be inundated with pro-Hillary articles or anti-Trump articles, which is really just reinforcing my view. What we really want to do is slow down and come across things that make us say, oh, well, I thought these five variables mattered, but this person's saying a different variable matters. Why does that matter? Does it conflict with my view of the world? How does it conflict? Are they right? And then kind of dropping our assumption that we know what's best, or dropping the feel-good nature of the media we consume, which is, I agree with you. And, I mean, that feels great. We probably get a dopamine rush from that. We're not alone. Everybody agrees with us. But at the end of the day, as a knowledge worker, you're paid to be right. So, it's not about being paid to feel good. It's about asking, when am I wrong, and recognizing you're wrong. There's a lot to be said for scrambling out of problems, right, and recognizing that you're wrong early and course correcting, instead of waiting till it's too late. 

Matt:	So, how would somebody listening to this start acquiring a lot of these different tools and mental models?

Shane:	Reading Farnam Street would be a great example of how to go about it, but, I mean, let's go back to reality. Most people aren't going to set aside an hour a day and start going through physics textbooks. They're not going to set aside an hour a day of going through biology textbooks. And most people don't have the time, with kids and family and work, to set aside time to learn on a regular, consistent basis. So, the way that you go about it is becoming more open-minded, and one of the ways to become more open-minded is just to read things that disagree with you, and not read them in a critical sense of, oh, that's hogwash, but read them in a sense of, oh, that kind of makes sense. Right? Maybe I want to take a different approach. Or, oh, I was wrong, and admitting you're wrong. And you don't have to admit to the world you're wrong, but admitting to yourself you're wrong is a big step in terms of getting better at recognizing how the world works. And then recognizing how you consume media. Are you consuming it for opinion? Which I think a lot of people do, right. We want to show up at the water cooler, and we live in a culture where you have to have an opinion on every subject, otherwise you're ignorant and uninformed, which is just ridiculous when you think about it. But in that culture, what it creates is this environment where we read these op-eds, or we read this headline, and that becomes our opinion. We haven't read the article. We haven't thought critically about it. We haven't spent the time doing the work, and yet we've formed an opinion on it. And I think that that is contrary to the approach that we want to take, where maybe the way to consume most of the mass media we get is for information. 
I'm not going to let somebody else do the thinking for me, but they can provide me with the statistics that I need to form my own opinion, or they can provide me a structure for an argument that I will then refute or think about critically, but not one that I will regurgitate without having thought about it. It's okay to say I don't know. And then if you really want a fun exercise, and you work in an organization of, I would say, more than ten people, just keep a tally on the last page of your notebook of how many times people say, "I don't know." I mean, I've consulted with organizations big and small, and it almost never comes up. There's almost nobody who's ever said, I have no idea. And that can vary between, "How do you think IBM's doing in their cloud computing space?" to "How do we design this part better?" Everybody has an answer to everything, and once you recognize that, you're like, that's not possible. How can that be the case? There's no way you can understand all of these different things. And then when you recognize that in yourself, it enables you to be more open-minded about other people's opinions, but it's important to probe them. Why are they thinking that? What variables matter to them? Why do those variables matter to them? What would cause them to change their mind? And then when you start thinking about it from another person's point of view, it inevitably creeps into your point of view, and then you start thinking about, what would cause me to change my mind? Why do I think what I do? Where does the information come from that makes me think this? Is it a headline I read on Twitter? Do I really want to base a decision on that? Do I really want to state an opinion on that? And I think that when you start thinking at that level, it enables you to move forward in a way that you're more conscious about what you're consuming, how you're consuming it, and the types of decisions and models that you're adding to your life.

Matt:	Going back to the comment you made about how few people say "I don't know", I think it's something that Munger touches on, kind of the idea... It ties in many ways to overconfidence bias. But the fact that often the most wise or the smartest people are the ones who typically are like, "I don't know," and the least informed, most over-confident person is the one who barges in with a very concrete opinion about XYZ.

Shane:	Yeah, but when you think about how that manifests itself in an organization, often the organizational psychology is the one that promotes the person who has an opinion and is right, versus... It's not because they're right because they've thought about it necessarily. I mean, they could be right just based on odds. They could be right for the wrong reasons. And the person who says, "I don't know," gets left behind. What I mean by that is saying "I don't know" is an important trait to recognizing and understanding knowledge. That doesn't necessarily make it an important trait to getting promoted, and I think when people start distinguishing, you know, I want to be smarter because I just want to understand the world better and I think that's going to help me live a better life, and that, in and of itself, should, over a long period of time, obviously, aggregate into disproportionate rewards in terms of what you value. Maybe that's promotion. Maybe it's level. Maybe it's quality of life, spending time with your family. And maybe it's other things, and that's fine, and everybody has their own kind of utility value associated with all of this stuff. The flip side is the person who goes in, and let's say it's a coin toss and just says heads four times in a row. Well, they're going to be wrong a lot, but they'll also be right every now and then, and if they get promoted because they're right but for the wrong reasons, you can kind of accept that and it doesn't become this, oh, they're better than I am. It becomes, oh, well, that's just luck, right. They're right for the wrong reasons. That'll eventually catch up to them. And then you also need a feedback loop. Like, when am I right for the wrong reasons and how do I learn from that? And it's that learning and that feedback loop that enables you to compound over time, and most people aren't conscious about learning. They're not conscious about their decisions. 
They're not conscious about the feedback loop that they employ, so they're not actually getting better at what they're doing. And when you think about driving, driving would be a perfect example. We learn how to drive when we're, you know... It's 16 in Canada. We learn how to drive when we're 16. We probably stop getting better at driving, for all practical purposes, when we're, like, 19. And then we spend all this time driving, but we're not practicing. We're not getting the feedback we need to be better. We're just kind of recognizing the cues that we've already learned. And I think we do that with decision making. We do it with organizations. We do it with new jobs. We spend maybe the first year getting better at our new job, learning about different things, and then all of a sudden we kind of get the hang of it and we stop getting better. We stop the compounding. And when you stop the compounding, that's a really bad thing. What you want to constantly be doing is asking, how can we get better, and challenging yourself. And one of the ways to do that is decision journals and seeking outside feedback. It's to ask people how you can be doing better. It's to ask people to coach you, right. Like, a lot of people have mentors in organizations. How do you think about this? What should I be thinking about? What are the variables that I should be thinking about? How do I structure this? How do I approach this problem? And if you're really open to it, and you're not just asking to kind of be a kiss-ass or something like that, then that enables you to get better over time. 

Matt:	There's so many questions I want to ask after that. One of the things that comes to mind immediately, talking about the concept of being right for the wrong reasons, I'm a very avid poker player, and one of the biggest lessons that poker taught me was the difference between winning a hand because you made the right decision or losing a hand even though you made the right decision, and kind of what I think often in poker is called positive expected value thinking, in terms of make the right decision based on the math and then whatever the outcome is, it's irrelevant at that point.

Shane:	Yeah. It's not always going to work for you, but you also need to be able to tally that, right, to check your view of the world. So, if you think, I made the right decision but I lost, and I should have won 80% of the time, you need some sort of feedback that you're not making that same decision and losing all of the time, right. You need some sort of check and balance that, yeah, 80% of the time I do win when I make that decision. So, yes, it's a good decision, and not just this comfort of, oh, this is what I believe and it was just bad luck. So, you need to actually go a little bit deeper than kind of thinking that way, and poker would be a great example, where the odds are pretty well-known and you can go through that structure, but most of the world isn't as structured. It's not as refined as that. So, it becomes more of a, like, where was I off, where was I wrong, and that becomes a very humbling exercise for people, and that humbling experience is often what leads them to stop the feedback loop, because there's no CEO who wants to admit that he was right for the wrong reasons or she was right for the wrong reasons. But internally, you need that check and balance in terms of getting better over time, so that you can calibrate yourself, calibrate your circle of competence, and calibrate your decisions, and better understand how the world works. It's the only way I know of to improve your ability to make decisions. 
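That kind of check and balance can be as simple as a decision journal you can actually query. This is a minimal sketch, not any method from the episode; the journal entries and the 80% claim are invented for illustration. The idea: log the win probability you claimed at decision time alongside what actually happened, then compare claimed to realized.

```python
# Minimal decision-journal calibration check.
# Each entry: (win probability claimed at decision time, actual outcome).
# These entries are invented for illustration.
journal = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
    (0.8, False), (0.8, True), (0.8, True), (0.8, True), (0.8, False),
]

# Average probability you claimed vs. fraction of decisions that actually won.
claimed = sum(p for p, _ in journal) / len(journal)
realized = sum(1 for _, won in journal if won) / len(journal)

print(f"claimed {claimed:.0%}, realized {realized:.0%}")
```

If, over enough entries, the realized rate sits well below the claimed rate, "I made the right decision and just got unlucky" stops being a tenable story, and that's the humbling feedback being described.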

Matt:	So, going back to the driving example that you used earlier, one of the things that I'm fascinated with and I know you've talked about is the concept of deliberate practice and how you can drive for thousands of hours and never improve versus if you sort of concentrate and do deliberate practice, you can grow and achieve and become better.

Shane:	Yeah. I mean, deliberate practice is so important, right. It's about getting better at little things and seeking feedback that's usually immediate, in terms of how you're getting better. One of the best ways to do that—I mean, again, I'll apply it generically to people who work in an organization—is don't just send the report your boss asked you for, but seek feedback and specific feedback, and kind of corner them and be like, "Hey, where could I have done better? Where did I do wrong?" And if they can't give you that feedback, then you're never going to get better at the job that you're in, and if you can get that feedback, it doesn't necessarily make you better at your job, but it makes you better in your boss's eyes. So, it's also filtering that feedback and going, oh, this is what he or she wants versus how I think the world works, but you also want to calibrate that. Why does he or she want that? How do I get better at doing what I'm doing every day? How do I get better at sending emails? I mean, how many of us, just for an example, send an email to schedule an event or a meeting with somebody or a coffee, and we need 30 emails to do that, and we need 30 emails all the time to do that. Why is that? Well, part of the reason is we don't do something simple like, "Hey, here's some proposed dates. Do any of these work for you?" in the first message. Usually, that reduces the number of emails that you need to do that. Well, that's a great feedback mechanism, in terms of getting better. And if you deliberately try different things when you're proposing something that you do commonly throughout the day, like, ten or 20 times, then you can start to get feedback on what works and what doesn't work, and you're almost kind of AB testing things. It's like, it's almost [INAUDIBLE 00:45:58], right. Like, here's my best idea today, but does this other idea work? Does it change my understanding of how people will respond to this? 
Does it enable me to get to the outcome I want quicker and better and in a win-win way? And, if yes, then let's adopt that. And if not, then I can revert to my old one. 

Matt:	I think feedback is such an important idea, and one of the ways that people often get tripped up—and, I mean, again, this loops back into a lot of the different cognitive biases—is ego, right, and kind of denying reality or getting caught up in their egos.

Shane:	Oh, man. Yeah. I mean, we all have egos. That's incredibly important to recognize. I mean, I don't know a person in the world who doesn't have some sort of ego, especially wrapped up in their opinion on a controversial subject. Adapting to that reality is incredibly important, and recognizing sometimes it serves you and sometimes it doesn't, and it's the same as mental models, right. Sometimes they work and they serve you and they enable you to make better decisions, and sometimes they're wrong, but often we're just coding things into our head that, oh, well, when this happens, do this. But we're not actually saying, well, here are the reasons this happened. Do they exist in this situation? So, applying that mental model won't necessarily work. Ego can become this incredible enemy of seeking wisdom, and I don't have any good ideas, I guess, for how to avoid that creeping in. I mean, I know people who are naturally very egotistical. I know people who are naturally very averse to that, but they both have egos. And they're both sensitive in different ways and they both approach the world in different ways. And I think part of it, if I was forced to comment on it, would be understanding where you are and meeting the world at that place, and then understanding where you want to be and recognizing the path towards that. And ego can be something as small as, I need to give other people on my team a voice, and I'm not always right, and part of that comes back to calibration and feedback loops, and that helps check your ego and helps humble you, in a way, and part of that comes back to saying, sometimes I do need to be the egotistical leader, and by egotistical I mean not that you think you're right, but projecting confidence, and projecting a path forward. 
In uncertainty, people will naturally gravitate towards people who take risks, who seem to know what to do, and your job is to not only grasp those risks and those situations and those opportunities and move forward and galvanize your team and kind of push forward, but it's to recognize that you may be wrong. Even if you're not projecting that, it's to recognize that maybe it's wrong, but here's how I will know I'm wrong and here's how I will course correct if I am wrong. You don't necessarily have to tell your team that, but you have to recognize it internally if you want to be the best version of yourself. 

Matt:	So, one of the tools you touched on earlier was the idea of a decision journal. Can you explain that a little bit and sort of demonstrate or talk about how maybe you use that, or how someone listening could potentially use a decision journal to help improve their decision making?

Shane:	Most people make decisions and they don't get better at making those decisions, and so when you think of an organization, you think about how they're going to go about making decisions. They'll make the same decisions. They'll make them by committee. Nobody's learning from anybody else. Nobody's really accountable for the decision, and nobody's getting better, right. So, you end up reaping... And when people think about, well, why do we keep making the same mistake over and over again? That would be one of the reasons. Nobody wants to be humbled, right. So, nobody really wants to keep an accurate decision journal, and by decision journal... We have a conference called Rethink Decision Making, and we talk about this extensively in there. But what you really want to catalogue, and we've created physical decision journals for participants at our conferences, what we go through is individual decisions. So, you can either share them or not, but what you really want to do is start calibrating yourself, and you want to talk about the situation or context of the decision, the problem that you're facing, or what about it is different. Why is it a problem? The variables that you think will govern the situation. So, there's never one. There's usually multiple. The complications or complexity as you see it, why do you have to think about this? What are the factors that you're considering today as you're making the decisions? You want to talk about the alternatives that were considered and why you didn't choose them, right. There's never one path, and I mean, we've kind of nailed into this view of, oh, you know, the corporate PowerPoint presentation. I can't tell you the amount of boardrooms I've been in where it's like you have these three options or these two options, and it becomes a false duality. I mean, there's way more options than that. We just narrowed them down for simplicity. 
We need to recognize that that simplicity isn't always what we want, and we do want to dive into these other options. And then you want to kind of explain to yourself the range of outcomes that you see possible in the situation. And the reason that you want to do that is often you're going to have an outcome that is something that you don't see. And you want to assign a probability to those outcomes so that you can start to hone your ability to understand yourself, where you make good decisions, where you make bad decisions, and what type of probability you assigned to the different outcomes. Then you want to talk about what you expect to happen. Like, what is the most probable event, or maybe not the most probable, but there's an intervening factor that you think will lead to a different outcome. But you really want to talk about the reasoning behind it. So, you want to get into your own kind of self-dialogue about why you think this would happen, when you think it'll happen, and the variables, again, tying it back to the variables that you think will govern the situation. And then you also want to keep track of things like the time of day you're making the decision, and the mood you're in when you're making the decision, because you're not always going to be happy, and you'll probably recognize that most people make better decisions when they're in a certain type of mood, and that mood might vary by the person. But what I've learned through implementing decision journals at various organizations and with hundreds of people is that the time of day often affects the quality of decision that you're making. We tend to... Again, generalizing, but we tend to make better decisions in the morning than in the afternoon, right, and you can chalk that up to decision fatigue or depletion of cognitive resources or whatever. 
We tend to be more mentally alert at the front of the day than at the back of the day, so one of the ways that you can take advantage of that is to structure decisions at the beginning of the day, not the end of the day. That simple fact alone will enable you to make an incremental improvement to the quality of decisions that you're doing. And then importantly, it's not about just keeping track of this. You want to review it, right? You want to go back in six months and be like, how did this decision play out? How did I think it was going to play out? How did it actually play out? And what can I learn from this? Do I need to calibrate myself differently? Did I think I was within my circle of competence and clearly I'm not because something way outside of the probability that I expected happened, or do I think that I'm reasonably right but now I can learn or hone my understanding of this situation differently? And when you think about that on an individual level, you start learning a lot, right. You don't want to use vague or ambiguous wording. You don't want to talk in abstractions. You really want to use concrete wording that you can't deceive yourself with later. You don't want to talk about strategies. You want to be specific about what strategy. You want to be specific about what variables. Because that enables you to learn. But when you think about it, learning on an individual basis is great, but the real value to a corporation is when a CEO or a vice president or somebody high up in the organization enables organizational learning, so that I'm not only learning from myself, now I'm learning from you. If I had access to your decision journal, now all of a sudden I don't necessarily need to make the decisions you're making, but if I had to, I bet you it would be a better decision than if I didn't have access to your thoughts and the variables that you thought, and knowing the outcomes that you achieved with those thoughts. 
And that will enable us slowly, over time, to make better decisions. Now, better decisions alone aren't enough. The world is always changing, so we need to make better decisions on a relative and absolute basis, but we also need to make slightly better decisions than our competition, and if we can do that and we can do it over a long period of time, well, then eventually we're going to own the industry. 
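The journal fields Shane walks through (situation, variables, alternatives considered, possible outcomes with assigned probabilities, what you expect and why, mood, time of day, and a review months later) could be sketched as a simple record. This is a hypothetical illustration; the field names are mine, not Farnam Street's actual template:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical sketch of the journal entry Shane describes; the field
# names are illustrative, not Farnam Street's actual template.
@dataclass
class DecisionEntry:
    situation: str              # context: the problem and why it matters now
    variables: list             # the variables you think govern the situation
    alternatives: list          # options considered, and why they were rejected
    outcomes: dict              # possible outcome -> probability you assign it
    expected: str               # what you expect to happen, and your reasoning
    mood: str                   # your mood at decision time
    when: datetime = field(default_factory=datetime.now)  # time of day matters
    actual_outcome: Optional[str] = None  # filled in at review, months later

    def review(self, actual):
        """Record what actually happened; return the probability you had
        assigned to it (None means you never even listed that outcome)."""
        self.actual_outcome = actual
        return self.outcomes.get(actual)
```

At review time, an outcome you assigned 5%, or never listed at all, is the signal to recalibrate, which is exactly the step Shane says most people skip.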

Matt:	I love the concept of handicapping all the probabilities and then coming back and reviewing how accurate was my prediction that this was a 20% likelihood, this was an 80% likelihood.

Shane:	Oh, yeah. That's where most people stop doing it, though, right? So, they'll get an outcome. If they get an outcome they thought would happen only, like, 5% of the time, and it's a decision they've made repeatedly over the last six months, like, say, buying a stock, for example, they'll give up, right. Or, if they get outcomes that they didn't expect, they'll give up. Or, if they get the answer right for the wrong reasons, they'll give up. And by give up, I mean they just stop keeping a decision journal, because it becomes humiliating. And when you think about decisions in corporations, one of my favorite things to do when I'm in a corporation and consulting or helping them is to listen to the people involved in the situation and how everything is always right, right? And how they predicted it. You know, if I work with you for a year, I can quickly figure out that you didn't predict that for the right reasons. You got lucky. And then, just understanding when people are right for the right reasons and when people are right for the wrong reasons, and when people have bad outcomes but followed the right process, that enables you to surround yourself with people who can challenge you, who will help you make better decisions over a long period of time, and those are the people you really want to work for, right?

Matt:	So, changing gears a little bit, what's one kind of piece of homework that you would give to our listeners?

Shane:	Oh, self-reflection, right. One thing that I work with people a lot on is just take stock of your day. And I don't mean, you know, a typical Saturday or something. I mean, how do you spend your day? How are you matching your energy to the task? Are you reading newspapers in the morning and matching your best time of the day to a task that may be a low value add for you? Newspapers aren't something to avoid. I mean, everybody works in a different industry. They have different constraints. But if reading the newspaper at 6:00 p.m. is not going to make a difference versus reading it at 7:00 a.m., I would advocate that you maybe need to think about why am I reading it at 7:00 a.m. Is that a habit? What is the most productive use of my time at 7:00 a.m.? I want to be thinking about something deep, something strategic. I want big chunks of time in terms of how I approach that problem. And I think that that enables you to switch out of automatic mode and it enables you to switch into something conscious, and I don't care about what choices people make. Within reason, obviously. I mean, if they're conscious about those choices. But we usually get into this autopilot and that's how we live our lives, and then we wake up at the end and we recognize that, you know, maybe that wasn't the best approach, or maybe that wasn't the approach that I wanted personally, and those are the decisions where we want to take a different path. Being conscious about those decisions means inserting a moment in the day on a regular basis where you just do five minutes of self-reflection. You can call it meditation. You can call it whatever you want. You can go sit on the toilet, but what you really want to do is just think about, like, what did I do today? What could have been better about today? Where did I waste my time? How do I waste less time in the future? Where could I have been more productive? 
Where should I have invested more of my time, my thinking energy? And then being aware of how these things interact over a long period of time, so also taking that and thinking about, well, I spent my time on X today. Why was I dealing with X? Not, like, how did I deal with X? And what is the path forward? But why is X an issue? Is it because I made a poor decision in the past? Why did I make a poor decision in the past? Does my environment play a role in that? And start asking yourself questions like that. And then just being open to the response about it. I mean, it's not a dialogue with a friend. You don't have to admit you were wrong to anybody else. You just want to be open to yourself in getting better over time so you're spending less time doing stuff like that, more time doing what you want to do. I don't know if that helps.

Matt:	No, that's great. That's super helpful. And I think everybody could take five minutes at the end of their day and kind of reflect on what took place and why.

Shane:	Yeah, but nobody does that. Well, I don't mean nobody, but very few people do that of their own volition, and the people that I've helped start it, we do it in an organized and structured way. They almost always continue, and they say it's one of the most helpful things they've ever done.

Matt:	What are some books or other resources that you'd recommend for people who want to kind of follow up or dig down on some of the topics we've talked about today?

Shane:	I think Peter Bevelin's book Seeking Wisdom is amazing. 

Matt:	One of my favorite books of all time, by the way. Seeking Wisdom.

Shane:	Yeah. Poor Charlie's Almanack. I mean, we want to get less of this... and, I mean, I fall into this trap on occasion. Less of this, I need to read more, and what we want is more of, what am I reading, and do I understand it, and is it worth reading to a level of understanding. I mean, I've met so many people who tell me that they've read Seeking Wisdom or Poor Charlie's Almanack, but then they do things that would definitely contravene the wisdom in those books. So, reading and understanding are two different things, and we want to apply ourselves to understanding. And if you just read the same book, you know, the people who say... And, I mean, I was one of them back in 2013. I think I read... Or 2014, it was. I read 150 books. I must have started 300. But at the end, I mean, one of the biggest lessons, one of the biggest failings I had, one of the biggest lessons I learned, and this is almost like a big secret, right, is that it's not the number of books you read. I could have read five books over the course of the year and actually improved myself more than reading those 150, because when your goal is to read more books, you start losing track of what it is that matters and the understanding that matters and where does that come from. And then reading Poor Charlie's Almanack, that's not a book you read once and you kind of chuck on a shelf. And reading Seeking Wisdom, you don't read it once and then be like, oh, I got it. It's something that you read, you digest, you try to apply, you read again, you digest, you try to apply. And then through that, you hone your understanding of those ideas, and then you start consuming other information. And you map it and you translate it in your mind to the ideas that you've learned, the structure that you've decided to go for. And I think that, aside from that, I mean, I've moved almost materially to older books. 
We do a lot fewer newer books than we ever used to, a lot more of the books that have been around a long period of time, because that's usually an indication that they contain some sort of wisdom that's enduring, or they hit on some point that helps us hone our understanding of a topic that is still relevant. Less about the bestsellers, less about the Gawkers, less about the "What is the trend of the day?", more about what changes slowly over time, more about what am I really interested in, more about do I understand my circle of competence, how can I improve that? I think that that is all individual based. There's no ten books I can give everybody to read and they'll walk away satisfied. It's kind of a [INAUDIBLE 01:02:14], right. If you like white wine and I offer a red, that doesn't make a good [INAUDIBLE 01:02:20]. It's all individual-based and customized to you and what you're trying to achieve and where you are.

Matt:	I think... I mean, Seeking Wisdom is probably one of the best books I've ever read, and my copy, I think every single page has multiple notes, underlines, highlights. You know, somebody could probably spend a year just digesting that book, or more, easily. 

Shane:	Oh, totally. I have friends who reread that on a regular basis. You know, honestly, I would say that's a large portion of their success, is that not only do they reread it, but they understand it and they understand the dynamics at play, and then they apply it to life and they become incredibly successful by doing that. 

Matt:	Well, where can people find you online?

Shane:	So, we're at farnamstreetblog.com. F-A-R-N-A-M streetblog.com. We do three to four posts a week, covering everything from art and history all the way to philosophy and psychology, and I'm also on Twitter, which is @FarnamStreet. @ F-A-R-N-A-M-S-T-R-E-E-T, and we're on Facebook as well. Or, you can just Google Shane Parrish and Farnam Street crops up as, I think, the number one link on that. And that would be a great way for you to follow along with what we're doing and build your toolkit over time. I would encourage you that if you see an article and you're like, oh, well, I don't agree with this, or I don't want to learn about art, that you give it a week or two. I can't tell you the number of times I've had people go, you know, "A friend of mine sent me your link and I read it for a day and I was like, oh my God, what is this, and then I read it for a week and I was like, oh, this is really interesting, and then I read it for a month and I'm like, oh, I'm addicted to this. I can't actually get away from it. I've started going back and reading all their old posts." Because the topic of the day is not necessarily... I mean, our approach is to give you a broad range of solutions, or tools, if you will, so that you can build better products or solve different problems. Inevitably, we're going to come across something that you don't agree with or that you think is useless, or something you already know. And often, we contradict ourselves, right, and part of that is getting the reader to do the work of understanding that contradiction. We're giving you 90% of the solution. We want you to do the 10% on your own. And that 10% is where most of the value comes from, because if I give it to you, you don't actually understand it. It doesn't become part of your life. By you doing the work, it becomes embedded in what you're doing and how you're approaching things.

Matt:	Well, Shane, this has been a great interview, and I really want to say thank you very much for being on here, and I know the listeners are going to love a lot of the stuff that we talked about today.

Shane:	Thanks, Matt. Really appreciate it. I'm looking forward to it.

June 07, 2016 /Lace Gilger
Best Of, Decision Making

The Biological Limits of the Human Mind

November 03, 2015 by Austin Fabel in Emotional Intelligence, Decision Making, Mind Expansion

On this episode of "The Science of Success", we explore one of the fundamental underpinnings of psychology: the brain itself. 

Your brain is a roughly million-year-old piece of hardware, designed to operate in the world of hunting and gathering, where dangerous animals and competing humans may lurk behind the nearest bush.

While our society has changed massively in the last 10,000 years (or even the last 500 years), our brains have not had time to catch up.

As a result, you and I are equipped with a tool that is riddled with shortcuts and processing errors, which can manifest themselves in mistakes, calamities, and all around terrible decisions.

To find out how you can get around these and make life a little easier, listen to this week's episode "The Biological Limits of the Human Mind".

 Also, continue the conversation by following Matt on Twitter (@MattBodnar) or visiting his website MattBodnar.com.

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions how to do that!).  

EPISODE TRANSCRIPT

We’re going to start our discussion today with a story of a turkey. A mother turkey, to be precise. Turkeys are very caring parents. Research has actually shown that there’s a certain sound, a “cheep cheep” or “chirp chirp” sound, that’s an automatic trigger built into turkeys by evolution. When a turkey hears this sound, it’s almost like a switch goes off in the turkey’s mind, and it immediately goes to nurture and take care of its young. That makes sense, and the vast majority of the time, it works out perfectly. But here’s where it gets really interesting: if you take that sound, record it, and put the recording on a stuffed polecat (a polecat is one of the turkey’s natural predators), the turkey will immediately go out to the stuffed polecat and begin to nurture it, just like one of its baby chicks. Which is a pretty surprising reaction, especially when you consider that without the “cheep cheep” recording, the turkey will go absolutely insane and ruthlessly attack the stuffed polecat. Why is the turkey doing that? Because biological shortcuts are programmed into the turkey’s mind by the process of evolution. Psychologists call this a “Click, Whirr” response. 

The famed psychology professor Robert Cialdini, author of the book Influence, has studied this sort of Click, Whirr response. It sounds kind of ridiculous, right? How are these turkeys so dumb that they’re literally taking care of their arch nemesis one day, and then, without that particular sound, suddenly flipping and attacking it? But as Robert Cialdini and many other psychology researchers have shown, this phenomenon is actually a biological shortcut that’s programmed into the turkey’s mind, and humans have many of the same biological shortcuts.

So, let’s back up a million years. I want to talk about the selective power of evolution, and really understanding the mechanics of evolution and what it means. A lot of people, when they hear or talk about evolution, think that “the strong survive” or “the best survive”, whatever that means. But really what evolution says is that those best adapted to their environment happen to survive more often, and thus happen to reproduce. And so the environment selects the optimal characteristics for survival. Through evolution, the turkeys that happened to have a natural trigger, so that when they hear the “cheep cheep” sound they go and take care of their young, took more effective care of their young than turkeys with other behavioral patterns. So, those turkeys reproduced more often, and thus that trait, over hundreds, thousands, millions of years, was slowly embedded into the turkey’s behavior. Similarly, human beings have many of the same biological Click, Whirr responses as turkeys. Most of our evolutionary history has taken place in a hunter-gatherer society, and within that hunter-gatherer society, or even pre-hunter-gatherer society, evolution naturally selected a number of behavioral traits, embedded in the human mind and our psychology, that are completely non-optimal for living and existing in today’s society. In fact, if you were to compress the four-million-year evolutionary history of human society into just twenty-four hours, the advent of agriculture would take place at 11:55 p.m., just a shade before midnight. So, if you consider that the evolutionary time scale of our development was nearly four million years, and that agriculture, which arose thousands of years ago, only shows up at 11:55 p.m. on that 24-hour clock, you really get a sense of how little time we’ve had to adjust to the constraints and stresses of modern-day society. The things that were naturally selected in a hunter-gatherer environment, where you’re foraging for food, living in a small tribal society, dealing with predators, dealing with all different kinds of dangers, the behavior patterns that are optimal for survival in those circumstances, are not the same behavior patterns that are optimal for succeeding in today’s society and in today’s world.

Society has changed massively in the last two or three hundred years, let alone the last several thousand years, let alone the last several million years. So there are a couple of key ways these changes manifest themselves. The first example is the idea of seeking explanations for things: wanting to understand, wanting to put an explanation on something when that explanation isn’t necessarily right, isn’t necessarily there, doesn’t necessarily fit. This is pattern recognition. Humans are incredibly effective at recognizing patterns, so much so that sometimes we recognize patterns that don’t even exist. 

Another way that this manifests itself is through fear and anxiety. When you think about it, if you’re living in kind of the world of the hunter-gatherer, if you have all of these stresses taking place, if you have a predator lurking behind a bush, if you eat these berries and they’re poisonous, you may not live, right? All of these different things in that world – it pays to be very cautious, it pays to be very skittish, it pays to avoid taking risks and to be very anxious about what might happen to you if you were to take a certain course of action. In reality, that sort of behavior is deeply engrained into us. Some people use the term “lizard brain” to describe that type of behavior. 

Another way this manifests is in fast classifications. If you’re living in the hunter-gatherer world and you see something, you need to be able to classify it immediately: “that’s a threat, that’s dangerous, this is safe.” The people who could classify really quickly, the more quickly they could make a decision, had a higher probability of surviving. But a lot of times in today’s society, those fast classifications end up being the wrong classifications, or our evolutionarily programmed mental shortcuts end up short-circuiting. Those shortcuts are designed to be incredibly effective, and 99% of the time they are. There’s so much information deluging us every day, nonstop ads and e-mails and all kinds of things, that we have to be able to filter out a lot of that junk. But occasionally, these fast classifications and mental filters will let something in, or classify something in a way that’s completely inappropriate, and you have an outsized event take place where a massive mistake happens that you never could have foreseen, because your mental shortcuts essentially misfired.

Another thing that was preprogrammed into us in this hunter-gatherer world is the focus on society and the tribe. If you think about it from a reproductive standpoint, somebody who gets exiled from the tribe loses food, loses potential mates; it’s pretty much a death sentence in many ways to get exiled from a tribe in a hunter-gatherer world. So, people naturally developed the traits that led them to want to please others, and many of these traits are incredibly beneficial. Occasionally they misfire. But not wanting to do something that’s socially unacceptable, wanting to get the approval of other people, all of these things were essential to survival in a world where being exiled from the tribe meant death. 

But the reality is that all of these different filters manifest themselves in ways that are completely counterintuitive and unproductive in many of the contexts we find ourselves in today, in the modern world. The same Click, Whirr psychological shortcuts that are essential to survival in the hunter-gatherer world can, in today’s high-pressure business and social situations, give us exactly the wrong impulse, exactly the wrong way to think about how to handle these situations. And so there are hard biological limits on your mind: your subconscious, your ability to process things, your innate, built-in biases, the way you perceive the world. Every single human has a ton of inherent challenges and problems inside their mental machinery, pre-programmed into us over millions of years, where evolution and the hunter-gatherer society have essentially sculpted the human brain into a tool that, while perfectly optimized to survive and reproduce in the world of a million years ago, has a number of shortcomings in today’s society. 

This is one of the most critical first things to understand if you really want to understand the psychology of peak performance. You have to understand what the physical limits of the brain are. You have to understand that these limits exist so you can start to recognize the patterns in how they play out, where the biological limits of your mind naturally cause you to make certain judgements, to feel certain ways, to think about things in a certain fashion that is not the right, or optimal, way to think about them. And so, throughout the course of this podcast, I'm going to teach you a bunch of different cognitive biases, a bunch of different ways that you trick yourself and that your mental circuitry short-circuits, and give you the tools and capabilities to overcome these problems, to understand and see your own mental limitations, so you can achieve the goals you want to achieve, so you can be successful, so you can master your own psychology.

Warren Buffett has an analogy where he talks about the mind as a motor. Your IQ, your innate intelligence or talent, represents your horsepower. Right? Say an engine has 500 horsepower, whatever it might be. Your IQ represents that raw potential. But your output is what actually counts, right? Do you have the mental toolkit, the mindset, and the ability to use that 500-horsepower engine to go ten miles an hour, or to go a hundred miles an hour? The biological limits set in place by evolution over millions of years of human history are going to naturally constrain your ability to do that in many ways, and without self-awareness and knowledge of what those biases are, you're inherently limiting your ability to maximize the output of your engine. So, I want to teach you the tools. I want to give you the framework to think about all of these different pieces of the puzzle so you can really understand: How am I lying to myself? How is my mind tricking me? What are the shortcuts that are naturally misfiring in my mind, causing me to have these challenges or issues, or preventing me from achieving the goals that I want to achieve?

And so that's what we're going to talk about throughout this series, the Science of Success. This is really one of the cornerstones, one of the fundamental pieces, of grasping the depths of the human mind, of understanding how performance really functions, and how the mind truly functions.

November 03, 2015 /Austin Fabel
Emotional Intelligence, Decision Making, Mind Expansion