[00:00:19.4] ANNOUNCER: Welcome to The Science of Success. Introducing your host, Matt Bodnar.
[0:00:12.0] MB: Welcome to the Science of Success, the number one evidence-based growth podcast on the internet with more than 2 million downloads, listeners in over a hundred countries and part of the Self-Help for Smart People Podcast Network.
In this episode, we discuss how to make better decisions under conditions of uncertainty. We look at the worst call in the history of football and discuss examples from life, business, and even high-stakes poker to understand how to make the best possible decision in a world filled with unknowns.
What exactly is a good decision? Is that different than a good outcome? We look at this key question and uncover the wisdom hidden in the reality that these two things might just be completely different. All this and more with our guest, Annie Duke.
Do you need more time? Time for work, time for thinking and reading, time for the people in your life, time to accomplish your goals? This was the number one problem our listeners outlined, and we created a new video guide that you can get completely for free when you sign up and join our email list. It's called How You Can Create Time for the Things That Really Matter in Life. You can get it completely for free when you sign up and join the email list at successpodcast.com. You're also going to get exclusive content that's only available to our email subscribers.
We recently pre-released an episode and its interview to our email subscribers a week before it went live to our broader audience, and that had tremendous implications, because there was a limited offer in there with only 50 available spots that got eaten up by the people who were on the email list first.
With that same interview, we also offered an exclusive opportunity for people on our email list to engage one-on-one for over an hour with one of our guests in a live exclusive interview just for email subscribers.
There's some amazing stuff that's available only to email subscribers and that's only going on if you subscribe and sign up to the email list. You can do that by going to successpodcast.com and signing up right on the homepage, or if you're driving around right now, if you're out and about and on the go and you don't have time, just text the word “smarter” to the number 44222. That's S-M-A-R-T-E-R to the number 44222.
In our previous episode, we went deep on the science of personality. We looked at how we've moved way beyond the debate of nature versus nurture, examined the myth of authenticity and the danger of just being yourself, and explored why human well-being, a.k.a. success, depends on the sustainable pursuit of core projects in our lives.
We explored the complex dance of self-improvement between the limitations of biological and social factors and the identity of individuals, and looked at how much agency and control we really have in shaping our personalities and lives among all these different factors with our guest, Dr. Brian Little. If you want to really understand yourself and how to live a better life, listen to that episode.
Now, for our interview with Annie.
[0:03:10.6] MB: Today, we have another unique guest on the show, Annie Duke. Annie is a professional decision strategist and former professional poker player. She's leveraged her expertise in the science of smart decision-making throughout her life, and for two decades she was one of the top poker players in the world. She's the author of the book Thinking in Bets: Making Smart Decisions When You Don't Have All the Facts, and after being granted a National Science Foundation fellowship, she studied cognitive psychology at the University of Pennsylvania.
Annie, welcome to The Science of Success.
[0:03:40.3] AD: Thanks for having me.
[0:03:41.8] MB: Well, we're very excited to have you on the show today. As we kind of talked about in the preshow a little bit, I am a poker player, so I'm familiar with you and some of your work, and I think the decision-making sort of thinking methodology that poker teaches is so valuable and so applicable to a broader sphere of life.
So I’d love to start out with kind of this idea that you talk about, which is sort of the difference between bad decisions and bad outcomes.
[0:04:06.4] AD: Yeah. Sure. So I think that one of the big problems that we have in life is trying to figure out the lessons that we're supposed to take from the way that things turn out. So we have this idea that you should be learning from experience, but that's actually really difficult, because there's a lot at play in the way that your outcomes relate to the actual quality of the decisions that lead up to them.
So there's this very loose relationship between outcome quality and decision quality, which you can see really well in poker, right? I can play the best hand and I can actually play it very well, and on the turn of a card, because I don't have any control over the cards that come, I can lose. So I can make a really good decision and have a very poor outcome. Or I can play a really bad hand, actually play it pretty poorly, and because I get lucky in the cards that are still to come, I could actually win. So I can make a really poor decision and have a really good outcome.
And this loose relationship actually creates a lot of problems for us, and what you see people do when they're evaluating outcomes out in the world is this thing called resulting when they're looking at other people's results. What resulting is, is tying the quality of the outcome too tightly to the quality of the decision. In other words, thinking that it's a reasonable thing for you to be able to work backwards from whether the outcome was good or bad to whether the decision was good or bad. In other words, thinking that if I win a hand of poker, I must have played it well, or if you lose a hand of poker, you must have played it poorly.
So if you want me to, there's a really, really good example of that, which I actually open the book with, about Pete Carroll and the Super Bowl, if you want to go into that as an example.
[0:06:00.9] MB: Yeah, I’d love to hear that example and dig into it.
[0:06:03.2] AD: Okay. Then, actually, I just posted today on Twitter about a very good example of that that we could get into that's like brand-new research. So I think that might be fun to look at just really quickly. So let me give you that sort of popular sports example of resulting.
So it's the 2015 Super Bowl, Super Bowl XLIX, and the Seahawks are on the 1 yard line of the New England Patriots. It's second down on the 1 yard line, 26 seconds left in the whole game, and the Seahawks are down by four and they have one timeout.
So, obviously, if the Seahawks can score here with a touchdown, they're going to win the game, because there's not going to be time for the Patriots to be able to march back out on the field. Everybody is expecting, because it's second down and they've got the one timeout, that they're going to hand it off to Marshawn Lynch, who's one of the best short-yardage running backs in the history of the game, and he'll obviously just try to barrel through the Patriots line. Then if he can't do that, Pete Carroll will call a timeout, they'll hand it off to Marshawn Lynch again and give him two tries at the end zone.
So that’s what everybody kind of thinks is going to happen, and instead, what Pete Carroll does is something pretty surprising, which is he calls for a pass play. So he has Russell Wilson pass the ball. The ball is very famously intercepted in the end zone by Malcolm Butler. Obviously, that ends the game.
It's really interesting. People can go on to YouTube and see the clip from the actual game, and I really recommend that you listen to Cris Collinsworth actually call this play. After the ball is intercepted – I mean, Cris Collinsworth is just flabbergasted and saying this is the worst call that he's ever seen in the Super Bowl, and the headlines the next day didn't disagree. USA Today, The New York Times, The Washington Post, they were all saying this was the worst call in Super Bowl history. The Seattle Times, which I think has more skin in the game, actually said it was the worst call in the history of all of football.
So the question is, was this really a bad call? We know it was a really bad result for sure. It was a very, very bad outcome. But was it a bad call? Well, Pete Carroll was asked about it on The Today Show, and I think they were trying to get him to say that it was a really bad call, and what he said instead was, “Well, I agree that it was the worst result of a call ever.”
I think that that's an incredibly insightful comment by Pete Carroll, because when you actually look at the math – and you can do this; Benjamin Morris over on 538 has a pretty good analysis of this, and I think Brian Burke over on Slate analyzed it as well – the math actually looks pretty good for doing a pass play.
I think the key piece of information to note is that if you look at the 2015 season, the number of interceptions in that situation was zero. That's probably too small of an n. But if you look over the past 15 seasons, which I think is generously aggregating the data, because coverage has changed, but let's just go over the last 15 seasons to get a lot of data, then the interception rate in that spot was only 2%.
The interception rate in that situation is going to be somewhere between 0 and 2%, depending on what data you pull. I think once you know that, it's a little bit easier to recognize that it was probably just a really unlucky outcome and not necessarily a really bad decision, that it was just an unlucky bad outcome. I think that it's pretty easy to get there once you kind of imagine that Pete Carroll called that pass play and the ball was caught for a touchdown. What do you think the headlines could have looked like?
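The reasoning here is statistical, so a tiny simulation may help make resulting concrete. This is a hedged sketch, not actual NFL data: the 2% interception rate is the figure quoted above, and the trial count is arbitrary.

```python
import random

def observed_rate(event_prob, trials, seed=0):
    """Simulate many replays of the same decision and return how
    often the unlucky event (the interception) actually occurs."""
    rng = random.Random(seed)
    hits = sum(rng.random() < event_prob for _ in range(trials))
    return hits / trials

# Replaying a ~2% interception spot 100,000 times: the call loses
# only about 1 time in 50, yet resulting judges it on the one
# replay we happened to see.
print(f"simulated interception rate: {observed_rate(0.02, 100_000):.3f}")
```

Run once, the outcome is binary and tells you almost nothing; run many times, the 2% base rate emerges, which is exactly the gap between a single result and the quality of the underlying decision.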
[0:09:57.6] MB: Probably most genius play in Super Bowl history.
[0:10:00.0] AD: Yeah. I mean, I think that they would've been lauding him, saying this is the kind of thinking that got him to the Super Bowl in the first place. You could imagine that, right? He out-Belichicked Belichick, who's known as a pretty creative coach.
We actually have an interesting example of this now, which is the Philly Special. So for people who are familiar with this year's Super Bowl, the Philadelphia Eagles are on fourth down on the 1 yard line of, again, the Patriots, who are in the Super Bowl every year as we know. They're up by three, the Eagles, and everybody is expecting Doug Pederson to call for a field goal and just take the easy three and go up by six going into the second half.
Instead, he goes for it — not only goes for it on fourth down, but runs a very unusual play called the Philly Special, and Nick Foles, the quarterback, ends up in the end zone catching the ball. Everybody talks about how it's an incredibly brilliant play. But if it had gotten dropped and the Eagles had actually lost that game, you know that that's exactly where people would've been looking, talking about what a stupid decision it was by Doug Pederson not to just take the three points.
Hopefully what you can feel from those examples is how much we emotionally get pulled around in the way that we evaluate the quality of our decisions based just on the quality of the outcome. Because, obviously, whether that ball is caught or intercepted that one time in the end zone by the Seahawks does not have anything to do with whether the decision was good.
In the same sense, if I go through a green light and I happen to get in an accident, that doesn’t mean that going through green lights is a bad decision. If I run a red light and I happen to get through it safely, it doesn't mean that running red lights is a good decision either. It’s the same thing for Pete Carroll there, and this is a really, really, really big problem for decision-making, is that when we’re trying to learn from our experience, we get so hung up on whether things turned out well or things turned out poorly in terms of whether we repeat the decision or change the way that we’re doing things. We end up with these weird reactions to the way that things turn out, and if we can't get past that and if we can't get past resulting, it becomes very hard to become effective at learning.
[0:12:22.3] MB: It's such a great example, and I love the idea of the stoplights, because that really crystallizes it, I think, really clearly. As somebody who, you know, plays poker, obviously, a suck out is a great example of something like that. But for so many people, even a lot of people you see at the poker table, they get so focused on the result of the hand as opposed to what the decision-making process was and what kind of thinking went into it.
So I think pulling it out and providing some context in sports and with other examples is a great way to shine light on the fact that it's so easy to get caught up in the result, and yet what really matters, since it's the only thing we can actually control, is how we make better decisions.
[0:13:00.7] AD: Yes. So there've been a couple of articles that have come out recently that are looking at this by really good economists and behavioral scientists. So one was looking at – and this is where you can see how these overreactions and underreactions can be really, really devastating based on a single result.
So they were looking at the NBA. One group of researchers was looking at the NBA, and they were looking at cases where the team won by one point or lost by one point. So I imagine you would agree with me that winning by one point or losing by one point is really mainly a matter of luck, that there's probably no difference in the level of play or the decision-making that goes into a one point win or a one point loss. That's just variance. I mean, I assume we can just agree on that.
What they looked at was lineup changes, and what they found was that you were much, much more likely to get a lineup change after a one point loss than you were after a one point win. Of course, in reality, there should be no difference between those two. So what you're getting is this big overreaction to a one point loss, which is just based on the quality of the outcome feeling like you need to change something now. But you're not getting that same reaction to a one point win, even though statistically those would be the same.
Then, actually, there was a paper that I was just looking at today from – I'm probably going to butcher his name – Spyros Makridakis, and what he was looking at, he created a situation where he was giving people data to evaluate. So it's quasi-experimental. He was looking at how good agents in soccer were at evaluating talent, and it was close misses on kicks, where the ball either hit the post or it just went in.
Again, these are going to be a matter of luck, because you're very close to the post. What they found was that agents were much more likely to view the player as talented when it just went in versus when it hit the post, and that's very similar to the NBA situation.
So these are cases where we know that the determining factor is mainly luck, and yet people are acting as if the outcome is a big signal for skill. In our own lives, we see this all the time. When in our business lives or our personal lives we have a bad outcome, we go and look for things that we can change. We think, “Oh! I need to change something because I had a bad outcome, so I need to change strategy,” and you get these very big overreactions. Then when things are going well, we think we're supposed to rinse and repeat, and we don't recognize that we should probably be patting ourselves on the back less, because good outcomes are always going to have some luck element to them, and sometimes a very big luck element. We should also probably be changing strategy and recriminating ourselves less when we have poor outcomes as well.
[0:15:55.6] MB: So how do we start to combat that emotional reaction, that natural gut reaction to think about the result, as opposed to pulling back and evaluating the process?
[0:16:09.3] AD: I'm really glad that you asked that question, because that’s a really big – Okay. So here's the situation. There's a lot of noise. We know that our brains work this way. We know that we naturally try to make these connections. So my answer to you is not, well, now that you’ve learned about it, you’re fine. That's not actually very helpful.
So that’s the first thing, is that with any of these cognitive biases, having knowledge about the bias is not necessarily that helpful and it's really just because this is the way our brains work, like we’re just sort of built with them and we can't take our brains offline and install new software. Like here we are.
So first let me give you the good news, which is that very small changes can have a very big impact. So you don't need to get that much better at this in order to have a big impact on what the quality of your outcomes over your lifetime is going to be, because it acts like compounding interest. So if you can get a few percentage points better at being more rational in evaluating your outcomes, for example, or overcoming confirmation bias, or not succumbing to hindsight bias, whatever it might be, you're going to do a lot better in life. So that's the good news.
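The compounding-interest analogy can be put into rough numbers. This is a minimal sketch under stated assumptions: the 2% "edge" and the decision counts are invented for illustration, standing in for a small per-decision improvement in rationality.

```python
def compounded_quality(edge, n_decisions):
    """A small per-decision improvement compounds like interest:
    relative outcome quality grows as (1 + edge) ** n."""
    return (1 + edge) ** n_decisions

# A hypothetical 2% edge, applied over a growing number of decisions,
# multiplies the baseline far beyond what the per-decision gain suggests.
for n in (10, 100, 365):
    print(f"after {n:>3} decisions: {compounded_quality(0.02, n):8.2f}x baseline")
```

The point of the sketch is just the shape of the curve: a few percentage points per decision, repeated over a lifetime of decisions, is a very large difference in aggregate.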
Let me give you some hints about how to deal with this. So hint number one is to approach the world in the frame of “want to bet?” So, really, to think about: would I be willing to bet that this is why this happened? So, for example, with the Pete Carroll thing, if you said, “Oh! I can't believe that. That was the worst decision in Super Bowl history,” and I said, “Well, do you want to bet on that?” What would happen to you when I challenged you with that question?
[0:17:50.0] MB: Yeah, that’s really interesting. I mean, I think you start to – At least the way I would think about it is start evaluating other decisions. Start looking at the probabilities of different outcomes happening and that kind of stuff. Then it gets – You start to think much more objectively and kind of quantitatively about it.
[0:18:05.0] AD: Right. Because what that “want to bet” does is it causes the uncertainty to bubble up to the surface. So if we think about why it is that the relationship between outcomes and decisions is hard to evaluate, there are two sources of uncertainty that cause that. The first is luck, which we've discussed, that there's just a lot of luck in the way that things turn out. The second is information asymmetry, or hidden information. We don't know what Pete Carroll knows, for example, when we're trying to evaluate that. We don't know what coverages he saw, what kind of things he had practiced.
In fact, when we're actually watching that, it's not like we're living in the Matrix and we can see those percentages or the decision tree right in front of us. We don't know the math either. So we're sort of guessing at it.
So when I challenge you with “want to bet,” what it does is it reminds you that while you might have been so certain about what the connection between the quality of that outcome and the quality of the decision was, there is, of course, uncertainty there from hidden information and luck. When I say “want to bet,” it spurs you to start trying to examine the uncertainty. So it causes the uncertainty to bubble up to the surface in a really good way, and you start asking yourself questions like, “Well, why do I think this? What research have I done? What's the math? What are the real probabilities? What does Annie know that I don't know? Why is she challenging me to this bet?” So it causes you to think about what my perspective might be, which is really important, because one of the best ways to be a good decision-maker is not just to imagine things from your perspective, but to try to imagine things from other people's perspectives. That's because other people's perspectives offer valuable insights that you might not yourself have. So you start to question what my knowledge might be.
All of this causes you to be very, very information hungry, because once I say “want to bet” and you acknowledge the uncertainty, because I've caused the uncertainty to come to the forefront, now what you want to do is start to narrow down the uncertainty, to reduce the uncertainty. In order to do that, you have to start seeking out information and thinking about things from different perspectives. So that's the first step, to really recognize this probabilistic relationship and to try to ask yourself as much as possible, “Would I be willing to bet on this?” Because that allows the uncertainty to come through.
But, remember what I said to you. It's hard for us to get around our brains working this way. So the second thing that is really, really helpful is to get other people involved in the process with you. That's because I think that in your own life you've probably noticed that you're pretty good at recognizing when other people are biased, like what other people's biases are. Even if we're not so good at recognizing it in ourselves, we seem to have a clearer view of when other people are maybe engaging in biased thinking.
So we can use that to our advantage and we can get some people together and say, “Listen. I’m going to watch your back and you're going to watch my back.” But the key is that you want to do it through this idea of we’re going to approach the world through thinking and bets. What that means is that we’re going to approach the world through the frame of trying to be accurate, versus trying to be right.
So let me explain what the difference between those two things is. If we're engaging in betting, the person who's going to win is the person who has the most accurate representation of the objective truth. I assume you agree with that.
[0:21:37.9] MB: Absolutely.
[0:21:39.0] AD: Right. So that's what I mean by accurate, that you're trying to figure out, sort of objectively, what the world is, and to have the most accurate mental models, right? So we are accurately modeling the world. We're not just trying to prove that we're right. Now, we've heard a lot in the news about echo chambers.
So when most people get together in a group, they sort of form a tribe whose goal is to prove that they are better than everybody else, right? We kind of see this in politics. The problem with that is that instead of de-biasing the individuals in the group, it actually causes more bias, because we're all looking at the world just trying to agree that we're all smart and right. So whatever our prior beliefs are, they get affirmed.
So you'll say something about politics and I'll be like, “You're totally right,” and the other side are a bunch of idiots. Then you're like, “Yeah,” and then I'm like, “Yeah,” you're like, “Yeah,” and I’m like, “Yeah.” Obviously, that's not very helpful.
So when most of us are approaching the world, we're just trying to prove that we're right, that the things that we already believe are true, and people who think that way are going to lose in betting. You probably know this from poker too, right? People who persist in their same beliefs about, like – I mean, a good example would be that there are some people who think you're always supposed to slow-play aces, right? Meaning, try to trap people with aces. Like, how do their lives go?
[0:23:08.1] MB: Yeah, I think you're totally right. I mean, poker is such an unforgiving sort of crucible of learning, which is why I think it's such a great thinking tool, because in poker, the game doesn't care if you persist in your own ignorance, or persist in trying to be right instead of trying to get to what's true. So you keep getting punished over and over and over again mercilessly until you change your thinking and start evaluating your own biases.
[0:23:34.9] AD: Right, or go broke, which is actually what happens to the majority of people, which is what's so telling, right? The people who are successful are the ones who do the thing that you said, right? But those people are very few and far between. So we're trying to get ourselves into that group of people.
[0:23:51.3] MB: And it's funny. I mean, we talk at length about this on the show, and I wrote an article at the beginning of this year called The Biggest Threat Humans Face in 2018, and we'll throw that into the show notes if any listeners want to check it out. But it was basically all about this idea that people today live in a world where they don't care about pursuing the truth. They only care about being validated and feeling like they're right. So that's causing all kinds of social and political problems. The article sort of addresses, “Well, how do we move forward and how do we advance as a civilization if we're slipping into this place of ignorance and we're losing the pursuit and the grasp of what's actually true?”
I think our podcast in some ways is kind of a project to try and teach people how to think and teach people what's important, so that they can kind of take their own journeys toward finding the truth, leveling up and being smarter, better versions of themselves.
[0:24:45.8] AD: Yeah, I completely agree. We're certainly seeing this in politics, right? The argument isn't about what's the best policy. It's about, are you on my side? So, exactly. That piece that you wrote is obviously totally germane to what we're talking about right now. So if you get a group of people together, like what you're trying to do with this podcast, where you're saying, “Look. What we care about is not being right. What we care about is being accurate,” then I'm agreeing that because my goal is accuracy, I'm going to have to take some short-term pain sometimes. It means that sometimes I'm going to find out that something I believed is not actually accurate. Now, that's okay, because my goal is accuracy.
So what you're doing within the group is you're reinforcing this new mindset. The reason why we think, “Oh! I just want to be right,” is because it feels good to be right and feels bad to be wrong, and we're all just trying to feel good about ourselves. We're all just trying to feel like we're smart and we're valuable and our opinions are meaningful and we're good thinkers and all of those things. If we have to say, “This opinion that I hold, it actually turns out I need to calibrate it, because it's not exactly right,” in the moment, that feels really bad to us.
But if we're in with a really good group of people and we've decided that the goal is accuracy, what ends up happening is that the kinds of things that we start to feel good about shift. So, for example, if I come up to you and I say, “You know what? I think I was really wrong about this thing. Let me talk to you about it,” you'll say, “Oh my God! That's so amazing that you changed your mind, and thank you so much for sharing that with me.”
Now, what’s happened is that because we have this commitment, I’m now being reinforced for the act of mind changing, or the act of calibration, or the act of mistake admitting, or the act of giving credit to someone that maybe I don't want to give credit to, because, for example, they’re on the opposite side of the political aisle, or whatever it might be, and you’re reinforcing those behaviors for me.
So now what happens is that the kinds of things that cause me to feel good about myself are things that actually move me toward the goal of accuracy. So I don't have to give up feeling good in the moment, because I change what it is that makes me feel good. The best thing that I could do, for example, in poker: if I walked up to, say, Erik Seidel, who's an amazing player and one of my mentors, and said, “Man! I think I really butchered this hand,” that would get so much more reinforcement than if I walked up to him and just said, “Oh! I got so unlucky,” which is the kind of talk that most people are reinforcing.
Erik Seidel would've walked away from me if I said, “Oh, I just got so unlucky,” because he didn't care about whether you're right or you're wrong. He cared about, are you getting better? Are you getting more accurate? Are you moving toward your North Star?
So now, when I walk up to him and say, “I think I really butchered this hand,” which might feel really bad at the outset, once he's become my mentor, that's what makes me feel good. So get a really good group of people together where you're committing to accuracy. You're going to hold each other accountable. You're going to listen to diverse viewpoints. You're going to be willing to – and here is a really important thing – you're going to be willing to sit in the middle, not saying something is 100% or 0%, that Pete Carroll's call was bad or good, but holding those beliefs somewhere in the middle. Because once I say, “Do you want to bet?” it causes you to ask, how sure am I about this? What you realize is that you're very, very rarely 100% or 0% on anything, and it moves you to the middle, where you're like, “Well, I'm like 60% sure that was a terrible call.”
So now when I start to walk through the math with you, instead of having to have a full-on reversal from right to wrong, you get to go from like 60% to like 45%, which isn't as big a deal, because you're sitting in comfort with uncertainty anyway. So that's the second step that's really important.
So the first is really start thinking about “want to bet” and really start embracing uncertainty and understanding the uncertainty in this relationship. The second is get other people to help you. The third is – I think this is the really interesting one – the best way to ensure that you're learning well from experience is to actually try to quarantine yourself from experience.
So I know that that sounds a little bit weird, but let me try to explain what I mean. When you have an outcome, good or bad, the quality of the outcome casts such a strong shadow over your ability to evaluate the decision quality that it's mostly better not to have the outcome at all. I mean, unless you actually have like 10,000 outcomes, unless you can flip a coin 10,000 times and you can run a Monte Carlo, which for many, many decisions we can't.
For most decisions, the probabilities are relatively unknown. You're guessing at them, and we mostly don't get enough tries at the decision in order to have enough data to be able to say something across the aggregate, right? So we're usually dealing with just a handful of outcomes as we're trying to evaluate decisions. We know that the outcome quality just casts this very big cognitive shadow. So what we have to do is try to figure out how to get out from under that cognitive shadow.
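The handful-of-outcomes problem can be sketched with a quick Monte Carlo. A hedged illustration: the 60% win probability is an invented number standing in for a genuinely good decision, and the sample sizes are chosen only to contrast a few tries against the 10,000 mentioned above.

```python
import random

def sample_win_rate(true_p, n, seed):
    """Estimate a decision's win rate from n simulated outcomes."""
    rng = random.Random(seed)
    return sum(rng.random() < true_p for _ in range(n)) / n

TRUE_P = 0.60  # a decision that genuinely wins 60% of the time

# Judged on just 5 outcomes, a good decision frequently looks like a loser.
bad_looks = sum(sample_win_rate(TRUE_P, 5, seed=s) < 0.5 for s in range(1_000))
print(f"looked losing in {bad_looks} of 1,000 five-outcome samples")

# With 10,000 outcomes, the luck washes out and the estimate converges.
print(f"10,000-outcome estimate: {sample_win_rate(TRUE_P, 10_000, seed=0):.2f}")
```

With only a few outcomes, the estimate is dominated by variance, which is why a single result casts such a misleading shadow over decision quality.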
The way to do that is to kind of ignore the outcome and focus on the process of the decision in the first place. There are really four things you can do in order to do this. Number one, as much as possible, evaluate the decision prior to getting the outcome. For example, if you're thinking about a particular sales strategy, do a really good evaluation of that sales strategy in advance. Try to imagine what the outcomes of that strategy might be, or particular tactics that you might be employing and what the outcomes of those particular tactics might be.
When you're thinking about what those outcomes might be in advance, think about what the probabilities of those outcomes might be and actually write them down, and don't be afraid to try to do that. People will say to me, “Well, I can't say what the probability is, because I don't know what the exact answer is.” But it's not school, where it's like, if I flip a coin, how often will it land heads? We know the answer is 50%.
For most things, you're going to be guessing. It's going to be a range. You're going to be like, "I don't know. It's somewhere between 30 and 55% of the time," and that doesn't feel good to most people because it feels like a wrong answer, but it's not, because it's better than not trying at all. Once you get that range on it, just like with the "Wanna bet?" question, it causes you to try to really seek out the information that might allow you to narrow that range.
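For what it's worth, that "somewhere between 30 and 55%" range can still be carried through a calculation rather than collapsed to a single point guess. A minimal sketch in Python (the $10,000 payoff and the function name are hypothetical, just for illustration):

```python
def ev_range(payoff, p_low, p_high):
    """Expected-value bounds when the probability is only known as a range."""
    return payoff * p_low, payoff * p_high

# A hypothetical $10,000 payoff whose probability we can only
# bracket between 30% and 55%.
low, high = ev_range(10_000, 0.30, 0.55)
print(low, high)  # 3000.0 5500.0
```

Narrowing the probability range with more information tightens these bounds, which is exactly what makes the estimate "information hungry."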
So it makes you very information hungry, which is good, because you're actually thinking probabilistically. So think about the outcomes. Try to assign some probabilities to those outcomes, and now that helps you, because it helps you to actually make a better decision in the moment prior to the outcome coming. Then once the outcome hits, because you've memorialized that process, you're less likely to overreact to a single outcome, because you recognize that outcome is part of a set. So that's the first strategy: as much as possible, try to do this in advance.
Now, there are some decisions where you can't do that in advance. So for example, you know from poker, I can't go through that process and memorialize all that stuff and work in a group to try to get to those scenarios. When I'm making a decision at the poker table, I have 30 seconds. So pretty much all of my evaluation is occurring after the fact, ex post.
So what do you do then, once you already have the outcome, if we know that the outcome casts such a strong shadow? There are three strategies for that. The first is, if I'm working with you as a member of my decision pod to talk about an outcome, to talk about the quality of my decision, it's really good if I walk you through the decision only to the point where I have the question, and not beyond.
I imagine you know from people describing poker hands to you that this is actually really difficult to do, right? When people are describing poker hands to you, how often is it, when they're asking you a question, that they don't describe the whole hand before they ask for your advice?
[0:33:07.5] MB: Yeah. I think there's always missing information. What position were you in? What was the stack size? What was the stack size of your opponent? What were the table dynamics? There are so many pieces of the puzzle, and oftentimes people just leave out huge factors that could meaningfully impact it.
[0:33:22.2] AD: So I think that that's true. But how often is it that someone describes a hand to you where they have a question about the hand, and they don't tell you how the hand turned out? Don't they tell you how the hand turned out like every single time?
[0:33:35.7] MB: Yeah, that's right. The best way to do it would be to just –
[0:33:37.1] AD: And then they ask you.
[0:33:38.4] MB: Yeah. Just don't say the outcome, and then analyze whether they made the right decision or not.
[0:33:42.3] AD: Right. Because, think about it, here's the problem: once I've told you the outcome, I've now infected you with the outcome. We know that there's this bad thing called resulting. I'm now going to make you result.
So, naturally, whether you're trying to or not, if I tell you that I won the hand, you're probably going to process my decision as better. If I tell you that I lost the hand, you're going to process it as worse. So when I describe, for example, a poker hand to you, I might say, "Okay. I was in first position. I'm going to give you all the right info, obviously. I was in first position, and this was my stack size and this was my opponent's stack size. Here's what my hand was, and here's whether I've been loose or tight or perceived as aggressive or whatever it is. Here's the perception of me. Here's how that person has been playing. I had this particular hand and I was trying to decide whether I should open the pot for a raise or fold. What do you think?" I stop.
Most people don't do that. Most people will move on and say, "So, I was trying to decide whether I should raise or fold. So I raised, and then they did this," and blah-blah-blah, and they describe the whole hand, and then they'll say to you, "So what do you think of whether I should've played the hand or not?" You might as well not ask the question at that point.
That's true for infecting people with any kind of belief. For example, if you have a hiring decision and you have four people interview the potential candidate, and you allow those people to talk to each other before they come and tell you their opinion, you might as well have had one person interview the candidate. You have to figure out how to quarantine people from these things that are really a part of what causes bias. We know that my beliefs can cause bias for myself, because I have a natural tendency to try to argue toward my own beliefs.
Guess what? If I tell you my beliefs, I've now infected you with those, and we're going to now probably come to consensus in our view toward those beliefs. If you know the outcome of a decision, I've infected you with it. So when I'm talking to you and I'm trying to work with you, it's really good for me to not tell you what I believe and not tell you the outcome. That's a really good thing to do: only describe up to the point where I have the decision and try to quarantine you from the rest of this stuff.
Another really interesting exercise that you can do, which I think is super, super valuable and which I kind of hinted at a second ago, is this: take one group of people and describe the decision that you have. Go past the decision point to the end, to the outcome, and tell them that the outcome was good. Then take another group of people, describe the exact same decision process, and tell them that the outcome was bad. Just look at what happens in terms of their evaluation of the decision process, so that you can see how much the outcome drives the analysis of the decision, how much it biases the analysis of the decision. Then you can interpolate between the two. The people you tell that the outcome was poor are going to point out the weaknesses in the decision process, and those you would obviously like to be able to see. The people you tell the outcome was good to will point out the strengths in the decision process, and then you can combine those two pieces of advice to try to get to a better answer about what the quality of the decision was.
[0:37:03.8] MB: This week's episode is brought to you by our partners at Brilliant. Brilliant is a math and science enrichment learning tool. You can learn concepts by solving fascinating, challenging problems. Brilliant explores probability, computer science, machine learning, the physics of everyday life, complex algebra and much more. They do this with addictive interactive experiences that are enjoyed by over 5 million students, professionals and enthusiasts around the world.
One of the cool things that I really like about Brilliant is that they have these learning principles, and two of them in particular really stick out to me as powerful and important. One of them is that learning is curiosity driven. If you look at some of the most prolific thinkers and learners in history, people like Leonardo da Vinci and Albert Einstein, they were incredibly curious individuals. Just really, really curious, and it's so great to see that one of the learning principles is this principle of curiosity.
Another one of Brilliant's learning principles that's absolutely critical is that learning needs to allow for failure. If you look at Carol Dweck, if you look at the research behind mindset, this is one of the cornerstones of psychology research. You have to be able to fail to learn and improve. You have to be able to acknowledge your weaknesses. You have to be able to push yourself into a place where it's okay to make mistakes.
These learning principles form the cornerstone of the foundation of Brilliant. It's such a great platform. I highly recommend checking it out. You can do that by going to brilliant.org/scienceofsuccess.
I'm a huge fan of STEM learning, and that's why I'm so excited that Brilliant is sponsoring this episode. They’ve been a sponsor of the show for a long time and there's a reason. They make learning math and science fun and engaging and exciting. You can get started today with Brilliant by going to brilliant.org/scienceofsuccess. That's brilliant.org/scienceofsuccess.
If you’ve been enjoying our weekly riddles in Mindset Monday, we’re also collaborating with Brilliant to bring some awesome and exciting riddles to our Mindset Monday email list.
[0:39:10.7] MB: So I want to change gears a little bit, but come back to a concept that I think underpins a lot of this thinking, which I love, this concept that you've talked and written about, this whole idea of how poker, and broadly decision-making, is really about trying to make decisions under conditions of uncertainty.
As you put it in the subtitle of the book: making decisions when you don't have all the facts. I think one of the key components of that is this concept of expected value and how it weighs on our decision-making process.
We spent a lot of time on the show digging into the bias side and the human side of it, but I'd love to get into it with somebody who's been in the trenches and done a lot of this kind of quantitative, probabilistic thinking. I'd love to hear you talk about and share the idea of expected value and how that weighs into what we've been talking about.
[0:40:02.2] AD: Yeah, absolutely. So, again, in terms of making decisions under conditions of uncertainty, really, we're talking about these two sources of uncertainty, which are hidden information and luck. So we've got those two pieces of the puzzle. What that means is that for any given outcome, basically, there are more futures that are possible than the one that will actually happen, right?
So for Pete Carroll, there was a future where the ball was fumbled. There was a future where the ball was intercepted. There was a future where the pass was just incomplete. There was a future where the ball was caught. So we can think about all those different futures and then think about the fact that only one of those could occur; in this particular case, it was the interception that occurred. But each of those futures has a certain probability attached to it.
So this is true of anything. It's not just true of things that we naturally think of as quantitative, right? That's not the only place it's true. We think, for example, if we're talking about investing, "Oh! Well, that's obviously very quantitative, so we're supposed to think about these probabilities and think about how we're supposed to calculate those out." But it's really true of anything that we do in life.
So thinking in expected value is a way to hold in your head that the future is uncertain, so that we don't have these really big overreactions and so that we can evaluate our outcomes in a more rational way in order to learn from experience.
So let me just say first of all what thinking in expected value means, and then I'll give you an example from my own consulting of how I sort of wrapped this into a group where probabilistic thinking wouldn't have been necessarily natural to them, and it really ended up improving the way that they work.
So expected value is exactly this idea. Most times, the future is not 100% going to turn out a certain way. There's a variety of different outcomes, each of those outcomes has a probability of occurring some percentage of the time, and each of those outcomes has some sort of return that you might get from it. So we don't want to think about the return as all or nothing. The expected return is whatever the return is multiplied by the percentage of the time that it will occur.
So I'll give you a very simple example. If you tell me that when I flip heads, I'll win $100, that doesn't mean that I'm going to win $100. It means I'm going to win $100 when I flip heads, and I'll flip heads 50% of the time. So that means my expected value on the next flip is 50% times the hundred that you're going to give me. So it's actually $50, not $100. So how can we apply this to something that's less clear-cut than coin flipping?
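The coin-flip arithmetic here is just a one-line expected-value sum. A minimal sketch (the function name is mine, not from the conversation):

```python
def expected_value(outcomes):
    """Sum probability * payoff over every possible outcome."""
    return sum(p * payoff for p, payoff in outcomes)

# Fair coin: win $100 on heads, nothing on tails.
coin_flip = [(0.5, 100), (0.5, 0)]
print(expected_value(coin_flip))  # 50.0
```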
So I work with a nonprofit called Afterschool All-Stars. Afterschool All-Stars is a very big nonprofit. It's national. They provide three hours of structured afterschool programming for over 70,000 inner-city kids per day. Really great. So they're offering these programs.
Now, obviously, one of the things that's true of all nonprofits is that they have a big reliance on grants for funding. So I was working with them, trying to help them deal with their budgeting, because their budgets were kind of all over the place, and also, along with that, trying to help them understand when they should be hiring outside grant writers, which obviously costs money, and how to sort of work their stack in terms of prioritizing grants.
So what I asked them to do was give me a list of all the grants that they were applying to that year and what each grant was worth. What they gave me was a list of all the grants they were applying to and what the grant award amounts were. So that's what the potential award was. So I said, "Okay. That's great. But what I need to know is how much each of these is worth to you. So you have to think about how often you're going to get them and then multiply that by the award amount."
That was a surprisingly hard thing. It took a few back-and-forths to get them to understand what I was asking for, because it's not a way that people normally think. What I got back was, "Well, we can't know what percentage of the time we're going to get it." I said, "Well, that's true. But do you agree that you're not going to get it 100% of the time?" "Yes." "Do you agree you're not going to get it 0% of the time?" "Yes." "Do you agree it's not 50-50?" "Yes." "Okay."
So we'd kind of gotten a yes, a no, and a maybe out of there. "So you're going to be better than anybody else at guessing what that percentage of the time is, because you have the most experience within your organization and with these foundations. So you just have to take your best guess."
So they started doing that. They took their best guess, and then I showed them, "Okay. Now you multiply it by the grant award amount. That actually tells you how much the grant is worth." So they started doing that, and it did a bunch of really good things for them. Number one, because they had to actually estimate the percentage in order to get to the expected value, it made them actually think more about what the actual probability of getting the grant was, which helps them make decisions under conditions of uncertainty, because they start thinking about, "Well, really, what is the luck element and what is the skill element? What can I do to make this grant application better? What information can I find? What can I understand about the grantor that will start to reduce the information asymmetry and get those guesses to be better?" So that's the first thing it did.
The second thing it did was reveal to them that some grants they thought were very high value were actually not so high value, and some grants they thought were kind of low value were not that low value. I can just give you an example. If you have a $100,000 grant that you're going to get 10% of the time, that is not worth as much as a $50,000 grant that you're going to get half the time. A $100,000 grant that you're going to get 10% of the time is 10% times 100,000, which means it's worth $10,000 to the organization.
A $50,000 grant that you're going to get half the time is $50,000 times 50%, which means it's worth $25,000. So it helped them understand what the actual worth of the grants to the organization was. Now when they were doing their budgeting, they weren't guessing so much, because they were taking all of those expected values, all of those expectancies, and putting that into their budgeting for the next fiscal year. So their budgeting was more on point. So that was really good.
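The grant comparison reduces to the same multiplication. A quick sketch with the numbers from the example (the function and variable names are just for illustration):

```python
def grant_expected_value(award, win_probability):
    """What a grant is actually worth: award times the chance of winning it."""
    return award * win_probability

big_long_shot = grant_expected_value(100_000, 0.10)  # $100k grant, 10% chance
small_likely = grant_expected_value(50_000, 0.50)    # $50k grant, 50% chance

print(big_long_shot)  # 10000.0
print(small_likely)   # 25000.0
# The nominally "smaller" grant is worth 2.5x more to the organization.
```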
Then after the fact, when they got or didn't get a grant, they were much less likely to overreact to it. So they were much less likely to start pointing fingers and blaming and saying, "I can't believe that you didn't get that." It also helped them understand when they should hire an outside grant writer, because they could understand whether the hourly rate that they were going to have to pay the grant writer would increase the chance of getting the grant enough to make it worth their while, to make it worth the return on the investment. So they understood that.
Then the other thing that it did that was really wonderful was that, because their focus started really digging down into these better probability estimates, they started calling up the foundations, and not just the ones they didn't get, which is what they used to do. They would call the ones they didn't get to ask what they could've done better, which is sort of our natural response. They started also talking to the grantors, the people who gave them the grants that they did get.
The reason why they were talking to those foundations, which they didn't used to do, was because they really wanted to understand: how much of it was the grant that I wrote? Was I close? Was it a close call? Was it like one of those one-point wins versus one-point losses, or did I really clear the goalpost? Was I right in the center of the net? Because those are really important for understanding why you got the grant. If it was a close call, obviously, you would want to treat that like you didn't get it, so that you can improve going forward, and you want to include that in your future probability estimates.
Most people don't do that because it's painful. We really like to feel that we were right, that we did a good job, and that our decision process was good. When we go to somebody where we got the grant and start probing around and they tell us, "Well, actually, you kind of got lucky," that doesn't feel good, unless you have a focus on accuracy and not a focus on being right. What thinking in expected value does is naturally put your focus on accuracy, so that that's what you feel good about. You feel good about the call.
So once we got all of that going on the development side, they ended up actually taking that way of thinking in expected value to programming as well, thinking about what the success rate of a particular program might be. In this case, it would be how many kids you would serve. Also, with programming, there are some issues where, if you run certain programs, you might be more likely to get a grant.
So they would think about the probability of success of program A versus program B if they have a choice between the two, and how many kids each program would serve, and now they can come up with an expected value for how many kids are going to be well served by program A versus program B. So it doesn't just have to be about money when you're thinking about it. It can be a return on your happiness, for example. How much happiness will I get out of something? How much satisfaction? In terms of health, how much will it affect my health if I make choice A or choice B, and what percent of the time do I think that will actually work out?
I think it's a really valuable way to start approaching the world. It really improves your decision-making and also goes a long way toward helping with this resulting problem, and with this kind of confirmation bias problem, this problem that we all like to affirm the things that we already believe and think really well of ourselves. We don't like to probe around into the things that could actually help us improve our decision-making.
[0:50:00.6] MB: Yeah. I think that's a great example, and it's good to see an application outside of the clearly delineated world of poker. That's always something that I've tried to zoom out and apply more broadly to business and personal contexts. The beauty of poker as a learning laboratory for teaching some of these decision-making concepts is that in many cases you can go and do the math after the fact and say, "Okay. Well, in fact, this was a correct decision, or this was an incorrect decision."
Now, you may not have perfect information in that case, but in many cases, you can run the probabilities and say, "Okay. If they're going to fold 30% of the time, this was a good all-in," or whatever. Whereas in business and life, it's so much harder to cut through that fuzziness between decision and result and really figure out, "Okay. What actually was the correct decision?"
[0:50:53.3] AD: So part of that problem I think comes from the fact that, in poker, it is actually possible to run a Monte Carlo, right? You can take a particular hand and run it enough times to see how that particular decision might work out in the long run. So you can think of hypotheticals that you can actually run and see how they'll go.
I mean, poker ends in a cloud of no information a lot of the time. You don't end up seeing the cards. But you've definitely played hands like that, or you can have some idea. For example, you can do these counterfactuals: you can imagine the person is going to fold 30% of the time, or 40% of the time, or 50% of the time, or 20% of the time, and you can figure out what their folding rate would have to be in order to make it a winning decision.
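That fold-rate counterfactual can be sketched as a tiny Monte Carlo. This is a simplified model I'm supplying, not anything from the conversation: assume the bluff has zero equity when called, so the only question is how often the opponent folds, and the pot and bet sizes are hypothetical.

```python
import random

def bluff_ev(fold_rate, pot=100, bet=60, trials=100_000):
    """Monte Carlo estimate of a bluff's EV: win the pot when the
    opponent folds, lose the bet when called (zero equity assumed)."""
    total = 0
    for _ in range(trials):
        if random.random() < fold_rate:
            total += pot   # opponent folds, we take the pot
        else:
            total -= bet   # opponent calls, we lose the bet
    return total / trials

# Closed-form break-even fold rate for comparison: risk / (risk + reward).
break_even = 60 / (100 + 60)  # 0.375
```

If the opponent folds more often than the break-even rate, the simulated EV comes out positive; below it, negative. Varying `fold_rate` is exactly the "30%, 40%, 50%" exploration described above.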
So there are ways to explore in there that are more precise. But with a lot of decisions that we make, we can't do that, because the decision is somewhat unique and the probabilities are more open and unknown. I think that's where becoming really information hungry and having a really good group of people around you to offer their perspectives becomes really important, because even if your decision is unique to you, pieces of the decision are the kinds of decisions that other people have made.
So we can think about bringing other people into the process as a way to kind of run a Monte Carlo, because they're going to bring their own experiences, their own evaluation of the process, their own pieces of the puzzle, and give their perspectives on your decision in a way that's going to help you cobble together something that looks like the decision that you made, so that you can narrow things down. So you're not just guessing as much. So you can actually get some clarity on what's worked or hasn't worked for other people. What their view is. What their perspective is. What weaknesses they might see. What stress points they might see that you wouldn't otherwise see. Because for any decision you make, lots and lots of people are making a decision that's sort of like it.
So if you can bring their experience to bear, it's a little bit like being able to run that: "Well, if I had done this, or if I had done this, or if I had done this, how do I think it would've turned out?" You might not be able to run it on a computer, but you can run it with other people.
[0:53:11.0] MB: I think you made another really good point about that and how to apply this in broader contexts, in the sense that it doesn't have to be a perfect, exact probability. Charlie Munger from Berkshire Hathaway has the famous saying, or I think Warren Buffett says the same thing, that they would rather be roughly right than precisely wrong.
So the whole idea is, can you get a sense of it? I think your nonprofit example really highlights this. We don't know if it's exactly a 35% chance of this grant closing, but we know that it's between maybe 30 and 40%. That's a rough estimate, and even if it's not a perfect number and the probability is not exact, you can still make really effective decisions using this lens of expected value. So many people get caught up on that need to have the exact probability, when in reality you don't need it.
[0:54:03.6] AD: I would actually argue, to that point, that not trying at all, taking the choice of not trying because you think that you can't come up with an exact answer, is really, really disastrous. If I had a choice between stabbing at an exact answer and not trying at all, I would take stabbing at an answer, because at least I'm thinking about it, right? Even though I should be thinking about a range, right? I should try to be roughly right, as you said. I should recognize that it should be a rough estimate.
But if I'm at least trying to come up with an exact answer, I'm trying. I'm recognizing that it is probabilistic in nature, and because I'm trying to come up with an exact answer, I'm at least looking for the information that would allow me to get there. Now, it's a much better step, to your point about the Charlie Munger and Warren Buffett statement, to recognize that you're actually going to have a rough estimate and that you have to be comfortable with that. That it's okay that you have a rough estimate.
The fact that you can't get to an exact number, like 56%, doesn't mean that you're just supposed to say, "Well, screw it. I shouldn't even try, because I can't. I don't know that 2+2 is 4. All I know is that 2+2 is somewhere between three and five." Well, okay. That's way better than not trying at all, because if you don't try at all, the whole range is available. Then 2+2 might be infinity, which we know is so silly.
By the way, we do this in math a lot, if you think about it, right? If I were to say to you, "What's 156×243?" and you said, "Whoa! I don't know. Not off the top of my head," I could actually get you to think this way, because I could say, "Well, do you think the answer is three?" Of course you'd say, "Well, no. That's ridiculous." I'd say, "Do you think the answer is 225 million?" You'd be like, "No. Of course not."
So I can start to get you in the right range. "Do you think it's a hundred thousand?" "No." "Do you think it's 342?" "No." We can start to get down into that range where we're going to land somewhere in the tens of thousands, right? We can get ourselves to a place where we recognize, "Well, it has to have this many zeros, because I know what 100×200 is," right? You start to think about other things that you know, easier problems, that you can apply to it. I can now start to get you to narrow the range down even if I can't get you to exact.
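The 156×243 bracketing exercise, written out (a trivial sketch, just to make the bounds concrete):

```python
# Bracket the product with easier nearby products.
low = 100 * 200   # both factors rounded down
high = 200 * 300  # both factors rounded up
exact = 156 * 243

print(low, exact, high)  # 20000 37908 60000
assert low < exact < high  # far from 3, and far from 225 million
```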
In that case, the kinds of decisions that you'll make around whatever that number is are going to be a lot better, because you're not going to be making decisions as if the number is three, and you're not going to be making decisions as if the number is 225 million. You're going to be in the right range, and that's going to get you a lot farther along.
[0:57:01.6] MB: I think that comes back to one of the things you talked about much earlier in the conversation, which is another really important point that we harp on a ton on the show, which is the idea that even these incremental, small improvements in your decision-making cascade through everything that you do and impact your life across a vast array of arenas. Because, really, fundamentally, life is just a series of decision after decision after decision.
[0:57:26.7] AD: So there are two things to think about. I actually got asked this in an interview once: "Well, how does improving your decisions really help if a lot of the decisions you make are one-timers?" So you can think about, well, hopefully a one-timer would be something like getting married. I think for the majority of people now, it's a two-timer, but let's call that a one-time decision, something like getting married.
So how can it help if you're only doing it once? You just answered that question, which is, "Well, yeah, but if you're improving the quality of your decisions, you make thousands and thousands and thousands of decisions throughout your life."
So if you improve the quality of each of those decisions, even if the decisions are different over the course of your life, your outcomes are just going to be better, because your decision quality is going to be better, even if it's a decision that you only make once. You can think about improving decision quality across a particular decision that you get to run 10,000 times; over those 10,000 times, you'll be able to realize the gain. But you could also think about it as more horizontal as opposed to vertical, across all of your decisions. Even though the decisions are somewhat different, if you're improving your decision quality, you'll see those gains start to be realized. I think that's the first important thing to understand.
The second important thing, and I think this goes back to what we were just saying about people being afraid to think about how often an outcome might occur because they think they can't be exact, is that people think anything less than perfection is somehow failure. It's hard for them to embrace the idea that, "No. If I just get a little bit better, it's okay. I don't need to measure myself against perfection."
So I don't need to think, "Oh! Am I getting it exactly at 56%?" It's, "Am I not trying at all?" I'd rather be at between 20 and 80% than not trying at all, because even that's better, and it's going to get me a little bit of the way.
So I like to give this example from poker. If I am not working with a group and I'm not really trying to improve these kinds of things, I'm not really trying to de-bias, I'm not really trying to think about how to learn from my outcomes. I'm processing the world in the way that I was sort of born into it, not really trying to move my decision-making at all.
Let's say that out of 100 opportunities that come my way, when I'm playing poker, maybe I catch five really good learning opportunities. I’m missing 95% of the learning opportunities as my natural self and I’m catching five of them.
Now let's say that I start to do this really great work. I find some people that I can really deconstruct decisions with, and I start to think about how to be a better decision-maker, and now let's say I improve so that I'm catching 10 opportunities to learn out of the 100 that cross my path. I think a lot of people look at that as a failure. They say, "Well, you're missing 90% of the opportunities to learn." I say, "No. That's a tremendous success, because new Annie is going to crush old Annie. Old Annie was only catching five, and new Annie is catching 10."
So I suppose you can look at it as new Annie missing 90%, but that's not the way I look at it. I look at it as Annie just doubled her opportunities to learn, and, obviously, that version of Annie is going to crush, in terms of her ability to win, the old version of Annie who wasn't trying at all.
So I think we really need to understand that the goal is to make these small changes. Now, look, if you can make big changes, that's great, but I think that's unrealistic. Thinking, "Oh! I'm going to get to this perfect state," actually inhibits us from being able to move forward, because we will view ourselves through that lens of failure.
Whereas any time that we do catch ourselves, any time that we catch ourselves being biased, catch ourselves equating the quality of the outcome with the quality of the decision, or engaging in hindsight bias, or "I told you so," or black-and-white thinking, or not thinking probabilistically, and we catch it and we reverse it, that's a success. Even if we missed a whole bunch of stuff before that, if we catch something that we wouldn't have otherwise seen, it really makes a difference.
So I try to think about it as if we have this distribution. Let's call it just a normal distribution of the quality of our decisions. Our goal is to get more of our decisions out at the right tail, out at the good end of the tail. I think that when you do this work, two things happen. One is you are more likely to end up with more decisions out at the right side of the tail. Not all of them, right? But you will.
Then through this kind of training, you start to change what it is that you get your reward from. You start to get your reward from saying, "Wow! I really think I butchered this decision," or "I think I might've made a mistake," or "I think this other person, I don't really like them, but I think I have to give them credit for this." When that starts to be what you get your high from, it will slightly start to shift that distribution to the right. Just a little bit, but that little bit is going to have huge returns for your whole life.
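The "shift the distribution just a little bit" point can be made concrete with a toy simulation. All the numbers here are made up for illustration: decision quality is modeled as a normal distribution, the "right tail" is decisions above an arbitrary quality threshold, and the training effect is a small 0.2 shift in the mean.

```python
import random

random.seed(0)

def tail_fraction(mean, threshold=1.5, n=100_000):
    """Fraction of simulated decisions whose quality exceeds the threshold."""
    return sum(random.gauss(mean, 1.0) > threshold for _ in range(n)) / n

before = tail_fraction(mean=0.0)  # untrained decision-maker
after = tail_fraction(mean=0.2)   # distribution nudged slightly right
print(f"right-tail share: {before:.1%} -> {after:.1%}")
```

Even though the mean only moves by a fifth of a standard deviation, the share of decisions landing in the right tail grows by nearly half, from roughly 6.7% to roughly 9.7%, which is the sense in which a small shift compounds into large returns.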
[1:02:42.3] MB: I think that's such a critical point, this idea of re-orienting yourself, which you mentioned very early on in our conversation, so that what makes you feel good is the pursuit of truth and getting to what's true, as opposed to proving yourself right. That has a massive, fundamental impact across all of the results that you see in your life.
[1:03:03.5] AD: That sort of mindset shift matters because in normal social conversation, like if I'm just talking to somebody who isn't in my decision pod or whatever, and we're just at dinner, or at a cocktail party or something like that, the normal interaction is that if I express some sort of belief that is not true, the other person generally won't correct it. They won't offer the other piece of information.
There are really two reasons why that is. One is that if they really believe that you're wrong, they usually don't want you to feel bad, right? Because you're at a cocktail party. They're not looking to get in an argument with you or whatever. So they don't really want to embarrass you. They don't want to make you feel bad. They don't want to get in an argument with you. So they usually hold the opinion to themselves. Or they might think that they themselves are wrong, and so they don't offer up their information because they don't want to be embarrassed.
So you've expressed something with great confidence and so now they question their own belief, and so they won't actually offer up the information that you have. When you start to engage with people in the way that we’re talking about where it's around accuracy and we have an agreement to accuracy. It reverses that, because what you know is that if you don't tell me information that you have, that’s what’s actually doing harm to me. That it's not about like, “Oh! I might hurt her feeling because I’m telling her information that would have to cause her to calibrate her opinion.” It’s that you know that if I found out that you had information that would have helped me develop a more accurate view of the world and you didn't tell me it, that would be the harm that you would cause me. So that’s as beautiful thing that really happens when you create a really productive decision pod.
[1:04:36.7] MB: So kind of tying this up, for somebody who’s listening to this conversation that wants to start to improve their decision-making, start to implement some of the ideas we’ve talked about today, what would you sort of give them as a piece of homework or kind of a starting step towards implementing some of these processes and ideas?
[1:04:54.3] AD: Well, obviously, read my book. So, yeah. I mean, I think that the biggest, most important piece of homework is to go find some people who are looking to be more open-minded, to be more constructive about the way that they hear dissent, who really do seem to want to be better decision-makers. You can find them as coworkers or as friends, maybe members of your own family, and sit down and really write down an agreement with them.
Say, "We're going to be in this together, and here's what we're agreeing to. We're going to hold each other accountable to bias. We're going to try to not be defensive when people challenge our opinion. We're going to pat each other on the back for things that signal that we're trying to be accurate as opposed to trying to be right. We're going to be open-minded to diverse opinions and we're going to open ourselves up to people who disagree with us," right?
Go look at your Twitter stream right now and see if you're only following people who have the same opinions as you, or if you're really, really paying attention and following with an open mind people who disagree with you, and go fix that. Go fix your social media if your social media is all on one side, because you should be following people who disagree with you.
Because the opinions that disagree with you are actually the most valuable opinions for you. They're the ones that have the most to teach you, because you already know why you think you're right. What's most valuable is people who might point out to you why you might be wrong, and you can't get that if you're only listening to voices that agree with you. So go fix that and go find some people to do that work with you. I would say that that would be piece of homework number one.
Piece of homework number two would be to start listening to yourself, and you can also do this as a group exercise, for the things that might come out of your mouth that signal that you're engaging in this kind of biased behavior. So any time that you declare things with certainty, using the words wrong or right, saying "I should have known," or "you should have known," or "I told you so," or "why didn't I see that coming?" In chapter 6 of the book, I've got a list of some of those kinds of things to get you started.
But try to listen for those things that come out of your mouth that might get you to start thinking, "Well, I'm not really thinking in [inaudible 1:07:09.3]. I'm thinking with certainty. I'm thinking that I should have been able to see what was happening, when obviously I couldn't," and really write those down and pin those somewhere. Put those up on your wall or something so that you're aware of those kinds of things that might come out of your mouth, or those kinds of thoughts that might go through your head, so that when you have those thoughts, it will actually trigger you to step back and really examine whether that was true, and signal that maybe you should ask, "Well, would I bet on that?"
So when you say, "Oh, I should have known it was going to turn out that way," and you know that that's on your list, you step back and say, "Well, wait. Would I really bet on that? Would I bet that I should have known?" so that you can go back and start to think about what the decision quality really was. I think that that's a really useful exercise, and you can do it with a group and share the list with the group so that they can point out when you're saying things like that.
Then the third piece of homework that I would say is to really try this thing of discussing a decision with one person and telling them that it turned out poorly, and discussing the same decision with another person and telling them it turned out well, and just listen for the differences, because I think that that's one of the most eye-opening pieces of information that you can ever get, when you see how different the analysis is. Make sure that when you're doing it, you're not talking about something that's really obvious, like running a red light or running a green light. Make sure it's really a more Pete Carroll kind of decision, something about a strategy or a tactic that you applied, or a tough decision that you had in your life. Go talk to one person, lay out the decision, and make up a great outcome for it. Then with another person, make up a bad outcome, and really just hear them out. I think that there's nothing that will show you more how much you need to keep outcomes away from people when you're trying to get advice.
[1:08:58.4] MB: And for listeners who want to dig in, learn more about you, read the book, etc., where can people find you and the book online?
[1:09:04.6] AD: Sure. So if you look at annieduke.com, everything is there. So my book is definitely there. You can order it there from whatever your preferred bookseller is. I also put out a newsletter every single Friday, and the newsletter goes through things that are of the moment that apply to this kind of thinking.
As an example, in the newsletter that's going to be coming out tomorrow, I have a piece on a Bloomberg article about a simulation of the World Cup that had Germany as the most likely team to win. Obviously, there are a lot of teams, so Germany was at 24%. When Germany got knocked out, Bloomberg wrote an article about how the simulation was wrong.
So I usually have about five pieces in the newsletter. For example, this week, that's one of the pieces, just talking about how problematic it is that they declared the simulation wrong when the simulation literally said that three times out of four, Germany was going to lose. It just happened that Germany was the most likely, and this is part of how you can see out in the world the way that this sort of need for certainty, and the way that we are black-and-white thinkers, shows up in our ability to really understand outcome quality and decision quality.
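The point about the World Cup simulation can be reproduced with a small Monte Carlo sketch. The win probabilities below are hypothetical, except that Germany's 24% matches the figure mentioned above; the other teams' numbers are made up so the probabilities sum to one.

```python
import random

# Hypothetical win probabilities; only Germany's 24% comes from the
# Bloomberg example discussed above. "Field" lumps all other teams.
win_probs = {"Germany": 0.24, "Brazil": 0.20, "Spain": 0.16,
             "France": 0.14, "Field": 0.26}

def simulate_winner(rng):
    """Draw one tournament winner according to win_probs."""
    r = rng.random()
    cumulative = 0.0
    for team, p in win_probs.items():
        cumulative += p
        if r < cumulative:
            return team
    return "Field"  # guard against floating-point rounding

rng = random.Random(42)
trials = 100_000
germany_wins = sum(simulate_winner(rng) == "Germany" for _ in range(trials))
print(f"Germany wins in {germany_wins / trials:.1%} of simulated tournaments")
```

Germany is the modal outcome, yet it loses in roughly 76% of runs, so a Germany exit is entirely consistent with the model. Calling the simulation "wrong" after one realization is exactly the conflation of outcome quality with decision (or model) quality being described.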
So I'll take examples from anywhere, from news in sports, to research in behavioral economics and economic psychology, to politics, looking at how you apply these kinds of concepts to business. If you want to subscribe to my newsletter, you can go to annieduke.com, and there are links to archives of old newsletters so that you can read what I've written before you decide to subscribe.
Then if you do decide to subscribe, you'll get that every Friday. You can also go look at my foundation, which is howidecide.org, where we try to bring these kinds of decision skills to youth, with a special focus on inner-city youth. So I hope people explore that. Then you can also follow me on Twitter @AnnieDuke.
[1:11:18.0] MB: Well, Annie, thank you so much for coming on the show and sharing all this wisdom about decision-making and thinking more effectively. That was a really fascinating conversation, and we're glad to have had you on.
[1:11:27.3] AD: Well, thank you for having me. I really enjoyed it.
[1:11:29.9] MB: Thank you so much for listening to the Science of Success. We created this show to help you, our listeners, master evidence-based growth. I love hearing from listeners. If you want to reach out, share your story or just say hi, shoot me an email. My email is matt@successpodcast.com. That's M-A-T-T@successpodcast.com. I'd love to hear from you, and I read and respond to every single listener email.
I'm going to give you three reasons why you should sign up for our email list today by going to successpodcast.com and signing up right on the homepage. There's some incredible stuff that's only available to those on the email list, so be sure to sign up, including an exclusive curated weekly email from us called Mindset Monday, which is short, simple, and filled with articles, stories, and things that we found interesting and fascinating in the world of evidence-based growth in the last week.
Next, you're going to get an exclusive chance to shape the show, including voting on guests, submitting your own personal questions that we'll ask guests on air, and much more. Lastly, you're going to get a free guide we created based on listener demand, our most popular guide, which is called How to Organize and Remember Everything. You can get it completely for free, along with another surprise bonus guide, by signing up and joining the email list today. Again, you can do that at successpodcast.com, sign up right at the homepage, or if you're on the go, just text the word "smarter", S-M-A-R-T-E-R, to the number 44222.
Remember, the greatest compliment you can give us is a referral to a friend, either live or online. If you enjoyed this episode, please leave us an awesome review and subscribe on iTunes, because that boosts the algorithm, helps us move up the iTunes rankings, and helps more people discover The Science of Success.
Don't forget, if you want to get all the incredible information we talk about in the show, links, transcripts, everything we discussed and much more, be sure to check out our show notes. You can get those at successpodcast.com, just hit the show notes button right at the top.
Thanks again and we'll see you on the next episode of the Science of Success.