Today, we have another incredible guest on the show, Dr. Colin Camerer. Colin is the Robert Kirby Professor of Behavioral Finance and Economics at the California Institute of Technology. A former child prodigy, he received his BA in quantitative studies from Johns Hopkins University at the age of 17, followed by an MBA in finance from the University of Chicago at the age of 19, and finally a PhD in Behavioral Decision Theory from the University of Chicago at the age of 21. His research is focused on the interactions between cognitive psychology and economics. Colin, welcome to the Science of Success.
Dr. Colin Camerer: Thanks for having me, Matt.
Matt: We’re very excited to have you on here. Obviously, you have a fascinating background. I’d love to hear the story of how you got started.
Dr. Colin Camerer: Okay. One of the early experiences, actually, was when I was 12, I started to go to horse races [INAUDIBLE: 0:03:28] with my dad and a friend of his who was interested in the stock market. I was fascinated by the fact that 12 horses come out on a race track, and they all look pretty physically fit, and you could buy a big newspaper called ‘The Daily Race Informant’ that tells you all about...facts about which horses had won before, and who was the trainer, and what was the sire and the dam—that’s the mom and the dad—and how well had they done. Somehow these markets were able to compress all of this information into a number, which was the odds. So, I was really interested in how that process worked. When I went to college I studied math, and physics, and psychology, and I was kind of searching around for a science that I thought had some mathematical structure, and some real scientific rigor, where it was about people. So, I ended up studying economics.
Then, I went to graduate school at the University of Chicago to get a PhD, and at that time the popular view about financial markets was that you can’t beat the market, because if there’s any information that’s easy to find about the earnings of a company, or what the CEO was up to, people are highly motivated to find that, and they’ll get it, and they’ll buy and sell and move the price around until the prices are such that there’s no way to easily beat the market based on something that’s easy to find out. That’s called the efficient markets hypothesis. I was kind of skeptical about that because, well, first, a lot of people invest their funds either themselves, or with hedge funds, in what’s called active management. People are trying to beat the market, and people are quite happy to pay 1% or 2% of their money in what are, you know, fairly high fees. So, a lot of investors think somebody can beat the market, which the efficient markets hypothesis says shouldn’t be the case. So, I was kind of looking around for something different, and at that time there were a couple of psychologists, Hillel Einhorn and Robin Hogarth, and they were at the beginning of a wave of people who were interested in human judgment and decision making. Their approach was very related to what Tversky and Kahneman later began to study, which was called ‘heuristics and biases’. The idea was: maybe instead of making extremely complicated calculations, and using all the information, weighing it just perfectly, people instead use simple shortcuts like what springs to mind in memory, or what’s visually in front of them on a computer screen. So, that was the beginning of what came to be later called behavioral economics.
So, I got my PhD, and I was one of the first people to really get a PhD in this field of decision theory, or decision economics, and then I went to...ended up at the Wharton School, where I happen to be today—I mean right now, as we’re talking, not as a faculty member—and they actually were encouraging about the idea of trying to study the psychology...essentially the limits on how much information people can process effectively, and how much willpower people have, and how selfish people are, or how much they care about others. None of those things were really incorporated into economic theory at that time, so that was the beginning of what we call ‘behavioral economics’, or kind of psychologizing economic theory. That was around the mid-1980s.
So, I was interested in a bunch of studies involving psychological shortcuts and how they might make a difference in what people do. One of the things we studied is called ‘framing effects’, which means...you know, you can describe something in two different ways, and even though they’re mathematically equivalent, it might either evoke different emotions, or it might change people’s focus of attention so that they treat them differently. For example, the FDA, I think, at one point required salad dressings to label how much fat content they had in terms of percentages, not just on the back. So, suddenly you pick up a salad dressing, and it would say, “6% fat,” or “8% fat,” or “3% fat,” and that’s quite different than if you had said...6% fat is a lot different than 94% fat free. You know, 94% fat free sounds pretty great. 6% fat sounds more, “Ooh, yuck.” So, even though those two are mathematically equivalent statements (you know, 6% and 94% add up to 100%), it seemed to shift people’s focus of attention and actually affect choices. Those are the kind of things we began to study in behavioral economics.
Matt: That’s really fascinating, and I know that you specifically focus a lot on the ideas around game theory, which to some listeners may seem sort of like an esoteric field of knowledge that doesn’t apply to their daily life, but I’m curious: Could you kind of explain some of the basics of game theory and how it could actually apply to interactions that people have every day?
Dr. Colin Camerer: Sure, so game theory’s a very powerful mathematical system. It’s probably most developed in economics, but also a little bit in theoretical biology and political science. So, a game, despite the frivolous name, is a mathematical object, which is: a set of players, where each player’s going to choose a strategy, given some information they have about, say, what’s going to happen in the future, or maybe what the other player thinks, or how valuable something is if they’re bargaining. The players have strategies and information. When they all choose their strategies, there are going to be outcomes. The outcomes could be biological fitness like reproduction, it could be territory in a war, it could be profits for companies, it could be status for people, or for animals fighting for territory. Then, the only mathematics really comes in because we assume that the players can mathematically rank how much they like different outcomes. That whole system is called a specification of a game. What game theory does is say: if the players have these strategies, and these outcomes, which they value numerically, what are they actually going to do? The interesting thing is to what extent players can figure out what other players are likely to do by kind of guessing. I should add that the players could be animals that have strategies which are kind of innate strategies, like degrees of aggression. Or they could be much more deliberate. It could be how much a telecom company wants to bid for a slice of phone spectrum that’s being auctioned off by the FCC. That was an actual thing that happened, not only in the US with the FCC, but in many countries where valuable phone spectrum was auctioned off, and tens of billions of dollars were actually bet, so the companies then had to decide: What do I actually bid?
They employed a bunch of game theorists to kind of tell them: given the rules for the game, should they bid this much or that much, and what do you think other people will bid? I don’t want to outbid them by too much and leave money on the table, but I don’t want to underbid, get outbid, and lose. So, those are the kinds of things game theorists used to study. What I brought to the analysis was this: the standard idea in game theory...I should say, the standard mathematical thing that’s computed, and that’s taught in every course, and that’s the homework on the final exam, is what’s called a ‘Nash equilibrium’, named after John Nash. Equilibrium is a word that’s kind of taken from physics as sort of a resting point. The idea is: in equilibrium, every player has a belief about what the other players will do, and their beliefs are correct, so they’ve somehow figured out what other players will do. In addition, they’re going to choose a best response. They pick the strategy which is the best one given this belief. One way to think of an equilibrium is: suppose you played tic tac toe lots and lots of times, and if you ever made a mistake you corrected your mistake the next time. After lots of play, everyone would know the strategies of the other players, and they would be choosing the best strategy for themselves, and it would be a kind of boring game, but mathematically it would have a nice precise structure.
So, what we started to look at was non-equilibrium, or pre-equilibrium, game theory, meaning: what if people haven’t figured everything out yet? What kind of things could happen then? I’ll give you a simple example that’s not too hard to think about numerically, which we call ‘the beauty contest game’. Let me explain it first, and then I’ll say where that name comes from. In this game, which we’ve actually done in lots of experiments for money, everybody picks a number from 0 to 100, and we’re going to collect the numbers on a piece of paper, or you’re going to type them in a computer, or you’re going to send them on a postcard to the ‘Financial Times’—who actually did this a few years ago—and we’re going to collect all the numbers, 0 to 100. We’re going to compute the average number, and take 2/3 of the average. Whoever is closest to 2/3 of the average is going to win a fixed prize. So, everyone wants to be a little bit below average, knowing that everyone else wants to be a little bit below average. If you figure out the mathematical equilibrium—this is the kind of thing that would be on a final exam in a course—the equilibrium is the only number such that, if everyone believed everyone else would pick it, they’d all be best responding, and their beliefs would all be correct. That’s zero. When you actually do the experiment, what happens is you get a bunch of people that pick numbers anywhere from 0 to 100—60, 40, you know. Let’s say the average is around 50. There are a number of other people who seem to think: I don’t know what people will pick. It could be anywhere. So, let’s say they’ll pick 50, so I’ll pick 33, which is 2/3 of 50. If you think others are going to randomly choose in the interval of numbers, and you’re trying to match 2/3 of the average, you’ll pick 33. Other people do what we call a ‘second level of thinking’. This is part of what’s called the ‘level-k’ model of behavior.
They’ll say, “Well, I think other people will think other people will pick 50, and those people will pick 33. I’m going to outguess them and pick 22.” You can do a couple more steps of thinking, but at some point you’re being, as the British say, too clever by half. If you actually play this game, and you pick 2/3 of 22, or 2/3 of 2/3 of 22, you’re actually picking a number that’s too low, because you don’t want to pick the lowest number, you want to pick 2/3 of the average number. So, typically what you see is an average number around 33 or 22, which is far away from the Nash equilibrium prediction, which is that everyone will somehow figure out how to pick 0. That’s an example of where psychological limits on strategic thinking give you a better prediction of what people actually do. By the way, as you can imagine, if you play this game again and again, what happens is: the first time you’re playing in a group, the average might be 28, and 2/3 of that is about 19. So, the winner is Matt Bodnar, who picked 19, and everyone cheers that, and next time they think: Wow, I should pick maybe 2/3 of 19, or maybe I should think other people will pick 2/3 of 19. So, if you do it over and over you do get numbers that are moving in the direction of the Nash equilibrium prediction. The idea of an equilibrium is actually often a good model for where a system with a lot of feedback, and learning from trial and error, is going to move over time, but it isn’t necessarily a good prediction of what will happen the first time you play, even if it’s for very high stakes. These games have often been done with different groups of people. It doesn’t seem to make that much difference if you are really good at math, or if you played chess a lot, or anything like that. Most people will pick numbers somewhere between, say, 10, or 15, or 22, or 33, the first time they play.
So, we’ve developed a theory of that type of thinking called ‘level-k reasoning’, which has these kinds of steps of thinking. The main idea is: the steps don’t go that far. There’s a little bit of strategic thinking, but it’s limited.
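The level-k ladder described above can be sketched numerically. Here is a minimal Python illustration; the population shares used below are loosely based on the rough percentages Camerer gives later in the conversation, and are assumptions for the example, not fitted estimates from his research.

```python
# Minimal sketch of level-k play in the 2/3-of-the-average game.
# The level shares are illustrative assumptions, not fitted estimates.

def level_k_guess(k, anchor=50.0, factor=2/3):
    """Level 0 anchors at 50; each higher level best-responds to the level below."""
    guess = anchor
    for _ in range(k):
        guess *= factor
    return guess

# Levels 0..3 produce the 50 -> 33 -> 22 -> 15 ladder described above.
guesses = [round(level_k_guess(k), 1) for k in range(4)]
print(guesses)  # [50.0, 33.3, 22.2, 14.8]

# With a population of mostly level-1 and level-2 thinkers, the group
# average lands near 30, far from the Nash equilibrium of 0.
shares = {0: 0.2, 1: 0.4, 2: 0.3, 3: 0.1}
avg = sum(share * level_k_guess(k) for k, share in shares.items())
print(round(avg, 1))
```

The winning number would be 2/3 of that group average, which is why very low guesses like 1 lose badly in a group playing for the first time.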
Matt: That makes me think of a couple things. One is: when I initially heard the beauty contest game, or as I also coined it in my mind, ‘the 0 to 100 game’, I was too clever by half, because my initial guess was the number one, which, as you showed in some of your research, was a terrible guess, because people don’t adjust close enough to the equilibrium to make that meaningful. The other thing is: it was a sad day, I think in 7th grade, when me and my buddy discovered that there are only like three or four moves in tic-tac-toe, and basically every single game should end in a cat’s game, a tie.
Dr. Colin Camerer: Yes. One thing that’s interesting is: some of the games that are actually really fun to play, like rock paper scissors, which is similar to tic-tac-toe—it’s a simple game and you can kind of figure it out—are kind of boring from the point of view of mathematical analysis, but they’re not that boring to actually play. Probably, it’s because people aren’t always in equilibrium, and they’re trying to chase patterns and see the things that other players are doing. If you were to design video games, or a game show on TV, it’s not clear that equilibrium game theory would be as helpful as something that would incorporate more of a concept of human nature, and fallibility, and what’s fun and engaging.
Matt: I’m curious, actually, that makes me think of another question: Rock paper scissors, is that a game, from sort of a game theory standpoint, that has an equilibrium?
Dr. Colin Camerer: Yes. Actually, one reason [INAUDIBLE: 0:16:10] equilibrium is very powerful—and John Nash shared the Nobel Prize for this discovery—is you can show mathematically that if a game is finite, in other words there aren’t infinitely many people bidding or playing, and they only have so many strategies they can choose, like rock paper scissors, or so many numerical bids in an auction, even if it’s billions, as long as it’s not infinity, then there always exists an equilibrium. In rock paper scissors, the equilibrium is in what we call ‘mixed strategies’. That means that if you play rock every single time, that’s not a best response, because someone will figure it out and beat you with paper to cover rock. So, the only equilibrium is one in which people choose rock, paper, and scissors about 1/3, 1/3, 1/3 of the time. Again, when people play, what happens is: usually people won’t play explicitly in that random way, although you could—it wouldn’t be very interesting—and then what happens is people try to pick out patterns and ask, “Can I predict what you’re going to do next time?” Associated with this is the fact that, roughly speaking, when you ask people to randomize, like if I tell you: imagine flipping a coin 100 times in a row, and write down a series of what you think 100 coin flips might look like, people are actually not that good at generating a truly random sequence. The main thing is they kind of over-alternate. So, if you wrote down head, head, head, then you’d write down tails, and you would actually have too many runs. You’d have strings of only a couple heads, and a couple tails, and in a truly random sequence of 100 you should have about 50 runs, but usually people produce about 65 runs.
In my cognitive psych class I used to do this: I’d turn around, and I’d ask half of them to actually flip a coin, and half of them to simulate, to imagine doing it, and then I would ask them to hand in their index cards, and I would see if I could tell whether each was human-generated or truly random. So, people aren’t typically—unless there’s special training or special tools—that great at randomization.
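The over-alternation effect is easy to see in a quick simulation. This is an illustrative Python sketch, not the classroom exercise itself; the 65% switching rate below is an assumption chosen to mimic the roughly-65-runs pattern described above.

```python
import random

def count_runs(seq):
    """Count maximal runs of identical symbols; e.g. H,H,T,T,H has 3 runs."""
    return 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)

random.seed(0)

# A truly random sequence of 100 fair flips averages about 50.5 runs.
fair = [count_runs([random.choice("HT") for _ in range(100)]) for _ in range(2000)]
avg_random = sum(fair) / len(fair)
print(avg_random)  # close to 50.5

# A simulated "human" who switches sides 65% of the time over-alternates,
# producing noticeably more runs (about 65 on average).
def over_alternator(n, switch_prob=0.65):
    seq = [random.choice("HT")]
    for _ in range(n - 1):
        flip = "T" if seq[-1] == "H" else "H"
        seq.append(flip if random.random() < switch_prob else seq[-1])
    return seq

human = [count_runs(over_alternator(100)) for _ in range(2000)]
avg_human = sum(human) / len(human)
print(avg_human)  # close to 65
```

Counting runs like this is essentially how one could tell the imagined coin flips from the real ones on those index cards.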
Let me backtrack to one other thing about game theory. Another practical application that we studied, which everyone, I think, can resonate with, or appreciate, involves what’s called a ‘private information game’. So, private information is a wrinkle which you don’t have in rock paper scissors, and you don’t have in the 0 to 100 game, which is that one person knows something the other people don’t know, but everyone knows that there’s private information. For example, the kind of game we studied—and here we go away from the simple clear games in the lab to the messy world—involved movies. The idea is: we assume that the people who produced the movie, and have watched it, and have seen the entire movie, not just a short trailer in an ad, or a short clip that you might show on a TV show for promotion, have a better idea of the quality than movie goers. So, if people have seen it they can say, “On a 0 to 100 scale, this is going to be an 82,” or, “41.” What we studied we called ‘cold opening’, which means: from about 2000 to 2009 we looked at all movies in the US that opened on a lot of screens, which is 300 or more screens, so that didn’t include some smaller, independent films, but most of the movies are in our sample, and about 10% of the time the movies were not shown to movie critics in time for them to write a review. In the early part of our sample, around 2000, the reviews would be in a newspaper like ‘The New York Times’, or ‘The LA Times’, or ‘The Chicago Tribune’. Nowadays, the newspapers have become a lot less influential, because trailers leak, and Rotten Tomatoes and lots of other websites are influential in sharing their opinions about what movies are good. But during the period of our sample the newspapers were kind of a big deal.
So, about 10% of the time the movies were not shown to critics in time for there to be a review, and the way you can tell is: if you open up the Friday newspaper—again, this is kind of historical, big game theory—in Los Angeles, you would see an ad for, say, “Ondine”, which was a Colin Farrell movie, and it would have a bunch of blurbs that would say, “Marvelous. Four stars,” from Manohla Dargis in ‘LA Weekly’, for example. The way those stars got there was that a version of the movie was sent to the critics a couple days early, and they would prepare the reviews, and then they would give them to the studios so they could put them in the print ads on Friday. So, in the Friday paper you’d see a print ad that, if the review was flattering, had a quote from a critic, and then in the same section of the newspaper you’d see the critic’s review that would say, “I loved this movie, Ondine.” Meanwhile, the movie “Killers”, with Katherine Heigl and Ashton Kutcher, was not shown to critics in time, so if you opened up the print ad, there’s a picture of the two stars, “Killers”, the name of the director, and it has no quotes from critics at all, because critics weren’t allowed to see it. Of course the obvious intuition is: the critics are going to tell everyone how terrible the movie is and then people won’t go see it, but game theoretically, that’s actually a little bit surprising, because movie goers should be able to infer: if there’s no review, it’s probably because it’s really bad, and they didn’t give it to the critics. In this case, no news is bad news. If you don’t see a review, it’s probably because when the reviews eventually come in—usually such movies are reviewed later, like on a Sunday or a Monday—they’re going to be pretty bad. In fact, empirically, that’s what happens.
So, we collected data from Metacritic...Metacritic is a great website, by the way, which averages anywhere from about 5 to 20 or 30 different critical reviews, and you get a beautiful little Gaussian, normally distributed, where most movies are around a 50 on their 0 to 100 scale. If the movies are a 25 or below, which included the Ashton Kutcher and Katherine Heigl movie, then the chance of not showing them to critics is much higher. So, if you don’t see a critic review, and you kind of knew about the statistics that we had gathered, you should say to yourself, “A lack of review is the same as a bad review,” basically. We took our theory of level-k thinking, and the level-k theory says: some movie goers are just kind of naïve. Those are like the people who pick 50 in the 0 to 100 game. They’re just not thinking strategically: “Well, wait a minute. What are other people going to pick? Because I should be responding to them.” So, the naïve movie goers say, “I didn’t see a movie review. That doesn’t mean anything. It’s probably kind of average,” but actually it’s not average. If there’s no review, statistically, it’s below average. The way we could tell that the movie goers were being naïve was: if you write down some very fancy math, and look at the statistics, your prediction is that the movies that aren’t shown to critics will earn about 10% or 15% more than they really should, given their actual quality, because people are naively guessing the quality is much better than it is, and too many people will go to those movies. So, we looked at all the data, and did a very careful statistical analysis, and it turned out to be consistent with this theory that there’s some degree of movie goer naivety, and the result is: if you make a bad movie, don’t show it to critics, and your movie will make about 10% or 15% more than it really should, because you’re fooling some of the people some of the time.
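The "no news is bad news" inference can be illustrated with a toy simulation. This is a sketch under made-up assumptions: scores drawn from a bell curve around 50, and studios cold-opening anything scoring 25 or below. Neither number is taken from the actual study's model.

```python
import random

random.seed(1)

# Toy model: Metacritic-style scores drawn from a bell curve around 50.
# The deterministic cold-open cutoff of 25 is an assumption for the example.
scores = [random.gauss(50, 15) for _ in range(10000)]
cold_opened = [s for s in scores if s <= 25]

# Average true quality of a cold-opened movie: far below 50.
avg_cold = sum(cold_opened) / len(cold_opened)
print(round(avg_cold, 1))

# A naive moviegoer who treats "no review" as "probably average" (50)
# overestimates a cold-opened movie's quality by this many points:
print(round(50 - avg_cold, 1))
```

A sophisticated (level one or higher) moviegoer would instead condition on the missing review and expect something close to that low conditional average, which is the sense in which a lack of review is the same as a bad review.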
Matt: That’s fascinating. I’d love to dig in a little bit more. Explain, or kind of tell me more, about the concept of level-k thinking, or the level-k model of behavior.
Dr. Colin Camerer: The basic idea is: whether it’s IQ, or practice playing games, or how motivated people are by an experiment, or by figuring out what the movie critics are doing, it looks like we can kind of sort people by how strategic they are. Some people are not very strategic. Those are what we call ‘level zero’, and that means that we think what’s going on is that they’re picking sort of a salient, simple strategy. Maybe something just pops out, or they’re exhibiting naivety, like the movie goers. They open the ad, or they see an ad on TV, and the ad doesn’t have any critic information, and they don’t notice that there’s no critic information. So, they assume that no critic information is kind of like ‘average’. Then, level one players are players who think that other people are level zero. So, in the 0 to 100 game that we talked about earlier, those are people who think: “Other people have no real clue. They’re going to pick numbers around 50, like lucky numbers, or their birthdays, or something like that, and I’m going to pick 2/3 of that.” So, these players are being a little more clever, because they have a concept of what others will do and then they’re responding to it. These would be something like a movie goer that says, “Gee, if the studio is smart then they’re not going to show their worst movies to critics, but I can’t tell beyond that how smart they are, or how bad the movies are.” Level two players think that they’re playing level one players. So, they’re going to pick 2/3 of 33, about 22, and so on. So, you can write down a kind of sequence of these types of players. The zeros choose something that’s kind of focal or random. The ones think they’re playing zeros and they respond. The twos think they’re playing ones and they respond. With just a couple of steps you capture most behavior; usually zero, one, and two are the only levels you need, although conceptually, in principle, people could be doing three steps, or four steps.
You might get that sometimes in a very complicated novel, like a spy novel, where, “I think that he thinks that she thinks,” and there are double agents and things, but usually, mentally, it’s kind of overwhelming. It sort of boggles the mind to think more than two or three steps of reasoning. We’ve applied this type of framework to movies. It’s also been used to analyze some managerial decisions, like when managers will adopt a new technology. It depends on how many other managers they think will adopt, and how many adopters those managers think there will be, and so on.
We’ve also used it to explain a lot of different experiments we’ve run in the lab, where the games are much simpler. You can often see...in fact, literally we can see, say, if we put different numbers on a screen which [INAUDIBLE: 0:26:07] the payoffs from choosing different strategies. If you’re a level zero player, you’ll look at certain numbers and ignore some numbers. If you’re a level one player, you’ll look at what the other player’s payoffs are. If you’re a level two player, you look at everything. So, the more levels of thinking you do, we can tie that directly to what you look at on the computer screen, and we use a measurement technique called ‘eye tracking’, which is basically a tiny camera that looks at your eye. If your eye moves a little bit, like to look at the left part of the screen instead of the right part of your computer screen, the camera’s sensitive enough to see where you’re looking. It can locate where your eye is looking on a computer screen within the precision of about a quarter coin. So, if we put the payoffs on the screen in a certain way we can detect, to some extent...not perfectly, but we can roughly detect who’s doing two steps of thinking, because there’s certain information they like to look at in order to figure out what to do. So, a combination of eye tracking [INAUDIBLE: 0:27:04] experiments has given us an idea of the distribution of levels. Basically, I should add, for, say, college-educated student populations, we typically estimate that something like 10% or 20% of people are level zero; they just aren’t really thinking it through at all. Maybe 40% are level one; that’s the most common. And maybe 30% are level two. Sometimes you’ll see what looks like much higher level thinking—level three or level four.
Matt: The first thing that makes me think of is poker, and longtime listeners will know that I’m a big poker player. We’ve previously had some guests on here talk about some of the psychological elements of that game. Poker’s a great example of a game where, depending on what level of thought your opponent is at, you have to adjust your thinking and play one level ahead of them, but if you play two or three levels ahead of them it can end up backfiring, and being kind of the same thing as picking 1 in the 0 to 100 game.
Dr. Colin Camerer: Every so often I think we should try to get a grant and study poker, or just study it, because from a game theory point of view it actually hasn’t been studied very much. Although, early in the history of game theory, some of your listeners will know that the seminal book on game theory [INAUDIBLE: 0:28:13], in the 1940s. It’s somewhat weird in social science that someone writes a book and really creates a whole field, but their book really did. There was some earlier research that they had built on, but their book really made a big splash. They actually have a chapter on poker, but it’s a super simplified version in which you basically get a high card or a low card, and there’s one round of betting. So, they picked a simple enough example that you could fully analyze it and see what’s happening. Of course, what makes real poker so interesting is that, you know, there’s some mathematics (you have to kind of figure out how strong your hand is), but it also depends, as you said, on what strategy you think the other player is going to play. Are they going to play tight and only bet when they have great cards? Are they going to bluff more? People who play poker a lot often talk about building a kind of model of the opponent, which is essentially: what level is this person playing? [INAUDIBLE: 0:29:11] be pointed out, much like in the 2/3 of the average game, if you kind of overestimate your opponent, for example, if you think they won’t fall for a bluff, you may not bluff enough. So, you’re kind of leaving money on the table. It’s also a cool game from a psychological point of view, because if you play face-to-face you may have all kinds of information conveyed by facial expressions, which is something that neuroscientists have studied for a long time, including with animals. Of course, there’s that evil poker face, which is related to what we call ‘emotional regulation’.
You know, you have really great cards, and you don’t want to show that in your face, or you have terrible cards and you don’t want to show that in your face. And there’s the concept of tells; in other words, only a certain amount of emotion can be well-regulated by us. So, unless you’re a sociopath, or a fantastic actor, it may be hard to control your emotions fully. So, somebody who watches you for hours and hours can really figure out what your tell is when you have terrific cards, and might be able to infer your hidden information, or what we call ‘private information’ in game theory, from what’s on your face, or from your fingers tapping, or brushing your hair, and so forth.
Matt: I personally definitely would advocate you studying poker. I think that’d be fascinating, and I’d love to dig into that research at some point.
Dr. Colin Camerer: Usually, and especially at places like Caltech, we have a lot of freedom to study what we're interested in, and the nice thing about poker is I don't think we'd have any trouble getting volunteers to play. And, of course, there's lots of online data. There's no shortage of interest in, and ways in which you can dig into, poker as a neuroscientific, psychological kind of test bed. And, of course, probably a lot of the basic processes, you know, like bluffing or mind-reading or face-reading, happen in other kinds of things, like bargaining, and other things that are important in political science and economics and everyday life.
Matt: I'm curious, going back a little to the level-k model of behavior: Why do you think people get stuck at level one or level two of strategic thinking?
Dr. Colin Camerer: Well, there is one variable that doesn't predict perfectly, but is correlated. The correlations are around 0.3 or 0.4, where zero is no correlation at all and plus-one is perfect, and in these kinds of social science data, we rarely get plus-ones. So, 0.3 or 0.4 is not too bad. Anyway, the variable that's correlated about 0.3 or 0.4 with steps of reasoning is working memory. Working memory is basically, you know, I read you a list of digits--four, three, four, six, one--and then you have to quickly remember how long the list was and get the digits correct in order. Some people can remember five or six digits. That would be a pretty short working memory span. Some people can remember eight or nine. And working memory--how many things can you kind of keep track of--turns out to be a pretty good, solid but modest correlate of lots of types of intelligence and ability to be cognitively flexible, and also of the number of steps of reasoning you take. So, people with more working memory tend to make choices that are consistent with level two reasoning. So, if I looked at the zero to 100 game, and I looked at people picking around 50 and around 33 and around 22 or lower, or someone like you picking one--which is a good guess if you're playing highly sophisticated people, but not for the first time--you probably would get a nice correlation, a modest but positive correlation, between the number of things people can keep in mind, like numbers, and how many steps of thinking they do when they're thinking about games.
Matt: Changing gears a little bit, I'm curious. One of the things you talked about, and you may have touched on this earlier, is the idea of the theory of mind circuit. Can you extrapolate on that a little bit?
Dr. Colin Camerer: Sure. So, this is an idea that actually came originally from animal research starting in 1978. There were some beautiful but very early studies with chimpanzees, and the primatologists--Premack and Woodruff and others--were interested in whether chimpanzees have an idea that another animal could be thinking about something differently than they are. And shortly after that, some philosophers actually suggested a really clever test for theory of mind, called the false belief test, and the idea is...often it's done with children, with a kind of storyboard, or you could make a little video. But I think I can...hopefully I can describe it well enough that people can get the idea, or they can Google and learn more. And the false belief test [INAUDIBLE 00:33:50], so you see a little cartoon storyboard. Sally-Anne goes into the kitchen and takes a cookie out of a cookie jar. She leaves. Her mom comes in and takes the cookies out of the cookie jar for some reason. Maybe they're melting because it's hot, like it is now in Philadelphia, and she puts the cookies in the refrigerator. Closes the cookie jar lid. And, of course, Sally-Anne doesn't see that, because she went outside. The mom leaves. Sally-Anne comes back. The question is, where does she look for the cookies? And so, if you follow the storyboard, you know that the cookies are in the refrigerator, but if you have theory of mind, you have the capacity to know that Sally-Anne thinks the cookies are in the cookie jar, because you saw something--the cookies being moved from the cookie jar to the refrigerator--that you know she didn't see. And it turns out that when children are two or three, they will typically say, "Oh, she'll look in the refrigerator." And the reason is the kids know something, which is where the cookies are, and they can't imagine that somebody else doesn't know it. So, they think Sally-Anne will look in the refrigerator.
Sally-Anne must know there are cookies in the refrigerator. So, they're not able to maintain a concept of something being true--where the cookies are--and somebody else having a false belief. And, as the kids get older, typically around five years old... And this is a very solid finding from many different cultures, and it doesn't seem to matter whether the kids are illiterate or in a developing country. There have been studies on several different continents, including Africa and Australia, and at around five years of age, the kids realize, "Oh, you know, I know the cookies are in the refrigerator, but Sally-Anne thinks they're in the cookie jar." And that's the correct answer. So, this test, and a number of others, have shown that there seems to be a somewhat distinct mental circuit, called the mentalizing, or theory of mind, circuit. It involves dorsomedial prefrontal cortex, which is sort of right in the center of your forehead, maybe an inch or two above your eyebrows; the temporoparietal junction, which is kind of back by the temple; areas in what's called the medial temporal lobe; and also regions of cingulate cortex, which is a part in the center of the brain.
And so, another way to study mentalizing, which is shifting to the neuroscience, is some colleagues of mine have developed what they call the why-how test. And so, you might show, for example, a picture of somebody inserting a screwdriver into a toaster oven, and the how question is, "How are they holding the screwdriver?" Well, left hand, right hand. And that doesn't really require any theory of mind. It doesn't require you to think about the intention of the person or what's in the person's head. It's just a physical activity. So, that does not require theory of mind. The why question is, "Why are they using a screwdriver in the toaster oven?" And the answer might be it's broken, or they're trying to get the toast out, or something like that. That requires mentalizing. It requires you to think about the person's intention--why they're motivated to do things in that way. And so, if you show people a series of why questions and a series of how questions, and you ask which areas of the brain are differentially active when they're figuring out why versus how, you get a nice clear map of what's called this mentalizing network. And a few studies have linked that to game theory, so that people who are doing more strategic thinking--like picking a lower number in the zero to 100 game, or presumably other games, or people who say, "Wow, there was no movie review. That's probably bad news, because I think the studios know if it's good or not, and if it's bad, they don't show it to critics"--are making a strategic inference about the knowledge that another mind has, in this case the studio. And so, there's some evidence that more activity in this mentalizing region is associated with more strategic thinking, in terms of these level-k steps.
Some of your listeners, again, will know that one of the reasons people became very interested in this mentalizing circuit is that children who are autistic tend to be slower to get the right answer in false belief tasks, and the idea is that part of autism is, not necessarily a full inability, but a kind of weakness, or what clinicians call a deficit, in the ability to think that other people know things or think things that are different from what you know. So, a weak theory of mind is thought to be associated with autism. That's somewhat debated, because these things are never quite that simple, but the first couple decades of research, I think, are pretty solid about the existence of theory of mind and mentalizing and where it seems to be in the brain. And some of the medical questions about autism are a little more up in the air.
Matt: You mentioned chimpanzees. Tell me a little bit about the strategic differences between human and chimpanzee brains, and are we smarter than chimps?
Dr. Colin Camerer: So, we've done a little bit of work on that, and first, any time you work with animals--and the same thing with children, actually--it's harder to make very solid conclusions, because we can't ask the chimpanzees questions and we're never absolutely sure that they understand what we're trying to do. And also, the chimpanzees are usually motivated to do experiments by little cubes of food. So, if they're just not hungry, they're going to look like they're dumb. But it's not that they're dumb, it's that they're not competing for a reward. So, subject to that caveat, my collaborator [INAUDIBLE 00:38:58], who works in Japan, has a theory he calls the cognitive trade-off hypothesis. And the idea is a very simple one evolutionarily, which is that in the chimpanzee's natural ecology, it's really important for them to be able to play hide-and-seek games, to keep track of predators and prey, and to do certain kinds of rudimentary strategic thinking. So, for example, if a bunch of fruit falls from a tree, it's really helpful if they can keep track of where the different pieces of fruit might've gone and where they are. And that takes a certain kind of working memory, right? Instead of a string of digits like we talked about earlier--one, six, seven--the working memory the chimps need is spatial working memory. You know, where did all this stuff go? And if they can do that better than other chimps, they can run and get food more quickly. And there is some evidence that, especially with training, the chimpanzees are really good at spatial working memory. The way he tests it experimentally is to show them a bunch of numbers on a screen, like 1, 4, 3, 2, 6, in different places on the computer screen, for 200 milliseconds, which is very quick. You can just barely see the numbers.
And then the numbers disappear and are replaced by black blocks, and in order to get a food reward, the chimp has to press the black blocks that correspond to the numbers, in order. So, wherever the digit 1 was originally, the chimp has to press that box first; then if the next digit was 2, he has to press that one, and if the next digit was a 4, he has to press that. And you can see videos of this on the website of the Primate Research Institute, called PRI. The highly-trained chimps, who do this thousands and thousands of times, get really good at it. With 200 milliseconds' exposure and a lot of training, with five or six digits in a sequence, they can get about 80 or 90 percent correct. And people actually really aren't as good, although it's a little controversial, because it's hard to get human beings to do it for 10,000 trials. So, there are very few cases where people have been as trained as the chimpanzees. Anyway, that motivated the idea that maybe the chimps are actually just really good--better than us--at keeping track of sequences of information that resemble something like fruits falling in the forest, which is useful for them and their adaptation. And, by the way, the cognitive trade-off part comes in in the following way. The chimps are basically kind of like kids up until age two or three, and a lot of the play they do--chimps with chimps, kids with kids--is, you know, play that's kind of like practicing for strategic interactions or games that probably had some adaptive value as they were growing up. So, they play hide and seek, or--status dominance is very important for the chimps--they'll kind of wrestle and play fight to see who's stronger. And the difference in humans is, once children start to talk, a lot of their mental attention and probably brain matter is now devoted to this amazing tool which is called language.
And also, children will shift over at age two or three or four to what's called group play. Kids who are very little will just play by themselves. Like, you get a bunch of kids in a room, and they're all sitting and playing completely independently, like little assembly line workers. When they start to talk, they can start to play much more interesting games that involve talking to one another and bluffing and things like that. But the chimps never advance to that next stage. So, in a way, they get a lot more practice in their playtime in games that may require a certain kind of working memory, like hide and seek. "Where did that person run off to? I'm going to go look for them there." Or, "Where did somebody hide last time? I'm going to switch to a different location so that they'll go to the old location and not the new one."
And so, [INAUDIBLE 00:42:41] hypothesis is that the chimps get this kind of endless childhood of practice in games that involve working memory and hide and seek. And so, we actually did some experiments with chimpanzees where they don't literally play hide and seek, but they see a little computer screen. It's basically an iPad with gorilla glass--or chimpanzee glass--so they can't smash it, and a little light comes up and you press either on the left or the right. And there are two chimpanzees next to each other in a glass cubicle, and for various reasons we used mother-and-child pairs, so it's like a mother and a young son, or a mother and a young daughter. One of the chimpanzees is the hider, which means they want to pick left when the other chimpanzee picks right. They both see two separate screens, and they're picking at the same time. The hider gets a food reward if they mismatch: "If I hide, I pick left, you pick right. Ha, you didn't catch me." The seeker gets a food reward if they match. So, if they both choose left, food reward for the seeker, and the hider gets nothing. And, when they play this game hundreds and hundreds of times for food, two things happen which are interesting. One is that, in their choices, they seem to do a better job of keeping track of what the other chimpanzee has done in the past and then responding to that. So, if you're a seeker and you see the other chimp has picked left, left, left, you switch to left more quickly. They're kind of learning, and they're recognizing patterns. And the other thing shows up when you plot the percentage of times they choose left and right. Remember rock-paper-scissors: in these games, you can alter how much food you get for different combinations of choices. Like, if I'm a seeker and I choose left and you choose left, ha, now I get three apple cubes. If I choose right and you choose right, I still get food, but I only get one apple cube.
If you move around how much food comes from these different configurations of choices, you can change the mathematical predictions of Nash equilibrium game theory. And it turns out that if you make a graph, the chimpanzees as a group--if you average across the six different chimps, three pairs, one playing hider and one playing seeker--are incredibly close to the theory. I mean, I claim... I know a lot about this, but maybe not everything. Surely not everything, and there are always new studies coming along. But I've said this to several game theory audiences, and no one has ever said, "I've found an interesting exception to your claim," that the chimpanzees, as a group of just six chimps, come about as close to these predictions of Nash equilibrium--the balance of left and right play--as any group we've ever seen. And it might be just a fluke, because there are only six. It might be that they're trained a lot; they do this hundreds of times and they're very motivated. They do it when they're a little bit hungry, so they're motivated to eat. Or maybe they have this special skill--maybe the chimps are actually a little better than us at this special type of game that involves hiding and seeking and, most importantly, keeping track of what your opponent has done the last few times.
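The apple-cube payoffs just described are enough to sketch how those Nash predictions are computed. Here is a minimal Python sketch, using hypothetical payoffs in the spirit of the example (three cubes for the seeker on a left-left match, one cube on a right-right match, and one cube to the hider on any mismatch; the hider's payoffs are my assumption, since only the seeker's were described). In a mixed-strategy equilibrium, each player randomizes so that the opponent is indifferent between left and right:

```python
# Mixed-strategy Nash equilibrium of a 2x2 hider-seeker (matching) game,
# solved from the indifference conditions. Payoffs are hypothetical,
# in the spirit of the apple-cube example above.

def mixed_equilibrium(A, B):
    """A = row (seeker) payoffs, B = column (hider) payoffs, each 2x2
    with rows/columns ordered [Left, Right].
    Returns (p, q): p = P(seeker plays Left), q = P(hider plays Left)."""
    # Hider mixes with probability q on Left so the seeker is indifferent:
    # A[0][0]*q + A[0][1]*(1-q) = A[1][0]*q + A[1][1]*(1-q)
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Seeker mixes with probability p on Left so the hider is indifferent:
    # B[0][0]*p + B[1][0]*(1-p) = B[0][1]*p + B[1][1]*(1-p)
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

seeker = [[3, 0],   # 3 cubes on (Left, Left), 1 cube on (Right, Right)
          [0, 1]]
hider  = [[0, 1],   # 1 cube on any mismatch, nothing on a match
          [1, 0]]

p, q = mixed_equilibrium(seeker, hider)
print(f"Seeker plays Left {p:.0%} of the time; hider plays Left {q:.0%}.")
```

With these numbers, the seeker plays fifty-fifty but the hider plays left only a quarter of the time: tripling the seeker's reward for a left-left match shifts the *hider's* equilibrium mix, not the seeker's. Predictions like these, which move around as the food rewards move, are what the chimps' group behavior tracked so closely.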
Matt: So, in that study, you also had some human groups play the same game against each other, and they were further away from the game-theoretic Nash equilibrium than the chimps were.
Dr. Colin Camerer: That's correct. And, in fact, for robustness, we did it with a group of people in Japan, and they actually used the exact same setup--the same type of iPad and pressing. So, it's not that we gave them instructions that were a little bit different; the chimps, we don't really tell anything verbally. They just have to learn it by trial and error. But we also had a group of African people who worked at a chimpanzee reserve in West Africa, and the difference with them was, well, first of all, we didn't use the computers there. We didn't have them. But we had them play with bottle caps, and they could play with the bottle cap up or down, and that represented kind of like left or right, and one of them wanted to match the other person's bottle cap and one of them wanted to mismatch. And the advantage of Africa was that people are poor, and so we could pay them what would be a typical amount of money for Americans, but in terms of purchasing power, it's a lot of money. Sometimes with these experiments, we prefer that whoever's participating in an experiment is motivated by money, so that they're paying attention and they continue to think. And so, in half an hour or 45 minutes of playing a couple hundred times with each other, the Africans made the equivalent in U.S. purchasing power of maybe $150. And you could tell by watching them, they were kind of really into this. This was pretty important to them. But even then, their patterns and their data looked very much like the Japanese people's, even though the literacy levels are different, they're from two different continents, their genetic material is probably a little bit different, and their incentives were quite different.
But, if you plot the human groups, the two human groups, Japanese and Africans look quite similar, and then the chimps are just off in this land of their own, within 1% of where the mathematical prediction says they should be.
Matt: So, for listeners who want to dig into not only that, but just game theory more generally, and some of the things we've talked about today, what resources would you recommend that they check out? Books, websites, etc.
Dr. Colin Camerer: I think one that's sometimes used as an undergraduate text, so it's not too technical and it's well-written, is by Avinash Dixit--D-I-X-I-T. He actually has a book with Barry Nalebuff, so I'll just give you Dixit's last name, since it's easier to spell. So, he wrote a kind of popular book, and he also has a textbook, which is often used to teach undergraduates in a way that's not too mathematical. You can teach game theory, as you might imagine--and it's sometimes taught this way in economics and even computer science and engineering--in an extremely mathematical way, but it's really a sort of storytelling about human behavior with some mathematical structure on it. So, the Dixit book with Nalebuff is kind of a chatty, fun introduction with lots of examples. And he has another book, I believe with Skeath--S-K-E-A-T-H--that's more like a textbook you would use in a class, but not too mathematical. There are lots of very mathematical books, one by Roger Myerson, who is a Nobel laureate. And I have a book called Behavioral Game Theory, which, again, is not meant for a popular audience, but a lot of people have read it and told me they like parts of it. It's aimed at, say, advanced undergrads who know a little bit about game theory but are mostly interested in how people--and sometimes children or chimpanzees--actually play these games, and in principles like this level-k thinking, besides equilibrium thinking: what are the different mathematical ways we approach this? My book, unfortunately, is not a trade book. It's a university press book, so it's not very cheap, but there probably are used copies on Amazon that are not as highly priced as textbooks usually are.
I didn't make a big effort, like Dixit did with his books, to reach a big audience, but I hope at least some of your listeners who are willing to put up with a little bit more math would find it interesting. Anyway, there are a bunch of books, although, unlike Daniel Kahneman's book, Thinking, Fast and Slow, there hasn't been a really great, fun game theory book written with lots of cool stories. Maybe I'll write one someday, or somebody else will. But so far, Avinash Dixit's book, I think, is the best one.
Matt: And what is one piece of homework that you would give listeners?
Dr. Colin Camerer: Well, I think, you know... Abraham Lincoln, I think, said, "Think twice as much about the other fellow as about yourself." And the usual kind of mistake people make is to think about what they can get out of something and not to think sufficiently about what motivates the other person. What are they likely to do? If I'm very tough in a negotiation, will they walk away or not? If I'm really easy in a negotiation, what could happen? And the level zero players that we're talking about, by definition, are not doing any strategic thinking. They're not asking, "Why is somebody doing this? What is their motive? What do they know that I don't know?" And so, often, a little bit of analysis like that goes a pretty long way.
Matt: Where can people find you online?
Dr. Colin Camerer: On Twitter, my handle is CFCamerer--C-F-C-A-M-E-R-E-R. I do have a website, although it hasn't been updated particularly recently. I'm on Facebook, but I don't post very regularly. On Twitter, I usually comment on certain things, and if I come across a recent research paper--sometimes they're quite technical, and sometimes there's a fun, instantly interesting takeaway--I'll use it to advertise, sometimes, my own research and other papers that I think people who are interested in science at the level of your listeners might find fun to read.
Matt: Well, Colin, this has been a fascinating conversation, and I just wanted to say thank you so much for being on The Science of Success.
Dr. Colin Camerer: My pleasure. Thanks for having me.