Hacking Bias, Talking Process, and Seeking Truth with Annie Duke

By Garvin Jabusch

Annie Duke is an author, a former world poker champion, a business consultant, and an expert on the behavioral pathways of decision making. Her book, Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts is a favorite of mine (and is now in paperback), because it provides a clear line of sight toward ditching our biases and prior beliefs so we may more objectively understand the world.  I believe the job of an economist and investment manager is to communicate a straightforward, understandable view of the economic trajectories in play today, and Ms. Duke’s approach to decision making is directly applicable to that pursuit, as well as to portfolio construction itself.

Annie Duke

I was thrilled recently to have the opportunity to speak with her. Highlights from our long conversation follow below. A much shorter version of this appears on Worth.com – and who can blame the editors there for not wanting to run a 9,000-word interview! – but I wanted to post this longer version because I found Annie’s comments insightful and even illuminating. And, as I re-read the original transcript, I discovered that I was re-engaging with many of the concepts we discussed, and so I think readers might as well. I hope you do.

Garvin: Annie, hi. Thank you for taking the time today, I’m a big fan of your work.

Annie: Oh! Well, thank you. I’m looking forward to talking to you.

Garvin: You know a lot of investment managers are fans of your work.

Annie: I do, yeah. I think I seem to have resonated with them and it’s been really wonderful because they’ve allowed me to develop really great friendships with a lot of them. When we sat down with my publisher and my editor to talk about the book, she said, “What’s your opinion of the way this book propagates?” And obviously this is assuming that the book does well, which is a super tail event anyway, so it’s like, “Let’s pretend it’s the best of all possible worlds and people actually read it, how do you see this propagating?” And I said, “Oh it’s going to get picked up. It’s going to be adopted by the finance community and then propagate out from there, and then I think people will figure out that this generally applies, but I know that this is going to appeal directly to that group.”

Garvin: No question.

Annie: Right, so what I didn’t foresee was that the book was going to create that kind of connection for me, where I was going to not just have the good feeling from the professional success, but also the people that I got to engage with on a personal level, that my life is richer for that.

Garvin: That’s wonderful, and it’s also a great example of an unknown outcome. It’s part of your thesis of understanding we don’t have perfect information.

Annie: Exactly. That’s exactly right.

Garvin: On that point, you write something about poker players that I think is true of asset managers, “they embrace that uncertainty and, instead of focusing on being sure, they try to figure out how unsure they are, making their best guess at the chances that different outcomes will occur. The accuracy of those guesses will depend on how much information they have and how experienced they are at making such guesses.” A lot of asset managers don’t want to tell you there’s guessing involved, but I’ve never heard a more succinct description of our profession.

Annie: I’ve actually been thinking about that as I’m writing my new book. We know, just objectively speaking, that there is an exact probability that you could assign to a particular outcome, but because we don’t have perfect information, that exact probability is hidden from us. And I think that because we are really dichotomous thinkers at our core, we know that we can’t get the exact number. So we know there’s something that’s right, like if we have perfect information there’s a right answer. And then, because we think so categorically, we think that everything else must be wrong. And so what happens is that when we think about this idea of guessing, it turns into wrong. It turns into, oh, that’s the opposite of this thing that we could do if we had perfect information.

Garvin: Or if we’re guessing, it must not be disciplined. Or there’s no methodology. But of course there is. Most investment methodologies exist to manage the fact of incomplete information.

Annie: Right, exactly. So what I’ve been trying to do, to get this idea across to people, is show that we do have a way to talk about guessing which allows us to get out of that trap. That trap of dichotomous thinking. Which is to say: educated guess. And what do we mean by “educated guess?” It means we’re bringing to bear the information that we have, imperfect as it might be, on our explanation of what we think that probability is, right?

Garvin: So how do you communicate to people that by ‘guesses’ you mean ‘statements of probability?’

Annie: I’ve been thinking about that. There’s really kind of nothing that you can guess at that you know nothing about. Right? If I asked you, “What’s the size of the universe?”, even though that’s not known to man, right? Let’s say that I asked you that, and I just want you to give a number. I know you’re not guessing zero.

Garvin: Right, we know something, and in other cases we know a lot, but still not everything.

Annie: Right? So I know there’s going to be some sort of range. If I said to you, “imagine a cat.” And I say to you, “How much does the cat weigh?” I know you’re not going to guess 100 pounds or more. And I really want to preach this. I want to get people to understand that it’s never a “guess” in the sense of “the opposite of right,” not a guess in the sense of ‘I’m just going to totally randomly spin a wheel, come up with an answer,’ but rather that all guesses are educated guesses.


I want to get people to understand that it’s never a “guess” in the sense of “the opposite of right,” not a guess in the sense of ‘I’m just going to totally randomly spin a wheel, come up with an answer,’ but rather that all guesses are educated guesses.

Annie Duke

Garvin: Yeah, it’s “guess” as in my best assessment of the probabilities.

Annie: Exactly, and then what’s really wonderful I think once you change your mindset about that, once you sort of say, “Okay I’m not going to think about a guess as just random. I’m going to think of all guesses as educated guesses.” What happens is that then you focus on the educated part. Because what you realize is if any guess I make is an educated guess, then I have to become more educated, I have to figure out what is the stuff that I know that’s relevant to this guess? And then what is the stuff that I’d like to know? That I don’t know, that would be helpful for this guess? And now you become laser focused on just vacuuming up knowledge and trying to extract what other people know, what the world knows. Trying to, as much as you can, find the base rate or a relevant reference class. You start to just say, “My job then is to become better educated.” And it focuses you on the right part of the problem I think.
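Annie’s point about base rates and reference classes can be framed as a probability update. As a purely illustrative sketch (none of this is from the interview; the numbers and the `update` function are hypothetical), here is what an “educated guess” looks like when you anchor on a base rate and then adjust for new information:

```python
# A minimal sketch of an "educated guess" as a probability update.
# Assumed, illustrative numbers: a 5% base rate for some outcome, and a
# piece of evidence that is 4x more likely to appear if the outcome is true.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.05          # reference-class frequency (the "educated" part)
evidence_strength = 4.0   # likelihood ratio of the new information

guess = update(base_rate, evidence_strength)
print(round(guess, 3))    # 0.174: still a guess, but an educated one
```

The odds form makes the two ingredients Annie describes explicit: what the world already tells you (the base rate) and how strongly your new information should move you off it.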

Garvin: Yeah, figuring out what inputs you need to arrive at the highest probability of guessing the outcome. And then, go learn about those.

Annie: Correct, exactly. And I think this is the frame that I’m now working with, so this is going to be a very deep part of the next book that I’m writing.

Garvin: I’m looking forward to your next one, can’t wait!

Annie: Well I can’t wait for it to be finished. It’s due in June so I’m really under the gun right now.

Garvin: Okay, so thanks for making time for me when you’re supposed to be writing!

Annie: Absolutely. You know what? I consider that a real privilege, because I just talked to you about something that I’m working through in the book right now. And so it’s like I get to see, “Well, is that falling flat? Are they understanding the way I’m communicating about it?” I get to hear what your take on what I’m saying is, and so then that helps me think about how I might communicate it, in terms of the ways it’s being understood. The ability to share these ideas and talk to people about them, that’s such a deep part of the process and it’s such a privilege that I get a chance to do that.

Garvin: I get that. At our firm, our investment thesis is a little bit outside of the mainstream, and when I go give a talk or just meet with a client, the extent to which folks are picking up what I’m putting down is my best input for how good my communication is. So I hear you.

Annie: Yeah, if I’m talking to someone and they just don’t get it, that’s not their problem. That’s my problem.

Garvin: Yeah. We need to find better language, or imagery, or storytelling… We focus on climate change related strategies and there is a segment of the population that just won’t hear me.

Annie: Yes.

Garvin: They’re epistemically sealed off from that.

Annie: Yes.

Garvin: And I think there can be cases where it isn’t my communication limitation that is keeping them from getting it, it’s their culture. It’s their tribal identity.

Annie: Yes.

Garvin: But I think the other 90% of the time it is me, and I need to figure out what I’m saying. I need to hone my messaging, you know?

Annie: Yeah, so I actually think about that in terms of the way that I may engage with people politically as well. I do have that category. The way I view it is, are they engaging in good faith? So if I’m talking to someone and they’re not listening to me in good faith, I can’t do anything about that. Maybe I could work on the top level thing, which is, “Did you open your mind and hear me?” But that’s a really big job, you know?

Garvin: No question.

Annie: So I don’t – if you look at my Twitter feed, I think it would be hard for you to figure out if I’m on the right or the left. I sort of wash out the extremes and then stay with the people who are really trying to engage in good faith.

Garvin: Which is part of your whole thesis of truth seeking, so that makes every kind of sense.

Annie: That is true. I do just try to live it.

Garvin: So can I circle back to the tribalism issue?

Annie: I’m about to start doing research on it so I would love that.

Garvin: So biologist E.O. Wilson…

Annie: Consilience, baby, Consilience!

Garvin: Cool, you’re a fan too?

Annie: Yes.

Garvin: In his most recent, Genesis, Wilson makes the case that one reason we’re so good at deluding ourselves is that over evolutionary history, tribal cohesion, largely via identifying with foundational stories, has conferred greater evolutionary benefits upon us than has understanding, or even recognizing the existence of, objective reality.

Annie: So we sort of think that the process of forming a belief goes: you hear something, and you think about it, and you vet it. And then you store a belief about it, or revise it. We really think that we’re being very reasonable and engaging in some sort of process with the information as it comes in before we actually store a belief, but there’s a lot of evidence that what actually happens is that we hear something and then we believe it, and then if we have the time, maybe we’ll get to vet it. And I feel like there are three reasons that happens that relate to what you just said. The first one is that we were selected for false positives, because the people who were the control group in experiments about whether lions attack you when the grass rustled got eaten. So we know that we have this selection for false positives, which is, we’re sort of natural believers. The second reason is that for most of the history of our species, we could only form perceptual beliefs. Meaning you had to see something with your own eyes or experience it yourself in order to actually be able to store a belief about it. This is in the time before language –

Garvin: Which constitutes substantially all of evolutionary prehistory –

Annie: – And because there wasn’t language, which is the way that you would convey abstract ideas from one person to another. Then language comes along, and evolution is like, “Oh okay, I have this belief formation system. So I’ll glom the abstract information system onto that.”

Garvin: So evolution just pasted this direct form of belief formation onto language assimilation, even though language can be wrong in ways direct observation isn’t.

So evolution just pasted this direct form of belief formation onto language assimilation, even though language can be wrong in ways direct observation isn’t.

Garvin Jabusch

Annie: Right. So those first two don’t have to do with tribalism, but when you’ve already got a lot of pressure on belief, then on the tribal side, there’s so much pressure on, “just believe what you hear.” Because human beings are super pathetic. I mean, if you think about it from a physical standpoint, right? We’re incredibly weak for our size.

Garvin: Right, we had to adapt via leveraging our big brains.

Annie: And in particular, our big brains were social, and they allowed us to form these kinship groups.

Garvin: Wilson would say that’s group-level selection, how we came to dominate, by leveraging eusociality.

Annie: That’s exactly right. Now here’s the thing. Imagine we’re within a group that’s trying to protect its resources and get our genes to the next generation. What if, when you spoke to me, I thought you were lying? How on earth do you possibly execute on anything that the tribe needs you to do? I obviously should approach everything assuming that people aren’t lying to me all the time, because there’d be literally no reason for human discourse if we went around thinking that people lied to us all the time. How could a tribe survive? And that’s really what Wilson is saying, that there was so much benefit to forming these kinship groups in order to overcome this physical issue, the fact that we’re physically weak. And then we use this big brain in this social way to form these kinship groups and for reasons of human discourse, so I should assume that what you tell me is true.

Garvin: And so we do. And so we don’t vet everything in a disinterested way, especially if you’re in my tribe.

Annie: We like don’t vet anything. We never vet. We really vet very little.

Garvin: And so I find it key that you, correctly in my opinion, write that disinterestedness is an imperative in truth seeking. And yet, at the level of individual selection – of course Wilson is famous for multi-level selection, group level, but also individual level, and on the individual level, he writes that we’re just inevitably, endlessly self-interested. And so disinterestedness isn’t a thing for us. So is that a stalemate? Like what’s the hack? How do we get around ourselves and disinterestedly seek truth? How does a person hack their own wiring to break that stalemate?

Annie: The hack is trying. So when we think about it, what does a tribe allow you to do? An individual is rarely going to throw themselves in front of an oncoming hyena just for kicks. But to protect the tribe, they will do this. They’re not going to march 100 miles for resources for kicks, but they will do it for their tribe. There’s a few things a tribe gives you, but among them there’s three that I think are really important for this discussion. One is belongingness. The feeling that you belong to something. The other is distinctiveness, that you’re distinct from other groups. And then also epistemic closure. In other words, they are giving you your knowledge. They’re telling you what’s true and what’s not true, which is what we just talked about, right? So that cohesiveness, that feeling of belonging to a tribe is really kind of the biggest social reward that we can get. And when we’re left to our own devices, that reward is coming from sort of what we think about as a conformitory thought style or group think or echo chamber. We’re all united, and being team players means we all believe exactly the same thing and our beliefs in some way are distinct from others. And so by affirming the beliefs that we have, we feel like we belong to the group and we are distinct from other people. And then on an individual level, that’s just amplifying my own individual bias where I just want my beliefs to be true as well. And because my beliefs align with the group, it all becomes a disaster.

Garvin: So we’re hopeless…

Annie: So, but the tribe is the hack because you can basically form a tribe that says, “Our tribe, our distinctiveness in our tribe is that we’re epistemically open. Our distinctiveness is that we give credit where credit is due as much as possible. Our distinctiveness is that I admit that I’m wrong. What makes me distinct is that when I ask your opinion about something, I don’t first tell you what I believe. Or when I’m asking you to tell me what you think of the decision that I made, I don’t tell you the outcome in advance.” And I know that’s really hard, because our sort of innate feeling is you can’t give me a good opinion about the subject that I’m thinking about unless I first give you this really important data, which is my own beliefs. But if I’m in a tribe, we’re overcoming that. A tribe where marching that 100 miles, or throwing yourself in front of the hyena, that is your desire to tell everybody your own beliefs. If you can get yourself in front of that hyena for your tribe, you’re in really good shape. So I think that actually ironically, our tribal nature is the hack.

Garvin: That resonates. And yet, we now live in a time that is nothing like the savannah where that all was selected for. We now confront global existential risks. We need to begin to imagine a world wherein we can be a tribe of seven or eight billion, you know? Not a zillion little tribes that each has an internal shared belief in, I don’t know, some ideology. So can we imagine a world where the whole seven billion and change of us becomes a single tribe? Or if you like, in your parlance, one truthseeking pod or tribe whose ideology is epistemic openness?

Annie: Here’s a few things that I think. One is I think that the more people that buy into this type of thinking, the better off we are. Is it going to be 100%? Are we going to talk to all seven billion? No. We’re just so tribal by nature, I don’t think that we can do that. But small steps, like small changes, make a really big difference. Imagine if you could get 1% more people on this planet to start really being disinterested, to really live disinterestedness. To really start thinking probabilistically. Think about the impact that would have on the world, right? If you just increase by 1%. And you know maybe you can increase by 2% or 3%; I have really high hopes that you can do that. So I think that’s number one.

Garvin: Okay.

Annie: Number two is that I’m putting my money where my mouth is here, because I have a non-profit which is really about creating a movement around decision education. And I feel like decision education is a really important thing to start very early, like elementary school with kids, to get them to start thinking about this kind of stuff and understanding, “What is a decision? How do I make a decision? How do I interact with information? How do I foster open-mindedness? How do I think probabilistically? How do I even recognize when I’m in an emotionally good state to be making a decision? How do I think about that habit?” Let’s get this to third graders.

Garvin: That would be great, because you know what? If you do a Master’s in finance or an MBA in this country, at least when I was doing that, you still are not taught that. You are taught that the market has perfect information and the stock price for any given security is exactly right at any given time. Of course that’s absurd. It’s like your chess vs. poker analogy – perfect information as in chess isn’t a thing in the world.

Annie: Right.

Garvin: And so we have to learn to ask ourselves, “Wait a minute. What information might be missing?” Or, “Am I about to make a buy or sell decision because I’m on tilt? Am I decision fit?” And so your heuristic framework that you provide about being a rational decision maker in turn confers long term advantage upon the stock picker. And so that’s why it resonates with so many of us.

Annie: Right.

Garvin: But this also brings me around to something else, Annie, it makes me wonder if I’m in some ways being a hypocrite. Because one of the reasons I like your work so much is because it does validate so many of my priors. So is that me just being a hypocrite and admitting out loud that I’m not necessarily being completely disinterested, and therefore maybe not seeing everything for what it is?

Annie: Well, I think that the proof is in the pudding. If the hack is tribe, then when you decide, “I’m going to be somebody who is really thinking about how to be open minded. I’m going to be someone who is really looking to think about how I can do better, how can I overcome these biases, how could I be more disinterested and more skeptical?” You know when you’re kind of living this stuff, first of all the science shows that your decisions get better. So we know that. But you’re also going to be part of a tribe, because by definition this is actually a really, really hard way to live – the mindset is very difficult. And so if the hack is tribe, that has to be a tribe that isn’t coalescing around a conformity thought style. It’s coalescing around an exploratory thought style. So it’s a tribe that says, “I understand that we’re tribal and that we want to feel good and belong to something and be distinct from others, but we’re actually deliberately trying to make it so that the things that are making us feel good align with what’s good for us in the future.” Right? So I think that’s where things go wrong, is that the things that make us feel good in the moment don’t align with what’s good for us in the future. Because amplifying the beliefs that we already have, rather than being open-minded to people who disagree with us, obviously isn’t in alignment with what we want for ourselves in the future generally. So you do have to recognize that you’re part of a tribe. What that means is that you’re not gamifying it. Because I think that you can turn it into an exercise without meaning, where you just start to say to people, “Now tell me why I’m wrong.” But you’re not really hearing it. It sort of becomes, “I’m going to go through the steps. Like, tell me why I’m wrong. You should go red team this.”

Garvin: It’s just performative. You’re just checking a box-

Annie: It’s just performative. And I think that that can definitely happen, where it loses its meaning and it becomes, “Well I know that I’m supposed to be doing these things, so I’ll do them and then I can sort of grab the social approval for appearance of doing them. But I’m not actually engaging with it.” So I view this as a practice in the sense of like yoga, where you are never perfect at it. And you really have to practice this kind of stuff in the same way that you have to practice physical exercise, right? If you think about the exploratory style of thought and the conformitory style of thought, you’re mostly going to be engaging in the second thing, in the conformitory thought style. It’s too embedded in the way that we think. It’s too embedded in our mindware. To your point about Wilson, it is so strongly selected for, that that’s mostly what you’re going to be doing. But by getting some focus on understanding that the goal, that your North Star is exploratory thought, you will engage in it more than you otherwise would have.

Garvin: You and Wilson both point out that our brains aren’t going to change anytime soon. We’re running on a Stone Age operating system, what you just called ‘mindware.’ And what you’re asking the tribe to do is identify itself as a tribe of disinterestedness, and do the hard work of flexing the prefrontal cortex to overcome the tyranny of the amygdala. Right? That’s hard. People don’t love doing things that are hard. So, given that, how are you going to grow the tribe to a point where it’s got enough critical mass and we actually address globally systemic issues?

Annie: Oh gosh, that’s such a big question [laughing].

Garvin: Well, I just asked you to solve the world’s problems is all, no big deal.

Annie: So yeah. This is sort of off-course but I’ll get back to a partial answer for what you said, which is going to maybe be a little bit dark but whatever. It’s a dark topic.

Garvin: It is what it is.

Annie: Yeah. So one thing I just want to say on the tribal stuff is if you think about groups. People who are members of a group have a really deep need to be a team player. To feel like, “Rah rah rah, our team is great. Your team sucks, ha ha ha.”

Garvin: That is us, totally.

Annie: Then you ask the team to do a premortem, what does it mean to be a team player right? It means to come up with the best ways in which you might fail. So it’s kind of a have your cake and eat it too. So the reason I say that is I’m wondering, and I don’t know the answer to it, but I’m wondering if there isn’t some way to create that on a global level.

Garvin: Yes!

Annie: Or at least more globally. Because there is this way to do it within a small group, right? Well we’re going to talk about how we’re going to succeed, but then I’m also going to say, “Now we’re doing a different type of team exercise. And if you want to be a really good member of the team, you have to come up with the best reasons why we’ll fail.” And everybody’s on board doing that and so now automatically it creates disinterestedness, that creates skepticism on a group level. So I wonder if there isn’t some way that you could propagate that out. I don’t have an answer for that. That’s just sort of what I would mull over.

Garvin: In places like Silicon Valley, people are saying, ‘well maybe the new tools we have like AI and/or gene hacking can be ways to directly hack that.’ Like, Stewart Brand said something to the effect of, ‘Changing human nature isn’t going to happen, but changing tools is something we can do’. Which dovetails with what you’re talking about, basically hijacking the amygdala and getting the squirt of dopamine associated with being a good member of the tribe from doing something that actually increases your disinterestedness, gives you a better objective view of reality. So that-

Annie: Yeah, you know I think that with AI, with those tools, the worry that I have is that it’s kind of like anything else. It could be used for good or evil. And I don’t mean evil in the sense of, like, the Terminator. AI is really sequencing data, and it’s a question of what data is being put into the AI, and what the AI is doing within all of those layers. And I think that what we forget is that data doesn’t exist objectively, in the sense of outside of the human beings collecting it and analyzing it and thinking about it. There are people who are like, “I’m a gut person and I just go by my gut and the data be damned.” Right? But then there are also people, I think, who are just like, “The data is going to tell me the truth.” And it’s like, “Well no, because it’s an interaction between human beings and data.” And I’d like to see more focus on the interaction between the two things. So I’m a really big fan of Gary Marcus in this area, and he has a book coming out actually with Ernie Davis. It’ll be out in about four months I think. I think it’s called Rebooting AI. I hope that’s the name of it.

Garvin: Alright it’s on my list.

Annie: And what they talk about is really this idea, it’s actually almost a celebration of the strength of humanity, giving a much clearer view, I think, of what the strengths of AI are and what the weaknesses are. So for example, I mean this is one that Gary has given before. If I said to you, “Imagine a cat that’s blue and bigger than a house.” This is full credit to Gary because this is his example. And now I ask you to reason about it, right? Like would you be scared of that? It’s a blue cat bigger than a house. Would you be scared of that?

Garvin: Absolutely I would, because cats are predators. It would see me like a mouse and I would have no chance.

Annie: Right, exactly. Now the interesting part is that you’ve never experienced such a thing. You’ve never experienced that, and yet I can ask you tons and tons of questions about it and you’ll answer it the same way that other human beings will answer. But if I were to feed that into an algorithm, an algorithm would be like, “Huh?” What human beings are really good at is just common sense. We’re really good at bringing to bear what we know about the world on any situation that we walk into. And then data is really good at, you know algorithms are really good at, finding correlations. So the solution I think is going to be a really amazing interaction between the two. And not one or the other. If what I’m trying to do is engage in motivated reasoning, that sounds more powerful because I have a data story for it. I can do that. And once we start to sort of put this God-like quality on data, it actually can be a way to amplify the biases that we already have, which again has to do with we have to think about what the human part of that is and how are we interacting with the data.

Garvin: I think bias amplification is in full evidence around the world! Your comments around this remind me of Garry Kasparov’s notion of the “centaur.” The best use of AI, at least in the sort of intermediate term, for a given use case, is a hardworking, knowledgeable human using the algorithm to help them find correlations to reveal or improve what they’re trying to work on. So kind of a hybrid.

Annie: You know, here’s kind of a weird, sort of dark way to think about how to band people together that I’m going to give you. Obviously at the moment, we’re feeling tremendous divide within our country, and it’s not just left and right going after each other, but there are different parts of the right that are going after each other and different parts of the left that are going after each other. So I mean we’re just fracturing all over the place, right? And I’ve been thinking about this, and people have been like, “Oh this is maybe social media’s fault.” And I’m like, “Well I can see how social media maybe is contributing to that for sure,” I can see how it’s amplifying it, but I actually think about something, and I’m interested to know what you think because I’ve sort of been mulling this over. I’ve been like, “Well I feel like there was this really big political change.” Which was until 1989 we all, Americans, had a common enemy, which was communism. And we had this real-life instantiation of that philosophy called the Soviet Union. And the Soviet Union was really trying to encroach on the world, and we were against them. And I feel like that allowed hundreds of millions of people to come together as a single tribe, united against this common foe. And united against the political philosophy of that foe and the things that that foe was doing. And so I just sort of think about that. Can you ever get the whole planet to band together as one tribe? And I think, “Well, if aliens attacked us.”

Garvin: That’s what Reagan said. He thought the Cold War would end quickly if aliens attacked.

Annie: Right? Yeah. So if aliens attacked us… The reason I feel that’s a little dark is because I’m like, “Well it would take a common enemy to get us to all come together.” But maybe the common enemy is there’s some sort of shift in the way people think about climate change for example. And that becomes the common enemy. I don’t know if it can be not aliens.

Garvin: Well, so, I hope it can be ‘not aliens’ because the visitation probabilities are obviously astronomically low, and if an interstellar-capable species were to attack, we’d almost certainly be Bambi to their Godzilla.  But it comes back to your idea of using group level selection or tribalism as the hack, right? It’s like the old Arab proverb, “It’s me against my brother. But it’s me and my brother against our cousin. And it’s me and my brother and our cousin against the stranger.”

Annie: Right, right…

Garvin: And it’s the ever-widening circle, and that’s multi-level selection, right? You’re self-interested first, and then maybe you can assimilate a personal tribe of about 150 people, and really no more, because that was our group size on the savannah in our selective environment. And that’s what made me actually ask the question about the tech hack, right? Whether it could be AI or CRISPR-Cas9 that expands our cognitive capabilities past the limits of our mindware’s pre-loaded 150-person tribe size, because we really need to think in numbers bigger than that now. And if that takes a tech hack, would it be worth it? Further, is that our only way out? Because, today, the problem with getting climate change front and center as the enemy we’re all trying to overcome is that it’s been made a key part of the culture wars, and I’m not sure there’s any hacking that bit of tribalism in the near term. Certainly not in a way that’s going to be timely enough to overcome the risk.

Annie: Yeah, I mean… I think there’s definitely the timely part. But I do think about this though, that you know… I feel like status quo bias is really strong. And it is hard for us to imagine that the way things are now is not going to be the way things are going to be. And I imagine for my children, for them the idea that at some point people were like, “Well African Americans aren’t allowed to be in this restaurant with people.” They’d be like, “What? Huh? What do you mean?”

Garvin: Yeah, I’m glad that blows their mind.

Annie: And it’s not so long ago that that was obviously under debate. And it’s not so long ago that I think that if you’d asked people if it would ever be different, they would have said, “No. We’ll never be able to get people to come together. This is too much part of the culture war, that we’ll never be able to overcome that and get everybody to sort of think that that was very strange.” Right? So I mean, I know that I think about the fact that it wasn’t so long ago that Alan Turing was being chemically castrated because he was gay. And it’s like, now you look at that and go, “What? Oh gosh that’s quite barbaric.”

Garvin: Point taken. So I’m too pessimistic?

Annie: Maybe I’m just an optimist or something. But I just sort of feel like you don’t know when that tipping point is going to happen. You don’t know when, even with how we’re so divided into these factions now, people are going to get tired of it and be like, “Enough. This is horrible. Actually this is awful.” And they’re going to look back on this time when we’re all at each other’s throats and be like, “That was so weird. I can’t believe people did that.” So I don’t know. I don’t know.

Garvin: I hope that you’re right. But so far, history surely shows that that’s not that likely. Unless we can –

Annie: I think Steven Pinker might disagree with you a little bit. I do like Steven Pinker’s example. Which is, he says in medieval times there was a game called “Burn the Cat”. And people would literally just light cats on fire for fun. And then at some point people were like, “That’s really cruel. We shouldn’t burn cats anymore.” He says it’s because at some point we developed this theory of mind for cats. We could imagine cats feeling pain. And cats kind of became more a part of our tribe, right? And so I think about that and I think there’s all sorts of ways where that has been true, where people who we sort of felt were less than human or we were dehumanizing, kind of have been brought… We just sort of at some point were like, “Oh no. That’s crazy. They’re a part of our tribe too.” Or whatever, you know what I mean?

Garvin: Yeah.

Annie: We used to burn cats. That’s sort of how I feel about it. We used to burn cats. So maybe we’ll stop burning cats.

Garvin: So I hope so too. But I just can’t resist being a devil’s advocate here –

Annie: So let me be clear, you’re not playing devil’s advocate because I’m saying I kind of hope this is true. I have no opinion that this is the way it’s going to go, by the way. I’m just like, “Oh maybe we’ll stop burning cats.”

Garvin: Yeah, and maybe we will. What’s interesting about that is that what you term “reconnaissance of the future,” in my biz we call “forward scenario planning,” and we use it to think about macroeconomic trajectories and therefore, where to place our investments. So you want to combine that with your idea of the premortem. And together, reconnaissance of the future and the premortem should cover roughly 100% of the array of possibilities.

Annie: Right.

Garvin: And therefore, you want to be as realistic as you can with both of them. But it sounds like what you’re suggesting is that if we modify our best prediction, our reconnaissance of the future, a little bit in the optimistic direction – to stop burning cats – that could itself have the effect of changing outcomes. Like what if we want something to be the case and so we push in that direction? So maybe the premortem gets a little less weight in our math.

Annie: So the way I would think about it is that I think it depends on what your frame is, right? Are you naturally viewing things through a negative frame or a positive frame? What we need to be aware of is the way that we view the present, whether we generally think that what’s going on in the present is negative or whether we generally think that it’s positive. I mean, we need to understand that if we do any kind of forecasting – in other words, we’re not working backwards, we’re working forwards – there’s going to be a very strong influence of status quo bias. And whatever frame we have in the moment as we forecast is going to be carried out into our forecast. So if we generally have a negative view of what’s going on now as we forecast, that’s going to be carried forward, and if we generally have a positive view of what’s going on now as we forecast, that’s also going to be carried forward. So what I would say is to try to be aware of whether you are generally thinking negatively or positively about the present, so that you can understand that that’s going to be carried forward into your forecast, and that you want to be adjusting in the opposite direction of whatever your frame is. It’s going to have a very strong influence. So I think that there is a certain amount of self-awareness that you need to bring into that puzzle, because what we’re really looking for is to counter whatever your natural bias is.

Garvin: You know this is making me think of Alexandria Ocasio-Cortez’s video that she dropped recently about the Green New Deal. And it’s really… It’s a piece of reconnaissance of the future. I don’t know if you’ve seen it, but she’s imagining herself 20 years hence as like a senior stateswoman. And she asks herself, “Wow how did we get to this great place where we’ve got this Green New Deal implemented?” And it’s very much a piece of future reconnaissance or forward scenario planning. Is her dropping that bit of optimistic future reconnaissance on the population in some way going to have a Heisenbergian effect of changing the outcome? And therefore, is it worth doing? Or is she just indulging in future hindsight bias?

Annie: So I think that, AOC aside because I haven’t seen the video, one of the interesting things about doing a premortem is… So, say I have a goal. I want to be healthier in six months in some way. Say I want to lower my cholesterol or whatever, and I imagine six months from now that I’ve failed. That does actually have an effect on the outcome. Because by going through that, imagining a failure and trying to figure out how I failed, I now identify what the obstacles are, and I can now think about a few things. I can think about, first of all, “How can I remove obstacles?” in a way that I otherwise might not have. I could find some points of friction that I need to reduce. I could find some places where bad luck might intervene. And there I could think about two things. One is, are there actions that I could take that could reduce the probability of that bad luck occurring? And if the answer to that is no, I can say, well, given that I can’t, I’ve at least identified that that bad luck might occur. So let me try to think about two more things. One, is there a hedge available to me? So I could hedge. Or two, let me at least have a plan in place, so that if that bad luck thing happens, I know what I’m going to do in advance, so that my amygdala isn’t in charge at that moment. So by doing all of that, I am actually changing the chances of a bad outcome occurring. That’s kind of the whole idea of it.

Garvin: The whole idea of a Ulysses contract. It’s short-circuiting the wrong decision.

Annie: It is, but I think that the whole thing is that yes, the act of observing it does actually change it. So I like the way that you put that. It’s very sort of Heisenbergian. I never really thought about it in that way but it’s actually a good frame.

Garvin: That came to mind because of the way you were describing the interaction between the forward scenario casting and the premortem. And it’s like we’re making our best probability assessment of what to do going forward. So in my industry, I think about what stocks do I want to buy and what sizing do I want to put on each one? But it isn’t just always about that. It’s also about my interaction with the world. By telling the world that investments better be innovation facing because that’s the trajectory of the global economy… Well, in some small way does that have the effect of accelerating investment into that space, and therefore helping to make that future more likely, however trivially, more likely to be realized? So I appreciate your answer.

Annie: Not intentional on my part, but I love helping activate, and having the opportunity to have a conversation go in an unexpected place. Clearly you think macro a lot right? So generally when I’m being interviewed people are talking a lot more micro. And so to be asked these questions that have to do with macro, that’s really challenging you know? Somebody did ask me the other day something about macroeconomics. And I said, “Well I tend to kind of go with a friend of mine who is a PhD in economics actually.” He said, “I started with macro. I abandoned it quickly because I realized who the hell knows how all these things are moving together? Micro you wrap your mind around it a lot easier.” So most of the conversations that I have are really micro, so being able to talk macro is uber, super fun and unexpected.

Garvin: Well, but your thesis is so clearly applicable to long-term planning, which straight up requires macro thinking. For an economist, or for a stock picker, the macro is the reconnaissance of the future part. And the micro is the fundamentals of the stock today part. And that’s just like your thesis, and it’s the combination of those two things that is the heart of portfolio construction. And that’s why you have all the finance nerds loving your book.

Annie: What I was trying to do with this book was to say, “Look, here is the case for thinking probabilistically and embracing uncertainty,” and I imagine I’m just giving a different perspective on something that you’ve probably thought about quite a bit. I imagine people aren’t picking my book up cold. What I was trying to get at was a little different: really a structure for how you substantiate this kind of thinking. That’s what I was trying to offer, a way to actually do it.

Garvin: Helping us do the work of trying to be more objective, and not just falling back on the inherited paradigm.

Annie: Right.

Garvin: You know there’s an investment manager we know, Sharon French, who has said that “tradition is not a strategy.” And yet, most everyone in my business practices the inherited wisdom. Like, “Oh you know what, just buy the S&P 500 Index. That’s what everyone does.” Okay, but that assumes that markets are efficient and they always work perfectly all the time. And it involves not caring what an individual company actually even does. Right?

Annie: So can I give you a little bit of a different perspective on that? People don’t think necessarily deeply enough about competing incentives. Because we view the world through our own frame, right? Like we’re built for the inside view, not really understanding how things look from the outside or how it looks to somebody else. So we think about our own frame, that for what we want to do, tradition isn’t a strategy. But it depends, because there could be competing incentives. So let me just offer you how tradition can be a strategy. If you think about a decision type, and you think about it as status quo, we can have a status quo decision or you could have an innovative decision, something new. Something that doesn’t have a lot of consensus around it. So we can call it consensus and non-consensus, and then we can think about outcome quality, good and bad. And here’s what happens, this is what’s really interesting. If you have a good outcome from a consensus decision, people are like, “Good job!” If you have a bad outcome from a consensus decision, people are like, “What could you do?” If you have a good outcome from a non-consensus decision, it’s like genius. So you’re the genius of the earth. And if you have a bad outcome from a non-consensus decision, you’re fired.

Garvin: You’re a wingnut.

Annie: You’re an idiot. You’re whatever. So now think about how we have misaligned incentives, particularly when you start to think about prospect theory and the way that we as individuals process losses compared to gains. So we know that we aren’t going to approach this idea of being in the genius box equally to the way that we’re going to think in the future about being in the idiot box. Right? Those are not equal, and we know that idiot is much more painful than genius is rewarding, right? So in general, we know that we’re trying to avoid that outcome. And then we can add onto that, that if we are going to have a bad outcome, we would like to be protected by luck. We would like luck to be a protection for us.

Garvin: We would love that, yes.

Annie: Right. So how is that going to substantiate? On avoiding bad outcomes in general, we view not deciding as kind of preferable, because then we didn’t do anything. So you can do some of that stuff just to sort of stay out of the bad outcome box. But the other thing that you can do is make sure that luck is on your side, and there’s only one way to make sure that luck is on your side when you have a bad outcome, and that is tradition as a strategy. In that scenario, all of a sudden, tradition is a strategy. And for some people it’s the correct strategy. I saw a great talk by Toby Moskowitz who pointed out that we look at Bill Belichick like the paragon of, “he does all this crazy stuff that nobody understands.” But when he was with the Browns, he did not do that. And in fact, you don’t start to see that really out-of-the-box behavior from him until he’s won two Super Bowls. At which point, he doesn’t have the risk of losing his job.

Garvin: He had the cred to drop tradition and be a contrarian.

Annie: Right, so what I will say is that actually tradition is a strategy. And when you’re thinking about what is happening on your team, leadership needs to ask, “Is your behavior influencing your team in such a way that tradition is a rational strategy for them?”

Garvin: So I hear what you’re saying and obviously tradition can be a strategy, but to extend the football metaphor a little bit, it feels a lot like playing prevent defense. Like you’re playing to not lose, as opposed to dominating.

Annie: Of course it is, I agree. So that’s why, as an enterprise, you don’t want that to happen right? So that’s why you need to think about “how am I making it so that this is a rational strategy for the people on my team?” And it’s back to the empathy gap between present you and future you, and how your behaviors in the moment are not necessarily aligning with what’s best for you in the future. This is a good way for you to see that right? Because we know that innovation is going to be really good in the long run, but you’re taking a whole lot of risk in the short run, you’re taking this risk of being in the idiot box in the short run. Because obviously it might not work out. And when it doesn’t work out, it’s incredibly painful.

Garvin: So what you want, and this is hard to find, is someone with a very clear line of sight that the inherited wisdom is wrong and that their contrarianism is going to work, right? A very high degree of conviction.

Annie: Yes, and you need to have an environment that allows them to try. So for example, startups tend to be those environments just by nature right? Because people understand that it’s mostly going to be failure and so it’s a little bit baked in and so that’s why very often you’ll see more innovation out of them than companies that are established. And I think that it partly has to do with this incentive issue.

Garvin: Yeah. Okay, I think you just described my whole career!

Annie: Well, it’s just when you said, “tradition is not a strategy,” I’m like “oh but the problem is it is! It’s a total strategy!” So I couldn’t let it go. I couldn’t let it go, sorry.


Garvin: But you just illuminated me because it is a strategy and, as you say, that’s the problem. Asset managers desire nothing more than to have their returns mimic the benchmark so they don’t go in the idiot box.

Annie: Right, exactly. Think about it. We know there’s all this data that shows that funds, when someone has a really good year, it’s like, “We’re going to reinvest. We’re going to reinvest in you because you had a good year.” And what is that telling people? Avoid drawdowns. “You better not have a drawdown because you’re going to get fired and we’re taking money away from you all of a sudden. We’re going to divest from what you’re doing.” So this becomes – the incentives are strong to play prevent. So then what you have is all these people who are like, “We’re results oriented.” And I’m like, “Oh, tradition is a strategy for your people. Okay, good to know.”

Garvin: And then the stories everyone in the industry tells are about, “well, the time to be greedy and buy everything is when everybody else is panicked.” That is extremely true, but few actually do it.

Annie: Right, no one actually does it. Exactly.

Garvin: Yeah, prevent. Okay, thanks for that clarity. So, I want to close on a different note by asking: given what you write about chess, do you ever play chess or enjoy it? Or is it just too imperfect as a decision-making model?

Annie: So I think that it’s very imperfect as a decision-making model for most of the decisions we make, but that doesn’t mean that it’s not complex. A lot of what makes problems enjoyable is the complexity. So I don’t enjoy tic-tac-toe, which at its core has a lot of the same characteristics that chess does, because it’s too simple. But chess is incredibly complex, and so therefore I find it very enjoyable. I happen to be quite poor at it. But I can imagine, because it’s so complex, that it will reveal itself to you more as you play it. And I think that once anything in life has that quality, where as you become better at it more of what it is is revealed to you rather than less, it’s amazing. In other words, as I become better, there’s objectively less for me to know, but my view of what there is to know expands. So what I think chess is now is going to be much less than what I’ll think chess is as I become better at the game. And once a game has those qualities, luck or no luck, information or no information, it has the right amount of complexity to make it an incredibly interesting activity to engage in.

Garvin: I love that answer, because what I was doing was ending on something lighter, compared to some of the depth that we’ve been into. But of course, the Annie Duke response is nuanced and complicated and I love it. And the other thing you did was criticize yourself for not being good at it, which, if I can editorialize: yeah, being world champion at one game is plenty. You don’t need chess.

Annie: I know. But yeah…

Garvin: You’ve been kind and generous with your time, and I really appreciate it.

Annie: Thanks, the conversation was super fun.


Important Disclosures https://greenalphaadvisors.com/about-us/legal-disclaimers/