Transcript: Conversation between Annie Duke and The Value Perspective

21/06/2019

Q: We very much enjoyed your book Thinking in Bets and were particularly interested by your thoughts on the benefits of thinking probabilistically. For someone yet to read the book, how would you sum up the way that thinking in terms of probabilities can help people’s decision making?

 

A: That is a very big question. Here is why thinking probabilistically is so helpful to decision-making – in my opinion. Really, at its base, every single decision that you make is a prediction or a forecast. Or you can think about it as a bet, where you are investing limited resources – you don’t have unlimited time, you don’t have unlimited money and you can’t exercise every single option that is available to you.

 

So there is an opportunity cost involved in any choice you make. And when you make that choice, it is not deterministic in nature in the sense that, if you make a decision, it is incredibly rare that the outcome is determined – in other words, that only one outcome is available to you. In general, for pretty much any decision that you make, there is a set of possible outcomes that are available to you.

 

So as you make a decision, that decision only defines what happens in the sense that it defines the set of possibilities, and how likely each of those possibilities is to occur. And, as we’re thinking about different options, that is kind of what we are thinking about – it is, Option A creates this set of possibilities, option B creates this set of possibilities …

 

So, in that sense, thinking probabilistically is just accurate to the nature of the world. The more that we can think accurately about what the nature of the world is, the better off we are. The more that we can recognise that, when we make a decision, the outcome is always probabilistic, the better off we are.

 

Q: Naturally enough, you refer extensively to your experiences at the poker table. The system investors operate in is, however, likely to be more complex – to have more nodes of interaction, with more ‘unknowns’ than a six-person poker game. How can probabilistic thinking best be applied in a more complex environment?

 

A: It's interesting. Obviously, it is a simpler problem when, for example, it's two people versus 10 versus a larger and more complex market. But those, to me, are issues of how many people are interacting with each other and not so much an issue of what the big problem is. And the big problem is that, when we make a decision, there is unknown information and you're trying to get better at understanding what the unknowns are. And I'll get to that in a second.

 

And then also, you're trying to figure out as best as you can, what the probabilities are in terms of what the set of outcomes are. So, you know, I get told quite often when I'm talking to people, particularly people who are investing in more long-cycle types of decision, ‘Well, you know, I’m in a really hard situation, competitively, because I am dealing in a world where the probabilities are unknown’.

 

And my answer to that is, ‘No, you're not’, in the sense that, yes, I agree that you can't say this is 54%. Like with precision … it is very rare that you are going to know the exact probability. And that's also true, by the way, in poker, because in poker, obviously, the cards are face down, and at the end of almost 90% of the hands you play, the cards actually don't get turned face-up for you.

 

So while the probabilities are theoretically known – as they are when you are investing, they are theoretically known – from a practical standpoint, you generally don't find out the answer and so you can't actually get them to precision. But that being said, that's not really what your job is at a poker table or in investing. What you are trying to do is get better at narrowing down the range of possibilities.

 

So we're almost never confronted with a situation where the judgment of probability is literally … I don't know, it could literally be somewhere between zero percent and 100%. Now, the more complicated the problem that you are entering into – the more players there are in it – the harder it is probably going to be to narrow down that range.

 

So if I'm playing against one person, it might be easier for me to figure out how my hand compares against theirs – as opposed to if I am playing against six people – unless I'm in a known situation like I happen to have aces so I know it's the best hand, but that's rare.

 

So what I'm trying to do instead is get down to a range. So I'm trying to think – I know it's not anywhere between zero and 100% – let me see how much I can get this down. So I start thinking about situations like mine; I'm looking for reference classes that will help me along – if I know how often something has happened in a similar situation, that can help me along.

 

I'm looking for base rates. So I'm looking for information, I'm looking for feedback from other people, I'm looking for the things that I particularly know about the position that I'm in. So I'm trying to merge inside-view with outside-view information, to try to get some sense of what the probabilities are. And if I can go from, ‘I had no idea' or ‘I wasn't thinking about it' to ‘Well, I think this is probably somewhere between 30% and 60%' – or, in a different way, I could say ‘My confidence is that I think I am going to win 70% of the time' … so two and a half to one in my favour …
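As a rough illustration of the arithmetic here – not something laid out in the conversation itself – the minimal Python sketch below shows how an estimate can be carried as a range rather than a point, and how a win probability converts into odds. The 30–60% range and the 70% figure come from the passage above; everything else is assumed for illustration.

```python
# Minimal sketch: carry an estimate as a range, and convert a win probability
# into odds. Illustrative only.

def prob_to_odds(p: float) -> float:
    """Odds in your favour implied by a win probability p (0 < p < 1)."""
    return p / (1 - p)

# "Somewhere between 30% and 60%" is already far more useful than "no idea".
estimate_range = (0.30, 0.60)

# A 70% win estimate implies odds of about 2.3 to 1 in your favour
# (roughly the "two and a half to one" ballpark quoted above).
print(f"{prob_to_odds(0.70):.2f} to 1")  # 2.33 to 1
```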

 

Obviously there is a lot of uncertainty in there … you are not anywhere close to 100% on that. But the fact that you are really making that commitment to try to get as close as possible helps you with the biggest problem that we have, which is not so much that the probabilities are unknown – because they are somewhat known – but the fact that the incomplete information that we're working with is actually the driver of most of our mistakes.

 

And we can think about incomplete information in two categories: there are things that you don't know. So that would be on the information asymmetry side. But then also, within the things that you do know, your own knowledge of how good that knowledge is – in other words, your own knowledge of how biased it might be, for example, or how confident you should be in the knowledge that you have – is actually incomplete as well.

 

So we want to be driving ourselves to do really good internal auditing of our own knowledge, but also to figure out a way to develop a lot of hunger for, and be more efficient at, extracting the things that we don't know.

 

One of the drivers of that is this total unwillingness to accept that a probability is unknown. And as soon as you don't accept that a probability is unknown, that puts the quest for better knowledge at the forefront, because it is the better knowledge that is going to allow you to start narrowing down those probabilities.

 

But, to do that, you have to be comfortable with the fact that – and this is true in poker as well – you are almost never going to have a precise probability; it's still going to be a range. And sometimes what's interesting about the range is that it is actually going to be polarised. You can get into situations where you're looking at what the information is telling you and your conclusion is: I'm either a really, really big favourite to win, or a really, really big favourite to lose, and nothing in between.

 

So those kinds of situations can come up as well. You can come up with situations, obviously, where there is heavy skew – so one of the things that you might be doing is trying to figure out what the shape of the distribution is as well as you can. But all of that desire to try to sort of uncover what most people are perfectly happy to leave hidden is what actually allows the more important piece in my view – which is this piece about, what do you know? – to really start to get better.

 

Q: To assess the probabilities around our assumptions, we integrate base rates into our investment approach. What tools do you see as most useful for applying probabilistic thinking in investing – particularly in an environment where the probabilities are unknown and will not reveal themselves after the fact?

 

A: There are a few things that can really help you in terms of figuring out how to thread this needle. So base rates are a wonderful thing to go find. And sometimes it is important to recognise that you want to actually be looking at different base rates. So, very often, the problem you are tackling is not exactly the same problem as … there is not necessarily a ton of data on it.

 

I'm not sure what kind of investments you do, but usually there is some uniqueness to whatever choice you're making. So try to figure out … not thinking about it as one reference class, but trying to really think out of the box and ask, ‘What other reference classes could I go and look at?' ‘What other things have similarities to this problem?'

 

And I would really strongly suggest, in terms of being able to discipline that view, thinking not just within investing – there are lots and lots of disciplines that you can look at that have similar problems. So we know that, for example, there are similarities in biology to certain things that happen in markets. So thinking outside of the box, and thinking broadly about what the reference classes are and what different frameworks can tell you about a way to think about a problem, can be incredibly helpful.

 

So what you can do then, as you're thinking about what the base rate is, is – number one – not get locked into thinking that you have more precision than you do; and – number two – look for other types of reference classes that might inform and better refine the base rate that you're trying to discover.

 

So that's kind of number one. Number two is … part of coming to a really good conclusion is to try to think about the way that the problem looks from outside of your own brain. Let me just step back and say, what is a base rate really good for? Obviously a base rate is really good for giving you a more precise starting point for understanding how often something will happen in general.

 

But what we know is that that's how often things will happen in general, and we know that there is variance there – so 50% of people get divorced. But that means that some people don't get divorced. And some people do. And when you look across the population, the average rate is 50%.

 

But that doesn't mean that if I pull one person out of the crowd, that I kind of know what's going to happen to them in particular. I just know that on average, they're going to get divorced half the time. So the question is, how can we figure out what is going on with the couple that we're dealing with? That's what we are trying to figure out.

 

So what we can broadly think about the base rate as doing – and this would be true if you are looking across other disciplines as well, or other frameworks for thinking about information – is getting you to focus on the outside view.

 

So we can think about the outside view as stuff that doesn't have to do with our particular viewpoint of the world. So when we come into a situation – like we just got married, or we're making an investment – we have very particular knowledge of the situation that we're in that is special to us. And very often we overvalue the knowledge that we have that is special to us.

 

Here is a simple example of overvaluing the knowledge that is special to us. There was an analyst who announced right at the beginning of 2019 that there was a 95% chance that the market would be up at the end of 2019. That's someone who overvalues their special knowledge, because we know that is way too far away from the base rate. The base rate is something like 65% or 70%, as you guys know, and so there is no special knowledge that you can have that should pull you that far off the base rate.
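One simple way to let a base rate ‘discipline' an estimate like that is to shrink the inside-view number toward the outside-view base rate. The sketch below is this editor's illustration, not a method Duke prescribes; the 0.25 weight and the 68% base rate are purely illustrative assumptions.

```python
# Hedged sketch: shrink an inside-view estimate toward a base rate.
# The weight on the inside view is an illustrative assumption.

def discipline_with_base_rate(inside_view: float, base_rate: float,
                              inside_weight: float = 0.25) -> float:
    """Blend an inside-view probability with an outside-view base rate."""
    return inside_weight * inside_view + (1 - inside_weight) * base_rate

# The analyst's 95% call, pulled back toward a ~68% base rate for an up year:
print(discipline_with_base_rate(inside_view=0.95, base_rate=0.68))  # about 0.75
```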

 

So the base rate is kind of acting as a way to discipline and make sure that you're fitting in the outside view. There are other things that can do that as well, though – for example, one of the great things to do is to find out how somebody who is not me, looking from the outside, would view my situation. That's another way to bring the outside view in, and that can actually help you understand where your assessments of the probabilities might go wrong. So it's important to think about how you can create systems that allow people to see your view from the outside.

 

So there are different ways you can do that. One of the ways, for example, is to make sure that, when you are listening to feedback on a particular investment or trade that you're thinking about, you're listening to that feedback in a way that allows somebody who has a different opinion, or who might see something different than you, to actually express it.

 

So that has to do with quarantining your own opinion from them; making sure, when you are asking for advice, that you are operationalising the way that it is asked; making sure you are not allowing them to see what your conclusions are; and, if you are getting feedback after the fact, not allowing them to see how it actually turned out.

 

There is a variety of different things that you can do, but what you want to make sure of is that you are allowing in the process for cognitive dispersion. Because a base rate is great, but then what you want to do is get some sort of idea of what Person A, Person B and Person C each think the probability of this is, so that you can start to get some sort of interpolation among what those opinions look like – which helps you to make sure, again, that the outside view is coming in.
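A minimal sketch of that aggregation step, under assumed numbers – the three estimates below are hypothetical, and the summary statistics are just one way of looking at the central tendency and dispersion of the opinions:

```python
# Illustrative only: collect several people's probability estimates for the
# same event and look at both the central tendency and the dispersion.

from statistics import mean, median, pstdev

estimates = {"Person A": 0.40, "Person B": 0.55, "Person C": 0.35}  # hypothetical

values = list(estimates.values())
print("mean:", round(mean(values), 3))      # 0.433
print("median:", round(median(values), 3))  # 0.4
print("spread:", round(pstdev(values), 3))  # ~0.085 - a wide spread signals real disagreement worth exploring
```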

 

Then what you want to do is marry that with the inside view. So you don't want to necessarily go on base rates alone, because you do actually have some knowledge that may be particular to the situation or particular to things that you see. And the intersection of the inside view and the outside view is where accuracy lives.

 

So you may know something that allows you to understand – is it going to be a little bit more frequent than what the base rate tells you, or is it going to be a little bit less frequent than what the base rate tells you? So we can think about the base rate as the disciplining piece that disciplines the way that you are applying your own personal knowledge to the situation.

 

Q: Given how value investors strive to take the emotional element out of investing, another idea that fascinated us in your book was ‘tilt’. Please could you explain the concept and offer some advice to disciplined value investors for those inevitable periods they go through where the outcome is adverse?

 

A: Let me try to attack this in two ways. Way number one is, what can you do as an investor? And way number two is essentially what can the group or the enterprise do to make sure that people are less likely to be on tilt? And I think those are both incredibly important.

 

So we are incredibly path-dependent in the way that we view the world. So one thing I really want to point out is that tilt doesn't just happen when you're in a period of tough performance. First of all, tilt can happen when you are in a period of really good performance – it's called ‘winner's tilt', actually. So let's think about tilt as whenever your emotions are causing your risk attitudes to be distorted. Let's sort of broadly define it as that.

 

So sometimes when you're in a period of really good performance, that can actually cause you to go on tilt as well. And generally, the way that the upside tilt happens is that, when you are in the middle of upside, very often the reaction is to try to clamp down on volatility. So, for example, if you have a winning position in a trade and you are winning a lot, it is much more likely that you will exit the trade in order to lock in the win, even if it were a trade that you would put back on tomorrow.

 

So, when you are within the winning period, you will actually try to sort of bring your volatility down to zero, because you actually want to lock the win in – you want to make sure that you are getting to lock that feeling of the win in, and oftentimes you are sort of imagining, ‘I'm winning, but what if I lost it back? I would be very sad' – and so you are trying to stop that from happening.

 

And so you will tend to lean toward closing the position out even though, if I were to ask you in a rational way whether you would put that position back on tomorrow, you would. So that is the winner's tilt side. And then the other thing that happens is that, once you have strung a lot of wins together, as you are now making a brand new decision, you will very often overestimate your expected value, which obviously causes you to put positions on that are too big, and to be too confident in your decisions.

 

So those are the two ways that winner's tilt can express itself, but the tilt that most people think about is when you are, obviously, on a bad run. That will express itself as follows: when you are on a bad run and you're in the middle of the bad run, you will very often be volatility-seeking, because what you are trying to do is get out of the loss.

 

So what that means, obviously, is that if you have a position that is losing, you are very unlikely to get off of it – even if, were you making a brand new trading decision tomorrow, you would not put that position on. Sometimes you'll press; sometimes you'll put on other positions – if the position you are holding right then doesn't have enough volatility in it to get you back to zero, you'll actually go and seek out other types of positions that have enough volatility to balance out the position that you have.

 

And so those are all just desperate attempts to get back to zero. And then, once you've gone through a really bad period, exited it and are now coming to make a new decision, very often you will be risk-avoidant. So you'll be making decisions that are just lower-volatility, so that you don't get into a big loss situation once you have closed everything out. So that's the way tilt expresses itself on the downside.

 

So what is interesting about that, of course, is that if you were completely rational, you would view what you're doing as an investor as one long game. So what was happening in the moment or on any given day, or in any given week, wouldn't matter very much to you because it would just be part of the normal upticks and downticks that are getting you to, hopefully, some sort of upward march.

 

But that's not really the way that our brains work. Our brains are incredibly path-dependent, and they get really lit up and caught up in what has happened recently. So much so that, actually – and this is what is interesting – let's say I have a position on and the position has doubled, and now I lose half of that back: I'll actually be acting like I'm on tilt on the losing side, even though the position overall is winning.

 

It doesn't matter, because it is not at its peak – it's dropped from its peak. So that's incredibly irrational, right? Because the position is actually winning. But I'm not viewing myself as a winner in that moment, because I've come significantly off of the peak of where it was. Likewise, if I have had a position that's losing but now it's come back up, the way that my brain is going to look at it is as if I am winning.

 

So, in one case, you're processing a position that you're actually winning to and your brain’s acting like you’re losing. And in the other case, you're processing a position that you're actually losing to as if you're winning. That's how path-dependent we are.

 

So the question is, how do we deal with that? As individuals, what I want to think about is, first of all, how can I understand what the cues are for myself that tell me I'm probably in this very ‘tilty' state? The first thing is that there are physiological signs, and you should really know your body so you can see what those physiological signs are – your cheeks flush, your palms are sweating, you're feeling very anxious and sort of ‘itchy' to be getting on or off positions, or your heart rate is up.

 

There is a whole bunch of physiological things that happen that you can notice. But then the other thing is that there are certain things that you say, like, ‘I can't believe this is happening to me’, ‘I got so unlucky’ – whatever it might be, you can figure out what those things are that you say under those circumstances.

 

And then there are certain actions that you feel like you should be taking as well. So if you know that a pattern for yourself is that, when you have a big loss, you tend to put on more positions that have higher vol; or if you know that, when you have a big win, you tend to close those positions out – those would be behaviours you want to take that you know, from the past, are signs that you may not be thinking rationally.

 

So it is the job of any individual or any group to work out, what are those things that we're saying? What are those actions that we want to take that are signs that, possibly, there is tilt going on? It doesn't mean that there is tilt going on. But it means that there's possibly tilt going on.

 

Write those down. Literally make a list of those things and hold yourself and the people around you accountable to that list so that when people feel themselves trying to make those decisions, or they feel themselves having those thoughts, there’s a trigger in place that says, ‘Hey, you need to take a breath and you need to take a second because this is on that list of maybe you are on tilt’.

 

And once you do that, you can actually start to have processes in place for yourself or for the group to reduce the chances that tilt is going to influence decision-making. And most of those have to do with just getting out of your emotional brain. So you can basically just say to yourself, ‘If it's a year from now, and I were to have made this decision, how do I think I would feel about it?

 

‘If I had no position right now at all, and I were considering this trade tomorrow – as a brand new trade, it's a completely different position – would I put the position on or would I take the position off?’ And it’s very good if you are doing that with other people who are holding you accountable to that. If you just say, ‘It's a year from now and I'm looking at my behaviour at this moment – how do I feel about it? Do I think that I was behaving rationally? Do I think I was making my best decision?’

 

You could work through that with other people who are holding you accountable to that kind of thinking. Go through and think about it – as much as you can, have them help you think about it as a new trade or a new decision. And what all of that does, basically, is make you more likely to recognise when you're in that situation and feeling that way.

 

And then it gets you to start thinking about it more rationally, because the thing about this kind of ‘time travelling’ … imagine it's tomorrow, and you are considering this as a new trade. Imagine it is a year from now, and you're looking back on your behaviour in this moment, and you're trying to decide, are you proud of it? How do you feel about it? Are you happy about it? Are you sad about it?

 

This kind of ‘getting yourself out of the moment' – which is where that path dependence is occurring – and pushing yourself into the future makes you recruit the more rational part of your brain, because you can't think about the future without getting into the more rational part of your brain, and that naturally causes the limbic system, the more emotional part of your brain, to start to calm down.

 

And then the other thing that happens – particularly when you're recruiting a whole group in this behaviour – is it starts to change what the reward system is. So part of the reason why we’re trying to close out those winning positions, part of the reason why we're trying to seek vol when we are in those losing positions, is because we feel bad – if I'm winning, I want to get the win. It's good for the way that I feel about myself, if I get the narrative hit. If I'm losing, I don't want to take the loss there.

 

But when the group gets involved in having the focus beyond … you're different than other people. When you feel this way, you go through this process and you're willing to make rational decisions. You're willing to realise when you have a losing position and you shouldn’t be keeping it on. You're willing to realise and keep a winning position on even though you know it might go down and it might be hard. You don't react the same way that other people do.

 

And what happens is that the process itself – that willingness to sort of like, ‘I spotted when I was on tilt’ – that becomes the reward. ‘I did something about it. I thought about how I would feel in a year and I realised. I thought about what if I put the trade on tomorrow and I realised something and then I did something different than other people do’. That starts to override and become the reward – more so than that reward-seeking that is driving the tilt in the first place.

 

Q: We operate in an arena that is very focused on outcomes – in effect, good or bad investment performance. How would you propose resisting the temptation – prevalent among investors – of learning the wrong lessons from small samples based on the outcome (in essence the nature of ‘resulting’)?

 

A: There are a lot of places to go with this, so I will just go to one specific place, because there are so many places to go with this kind of outcome-driven performance. Ideally, what we'd like is for what we're really focusing on to be decision quality. So how do we focus on decision quality? How good were you, in the long run, at figuring out, given the options that you had, which option was going to create the best outcome for you?

 

So that doesn't necessarily mean, obviously, that you pick an option that causes you to win the largest percentage of the time, because we know that there might be something that you would do where you will only win 10% of the time, but you're getting 20 to one on the bet. And so you're willing to do it, because you're making a lot of money.
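For readers who want the arithmetic spelled out, here is a small worked sketch of that expected-value point; the one-unit stake is an assumption made purely for illustration.

```python
# Worked sketch of the EV point above: a bet you only win 10% of the time
# can still be attractive at 20-to-1. A stake of one unit is assumed.

def expected_value(p_win: float, payout_odds: float, stake: float = 1.0) -> float:
    """Average profit per bet: win payout_odds * stake with probability p_win, lose the stake otherwise."""
    return p_win * payout_odds * stake - (1 - p_win) * stake

print(expected_value(p_win=0.10, payout_odds=20))  # about +1.1 units per unit risked
```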

 

So you're just looking at, given what your risk tolerance is, which option is going to have the highest EV. Let's just broadly call that whichever option is going to get you the best outcome – however you define the best outcome. So what we would really like to do, obviously, is be thinking process – process, process, process.

 

This is what we care about, and we know that, if we get our focus off outcomes and get our focus onto process, what's going to happen is that, over the long run, the outcomes are going to come. And what's interesting is that, if you focus on the outcomes themselves, it's actually incredibly counterproductive.

 

And it's something that I've actually been covering lately. I’m in the middle of my next book and so this is a big part of my next book. It’s called the ‘paradox of experience’ and it is that we need experience in order to learn but any given experience that we might have will interfere with learning because we tend to overrate it.

 

So if we actually want to do well, we want to get our focus off of outcomes – certainly short-term outcomes and individual outcomes – and start to focus on process and decision quality. But we're pretty bad at doing that as human beings. So my suggestion, broadly, is to change what the outcome is that you're pegging on.

 

So it's very hard for human beings not to focus on outcomes and instead say, ‘Oh, yeah, but that was in the cloud of possible outcomes and I'm not worried about it'. You can train yourself to do that more, but it's very hard to avoid focusing on outcomes entirely. So how can we shift what we consider an outcome to be?

 

So what very often happens is that I'll talk to groups and they'll say to me, ‘We’re really process-driven around here but we’re really having trouble’. So I actually had this conversation with a group once – ‘We're really process-driven around here, but I'm having trouble because the people who are my traders, the people who are actually making the trades, seem to be very outcome-oriented.

 

‘So I'm telling them all the time – I don't care about outcomes, I don't care about outcomes, I don't care about outcomes – but they're acting like they care about outcomes. And I don't understand what's going on because I'm saying the right thing’. And I discovered pretty quickly what the problem is and I think that this is generally a problem for most organisations.

 

It is that it doesn't really matter how much process language you use – I mean, it matters some because it gets people thinking in the right space, but it doesn't really matter how much process language you use if you aren't behaving as if it's the process that matters to you, in the following way.

 

Generally, what happens is that, when people have a kind of ‘all hands on deck' meeting or after-action reviews or post mortems, they're usually triggered by a downside event. So there's some unexpected loss that occurs, and now that triggers a meeting where people go in and explore why you made the trade – was it a good trade? Was the strategy good? Is it a strategy that you need to tweak? Is your model right? And so on and so forth.

 

But what’s interesting is, when the strategy overperforms or a trade overperforms, the same meeting doesn't occur. People aren't in the room saying, ‘What's going on? Did we have our strategy wrong? Is our model wrong? What's going on?’ So there's this interesting asymmetry in terms of the way that we naturally treat outcomes.

 

When something is unexpectedly to the bad, we're doing a lot of exploration, and that exploration may be using process language – it may be about our model and our decision process and what our forecast was and so on and so forth. But when we're doing that kind of process exploration, it is triggered by a downside – a bad outcome.

 

When something is unexpectedly good, we aren’t in the room asking the same questions. We're usually patting ourselves on the back that we're really smart and our strategy is great. But the problem, of course, is it's a symmetrical issue. If your model’s off in terms of overestimating what the world is, you're going to be, obviously, over-allocating your resources to that.

 

If your model is underestimating the market – which is what an unexpectedly good result might suggest – then you're going to be under-allocating your resources to that. Or, in either case, there may be risk – you may not have properly assessed what the risk of the position was. Because for something to be far off of what your forecast was, what your prediction was, there may be risks that you didn't see, for example. So you would want to explore that equally on both sides.

 

Now, once we sort of understand that what's getting us in the room is a bad outcome and not a good outcome, what are you telling all of the people who are making decisions around you? ‘Be really afraid of a bad outcome, because that's all we care about around here – because if something happens to the bad, that's when we're all going to be in a room. If something happens to the good, we don't really care.' So now what's happened is you have inadvertently put a bunch of emphasis on downside outcomes.

 

So a couple of things can result from that. Thing number one is that people’s risk attitudes may be disturbed by that in the sense that, when they have a choice between a high-risk or a low-risk choice, they're going to tend toward the low-risk choice, because the low-risk choice obviously allows you to play a mini-max strategy, which minimises the size of the losses that you take. And that means that you’re less likely to be ‘in the room’.

 

But the other thing that they might do is … there's an interesting way out of this problem when something happens to the downside, which is that if you have a lot of consensus around it, or you're making a decision or doing something in ‘the way that we do things around here', that's another way out of that problem of loss. And now you're having a meeting about it, and what you can say is, ‘What could I do? Everybody agreed to it'.

 

This is a way that people end up using data actually, in a bad way, which is, ‘What could I do? This is what the data told me’. So they may be using data as a cover as opposed to a way to find the truth, for example, because that kind of gets you out of this problem of being ‘in the room’.

 

And then there are sort of two other things that we do around that, which I'll just mention really briefly. One is that we only ask the question in one direction. So when we're in the room when we lost, nobody's ever saying, ‘Should we have lost more?' People are always saying, ‘How could we have lost less?' But sometimes we should have lost more – sometimes you explore and you realise your position was actually too small and you should have actually lost more.

 

Likewise, when you win, people are saying – if you are asking that question – how could we do more of this? How could we make it better? But nobody's saying, maybe we should have actually won less – which, in the case where you didn't recognise the risk, where you mis-assessed the risk, very often means that your position was too big, the position actually should have been smaller, and you should have actually won less.

 

Or sometimes – and this happens a lot – you win for a reason that wasn't included, that you weren't expecting to win for … it was a completely orthogonal reason that you won – at which point you maybe shouldn't have had the position on at all.

 

But we are not really asking those kinds of questions that are allowing people to understand – I don't care about the outcome; what I care about is the prediction and the forecast. Notice that when you actually try to get symmetry across these kinds of questions, what you've done is you’ve said, ‘I understand that you are going to peg on outcome – there's nothing you can do about that, you’re a human being – but I want the outcome to be how close the result was to what your prediction was.

 

‘So if the result is somewhere far off of the prediction, that's the outcome that I'm going to care about – not the quality of it in terms of did you win or did you lose? Because I don't want you worrying about win or loss so much, because that's something that's going to come out in the wash, it's going to come out in the long run. What I care about is that you're a good predictor of the future.'
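One concrete way to ‘peg on how close the result was to the prediction' is a proper scoring rule such as the Brier score. Using it here is this editor's choice of illustration, not something specified in the conversation.

```python
# Hedged sketch: score probabilistic calls by how far the outcome landed from
# the forecast (lower is better). The Brier score is one common choice.

def brier_score(forecast_prob: float, outcome: int) -> float:
    """Squared error between a forecast probability and a 0/1 outcome."""
    return (forecast_prob - outcome) ** 2

# A confident 90% call that loses scores far worse than a 60% call that loses.
print(round(brier_score(0.90, 0), 2))  # 0.81
print(round(brier_score(0.60, 0), 2))  # 0.36
```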

 

Q: Our investment process requires levels of patience that can be alien to human nature and certainly involve time periods much longer than a hand of poker – years rather than minutes. To what extent do you think differing timeframes, and the length of time to feedback, change the probabilistic thinking model?

 

A: That's a little bit of a data problem. There's a few things you can do to help yourself along. Obviously, when you have an expanded timeframe, what you're trying to do is figure out, well, how do I get enough data to figure this out?

 

So the first thing that's kind of interesting is that actually having a shorter timeframe can interfere – it goes back to the paradox of experience. So when you're getting a lot of outcomes very quickly, that path dependence that I talked about … how much you can see – have I been winning recently, or have I been losing recently – becomes much more powerful in the way that your brain is processing the world.

 

And so, interestingly enough, getting very tight and fast feedback can actually interfere and become problematic, in a weird way. So my point is just that getting fast feedback, having a really fast, closed feedback loop – when there's noise in the data, which of course there is in any kind of probabilistic activity that you engage in – has its downsides as well. So it's not all roses on the side of ‘I'm getting a lot of feedback'. So I just wanted to say that.

 

But obviously, when you're getting a lot of feedback, if you can sit back and take time to aggregate, there's a lot more information there. And that's clearly helpful. So the question then becomes, how do I deal with this data problem? Thing number one that I always suggest is: try to figure out what interim predictions you could make before you actually get the final answer.

 

So you can make interim predictions, for example, about what the movement of the position might be, and that can help you. So finding as many places as you can where you can make interim predictions to peg against is really helpful.

 

Thing number two is that, obviously, whenever you're putting a position on, there's a whole bunch of positions that you didn't put on, and one of the things we do is tend to ignore those. So we're trying to solve for the fact that we're not putting very many positions on, they're taking a long time to realise and I feel like I don't have a lot of data – how do I start to really refine my model?

 

Given that, what I always suggest is grouping your investments into three categories. Category one is ‘hits' – those are things you actually did. So that's obvious – you're tracking those, clearly. Category number two is ‘near-misses' – what are those positions, decisions, investments, choices that came close, that were under consideration, that you were really thinking about doing?

 

Make sure that you have a ‘shadow book' of those. And those are really important because those are tipping-point decisions, right? When we're thinking about our model and we're thinking about a ‘yes' or a ‘no', anytime it's close, this has to do with the refinement of the model. When is the model tipping me to a ‘yes' versus when is it tipping me to a ‘no'?

 

And now you can create a shadow book where you're following all of those near-misses. I also tell people, for those near-misses, if you can't track it unless you own it, you can put a tiny little position on, because it's really important to collect that data. If it's a near-miss, you may not even be losing with that little position, by the way. It's not clear, but it allows you … you may not have to – it depends on what instrument or what it is that you're investing in. But keep a really good shadow book, and I recommend tracking all of the near-misses because that is very important data to you.

 

But then the other thing is to take a sample of the clear misses. So there is a whole bunch of stuff that you think is a no-brainer – that’s, like, obviously we're not going to invest in this. This is ridiculous. It’s not a winner. I'm not going to invest in this. Now, there's a resource problem in terms of tracking all of those, so you probably can't do that – particularly if you're tracking all of your near-misses. But you can track some sort of sample of them.

 

So if you can take some sample of the things that you thought were no-brainers, it’s incredibly helpful for understanding the world and your data because these are things that you know that the model very strongly predicted weren’t going to win. And, first of all, just confirming that and showing they are not winning is really helpful.

 

But then what's incredibly helpful is that sometimes some of them do really well. And that could be because it's a tail event, it's an outlier or whatever, but sometimes that's going to tell you something really, really important that's going to help your model along. Sometimes you're going to find, ‘Well, my model was right, but then there was a paradigm shift and that's why this did really well, and now I should pay attention to that'.

 

This is where you can start to see those things in advance and start to get ahead of those kinds of changes in market conditions. You can find out where your model maybe has a really big mistake – where there's something that really needs to be changed – by tracking some clear misses. So I just call those ‘clear misses'.

 

So we've got hits, near-misses and clear misses – make sure that you track some of those clear misses. So that addresses the data problem: now you're going to be getting more results more often, because you're tracking a whole bunch of other things.
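A minimal sketch of what such a ‘shadow book' might look like in code – the field names, categories and review step below are all assumptions made for illustration, not a structure Duke specifies:

```python
# Illustrative shadow book: log hits, near-misses and a sample of clear misses
# so that decisions you did NOT take still generate feedback later.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackedIdea:
    name: str
    category: str                 # "hit", "near_miss" or "clear_miss"
    forecast_prob: float          # success probability assigned at decision time
    realised_return: float = 0.0  # filled in later, once the outcome is known

@dataclass
class ShadowBook:
    ideas: List[TrackedIdea] = field(default_factory=list)

    def add(self, idea: TrackedIdea) -> None:
        self.ideas.append(idea)

    def review(self, category: str) -> List[TrackedIdea]:
        """Pull every tracked idea in one category for a symmetric post-mortem."""
        return [i for i in self.ideas if i.category == category]

book = ShadowBook()
book.add(TrackedIdea("Position actually taken", "hit", forecast_prob=0.65))
book.add(TrackedIdea("Close call, passed on", "near_miss", forecast_prob=0.45))
book.add(TrackedIdea("Dismissed as an obvious 'no'", "clear_miss", forecast_prob=0.10))
print(len(book.review("near_miss")))  # 1
```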

 

So the other thing that is actually really nice about that is that another problem with people pegging on outcomes, and being really outcome-driven in the way that they assess performance, is that people start to view the world through things that they did, and not things that they didn't do. But in the world of decision-making in general, and particularly in the world of investing, stuff you didn't do can be really costly if you were supposed to do it. It's huge.

 

So you want to start to shift people away from thinking, ‘If you don't do something, I'm not going to notice it so much'. Think about those situations where, say, you are on tilt and you feel like you just don't want to take on any vol; where, on those tipping-point decisions, you're going to be tipping towards ‘no' all the time in order to just kind of stay out of it; where you're trying to stay out of that room with those bad outcomes, so maybe you're tipping towards ‘no' a lot in order to try to get out of it. If you know that the firm is tracking the ‘no's – the firm is tracking those omissions – then you start to treat them equally. They equalise in their decision-making importance and that, again, starts to get people to focus on the right thing.

 

As opposed to just the quality of the outcomes – did I win or lose? – they start to think more about decision-making quality and what their accuracy is, when you do this as well. So there happens to be this good side-effect of trying to solve this data problem, which is that you get people to understand that an omission is the same as a commission.


Important Information:

The views and opinions displayed are those of Nick Kirrage, Andrew Lyddon, Kevin Murphy, Andrew Williams, Andrew Evans, Simon Adler, Juan Torres Rodriguez, Liam Nunn, Vera German and Roberta Barr, members of the Schroder Global Value Equity Team (the Value Perspective Team), and other independent commentators where stated.

They do not necessarily represent views expressed or reflected in other Schroders' communications, strategies or funds. The Team has expressed its own views and opinions on this website and these may change.

This article is intended to be for information purposes only and it is not intended as promotional material in any respect. Reliance should not be placed on the views and information on the website when taking individual investment and/or strategic decisions. Nothing in this article should be construed as advice. The sectors/securities shown above are for illustrative purposes only and are not to be considered a recommendation to buy/sell.

Past performance is not a guide to future performance and may not be repeated. The value of investments and the income from them may go down as well as up and investors may not get back the amounts originally invested.