Jochen Runde On Uncertainty, Probability, And Heuristics
Jochen Runde is Professor of Economics and Organization at the University of Cambridge.
By Aiden Singh, February 17, 2024
Aiden Singh: You're a scholar of, among other things, decision-making under extreme uncertainty, explanation in the social sciences, and the economics of the Austrian and Keynesian schools of thought. Over your scholarly career you've written about Karl Popper's philosophy of probability, Frank Knight's distinction between risk and uncertainty, Friedrich Hayek's views on social scientific methodology, and Keynes' views on the weight of arguments. But the common thread of investigating uncertainty, how we navigate it in our decision-making, and its consequences for how we study the social sciences runs through all this work. Before we explore your thoughts on these subjects, let's begin by defining some terminology. What is critical realism?
Jochen Runde: Critical realism is a stream in the philosophy of science, especially the social sciences, which became popular in some circles during the 1990s and early 2000s. The central figure in its general development was the late philosopher Roy Bhaskar, and the person who did most to bring his ideas into economics was Tony Lawson of the Faculty of Economics at Cambridge.
The question of what critical realism is, is a big one, because it developed in many directions, both in Bhaskar’s own writings and in those of his followers. But I guess that its core features are an ontological orientation and the idea that the world is layered in the sense of comprising three levels: what is given to us in experience (the “empirical”), the events and states that arise and may be the subject of experience (the “actual”), and an underlying and invisible reality of causal mechanisms that give rise to the actual (sometimes called the “deep”). A key theme in critical realism is that these three realms may be out of phase: causal mechanisms often come together and combine in ways that preclude sharp regularities between events or states of affairs at the level of the actual, and what is experienced at the level of the empirical needn’t bear a one-to-one relation to what is happening at the level of the actual. In the language of critical realism, the world is open, which, if so, means that the kind of empirical regularities or “laws” sought according to standard positivistic accounts of science don’t occur spontaneously (i.e., that, save for a few exceptions like the movements of the planets, they are the products of human intervention in which causal mechanisms are sealed off from countervailing factors). These ideas played a central role in Tony Lawson’s critique of mainstream economics, the argument being that the methods of mathematical and statistical modelling presuppose closed systems and are therefore profoundly misapplied in an open social world.
- - - - - - -
Aiden Singh: In 1942, Friedrich Hayek published Scientism and the Study of Society, a now famous essay discussing his views on social scientific methodology. The interpretation of this essay has since been the subject of some debate. Can you elaborate on your interpretation of Hayek's arguments in this essay and how they relate to critical realism? [1]
Jochen Runde: I think you may be referring to an old paper I published in The Review of Austrian Economics in 2001, in which I attempted to defend Hayek’s Scientism essay against criticisms raised by Tony Lawson from the perspective of critical realism. While Hayek and Lawson were very much on the same side in their rejection of positivism, Lawson argued that Hayek ultimately failed to do more than reproduce certain of its errors by creating a subjectivised version of it for the social sciences. At the root of this failure, according to Lawson, was Hayek’s failure to develop a social ontology on the lines advocated in critical realism. This is not the place to go through the details, but I argued that Hayek wasn’t guilty of all the errors he had been accused of and that, especially on the question of the existence of relatively stable social wholes over and above the concepts of individual actors, he was somewhat closer to critical realism than he had been given credit for.
- - - - - - -
Aiden Singh: You've also scrutinized the views of one of Hayek's colleagues at the London School of Economics (LSE): Karl Popper. In a 1996 essay, you analyzed Popper's propensity interpretation of probability, arguing that this interpretation of probability has affinities with critical realism. [2] Could you expound upon this argument?
Jochen Runde: The problem that led Popper to what became his “propensity” theory of probability was that statistical frequencies are a property of series of repeated trials and cannot be used to talk about the probability of single cases. His solution to this problem was to argue that the probability of some outcome should be thought of as a measure of the strength of the physical propensity, something like a disposition or tendency, of that outcome to be realised in a given kind of situation. It would then be possible to talk about the probability of an outcome in a single case, even if the actual measurement of the propensity concerned would still require counting instances in repeated trials.
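Put schematically (an expository gloss rather than Popper's own notation), the contrast is between:

```latex
% Frequency view: probability is a property of a series of repeated
% trials, the limiting relative frequency of the outcome A:
\[
P(A) \;=\; \lim_{n \to \infty} \frac{m_n}{n},
\]
% so a single, unrepeated trial has no probability of its own.
% Propensity view: probability attaches to the generating situation or
% set-up S itself:
\[
P(A \mid S) \;=\; p,
\]
% read as "S has a propensity of strength p to produce the outcome A".
% This is well defined even for a single instance of S, although
% measuring p still requires counting instances in repeated trials of S.
```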
Popper’s final (1990) and much neglected book A World of Propensities generalises this idea in ways that closely parallel critical realism. Some important parallels include the idea that there exists a largely unobservable physical reality and that it is only some of the superficial effects of this reality that we can observe; that there need not be a one-to-one relation between the generative mechanisms in play in some situation and some particular ensuing set of outcomes; that it is generally impossible to measure all but a small part of this reality statistically, because such measurement requires repeated experiments and experimental control; and that, outside of a few exceptions like the movements of the planets, natural laws of a deterministic or probabilistic character are unlikely to be found.
This is all quite radical in what it implies for science when repeated experiments aren’t possible, and therefore for social science in particular. It also potentially reduces the scope of the falsificationist doctrine that Popper is famous for because, as he said himself, falsificationism requires repeated experiments.
- - - - - - -
Aiden Singh: You've also suggested that this interpretation has implications for economics. [2] Could you discuss?
Jochen Runde: The main implication is the same as that advanced in critical realism: that in an open world, the search for empirical regularities between events or states of affairs that are stable enough over time to form the basis of policy is likely to be in vain. If so, there is then the question of what economists should be doing instead. The answer often given by adherents of critical realism is that they should focus on uncovering causal mechanisms: begin with some phenomenon of interest and then attempt to retroduce its causes, that is, “go behind it” by asking what needs, or needed, to be the case for that phenomenon to arise.
- - - - - - -
Aiden Singh: Hayek and Popper were colleagues at the LSE, but the former was famously an intellectual rival of another influential social theorist: John Maynard Keynes. Along with your work on Hayek and Popper, you also have an interest in Keynes' views on uncertainty, how they impacted his economic thought, and how they influenced his views on the weight of arguments. [3] Can you expound upon how Keynes' views on uncertainty impacted his thinking on the weight of arguments?
Jochen Runde: Keynes had a rich and sophisticated take on uncertainty in economic life, unsurprising in view of the years he had spent working on an earlier fellowship dissertation that would finally emerge in 1921 as A Treatise on Probability. Although cutting-edge at the time, the theory proposed in this book, sometimes called the logical approach to probability, can look odd to the modern eye. The key idea is that probability should be interpreted as an indicator of the strength of an individual’s belief in some hypothesis h justified by the evidence e bearing on that hypothesis. What Keynes had in mind was something like a logic of partial entailment analogous to standard logic, where evidence that only partially entails h would justify a probability between 0 (where h is logically incompatible with e) and 1 (where h is a logical implication of e). An interesting and, from a modern point of view, highly unusual feature of the theory is that Keynesian probability relations generally don’t correspond to precise numerical values. The theory is therefore mostly about qualitative comparisons of probability, although he insisted that in some cases even qualitative comparisons aren’t possible.
The evidential weight idea is a natural one in his framework, I think, because interpreting probability as the strength of the logical relation between a hypothesis and the evidence bearing on it leads naturally to questions about the extent of that body of evidence. In particular, when making a decision it seems reasonable to ask not only how strongly one believes a conclusion given the evidence, but also how comprehensive, in some sense, that body of evidence is. Evidential weight is a measure of such comprehensiveness, the suggestion being that a judgement of the probability of some conclusion based on a greater body of evidence should be preferred to one based on a lesser body of evidence. While Keynes was in fact somewhat hesitant about the importance of this idea in A Treatise on Probability, it is interesting that it resurfaces explicitly in his later economic writing, where it is related to investor confidence and liquidity preference.
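To give a rough schematic in modern notation (my gloss for exposition, not Keynes's own symbolism in the Treatise), the two measures can be set out as follows:

```latex
% P(h|e): the degree to which the evidence e partially entails the
% hypothesis h. Where the probability relation exists at all:
%   P(h|e) = 0 if e is logically incompatible with h,
%   P(h|e) = 1 if h is a logical implication of e,
% with intermediate cases lying between these bounds, often without any
% precise numerical value:
\[
0 \;\le\; P(h \mid e) \;\le\; 1 .
\]
% V(h|e): the weight of the argument from e to h, a measure of the
% comprehensiveness of e. Acquiring relevant new evidence e' may raise
% or lower the probability, but on Keynes's account it never decreases
% the weight:
\[
V(h \mid e \wedge e') \;\ge\; V(h \mid e) .
\]
```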
- - - - - - -
Aiden Singh: Your research into the history of thought on decision-making under uncertainty also included an investigation of Frank Knight's famous distinction between risk and uncertainty. You've argued that this widely-used distinction ignores subtleties in Knight's proposal. [4] What are these overlooked subtleties?
Jochen Runde: Knight’s distinction between risk and uncertainty is still debated but nevertheless quite widely interpreted as marking off cases in which decision-makers know and can assign numerically definite probabilities to contingent outcomes (risk) from cases in which they do not have access to such probabilities (uncertainty). But Knight had much more than this to say about uncertainty in economic life, not least in its role as the foundation of his theory of profit as the residual return entrepreneurs receive in reward for their uncertainty-bearing (all in his 1921 book Risk, Uncertainty and Profit). The chapter devoted to the meaning of risk and uncertainty is especially rich in insight and quite philosophical, touching on epistemology, phenomenology, metaphysics and the question of determinism, as well as the nature of probability (including an early urn example that also appears in Keynes and that would go on to influence the “ambiguity” literature in decision theory later precipitated by Daniel Ellsberg). It is interesting that, like Keynes, Knight also distinguished between people’s probability judgements and the confidence with which they are held. This distinction has all but evaporated in modern Bayesian decision theory, where people’s (subjective) probabilities or degrees of belief are effectively read off their inclination to act on them in choices over lotteries. On this approach the individual’s judgement of how strongly they believe in a hypothesis is conflated with their confidence in that judgement.
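For a flavour of the urn example, in the two-urn form later made famous by Ellsberg (my sketch for illustration; the details vary across presentations): urn A contains 50 red and 50 black balls, while urn B contains 100 red and black balls in an unknown proportion.

```latex
% By construction (urn A) and by symmetry (urn B), a Bayesian would assign
\[
P(\text{red from A}) \;=\; \tfrac{1}{2}, \qquad
P(\text{red from B}) \;=\; \tfrac{1}{2},
\]
% yet most people strictly prefer to bet on a red draw from urn A. The two
% probability judgements coincide, but the confidence behind them does not:
% precisely the distinction that is lost when probabilities are simply
% read off choices over lotteries.
```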
- - - - - - -
Aiden Singh: You also suggest that there is a fundamental confusion underlying this distinction [4]. Can you elaborate on what you believe this confusion is and how you propose it can be rectified?
Jochen Runde: The problem with Knight’s formulation, I think, is that it seems to present the movement from risk to uncertainty as a continuum or gradation: running from infinitely sharp numerical probabilities determined on the basis of assumptions about perfectly equal probabilities (classical, or what he calls a priori, probabilities), to more or less tightly defined empirical but still numerically definite statistical probabilities or frequencies, through to cases in which numerical probabilities cannot be determined at all and people instead arrive at what Knight called “estimates”. The first two categories are associated with risk, and the third with uncertainty. While I don’t know if he intended his formulation to be read in this way, the problem is that there isn’t a continuum at all, because a priori probabilities are determined on the basis of the assumed equiprobability of each of a set of exhaustive and mutually exclusive outcomes, whereas statistical probabilities are determined on the basis of the tabulation of a particular outcome in a series of repeated trials or experiments.
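A simple example brings out why the two determination procedures differ in kind rather than in degree (my gloss, not Knight's wording):

```latex
% A priori (classical) probability: deduced from an assumed symmetry over
% a set of exhaustive, mutually exclusive, equipossible cases, e.g. for a
% fair die:
\[
p(\text{six}) \;=\; \frac{\#\{\text{favourable cases}\}}
                          {\#\{\text{equipossible cases}\}} \;=\; \frac{1}{6}.
\]
% Statistical probability: tabulated as the relative frequency of the
% outcome across n repeated trials, with no appeal to equiprobability:
\[
p(\text{six}) \;\approx\; \frac{m}{n}, \quad
\text{where } m \text{ counts sixes observed in } n \text{ throws}.
\]
% One is a deduction from an assumed symmetry, the other an empirical
% tabulation; there is no scale along which the first shades into the
% second.
```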
- - - - - - -
Aiden Singh: A subject which seems to be of particular interest to you is black swans. [5] How do black swans fit into the widely-used risk/uncertainty distinction?
Jochen Runde: The risk/uncertainty distinction tends to presuppose that the decision-maker has all possible outcomes (or states, in Bayesian decision theory) in mind and is either able to assign numerically definite probabilities to those outcomes (risk) or not (uncertainty). In contrast, when people speak about black swans, they tend to be thinking of outcomes that come as a complete surprise because they were not even imagined as possibilities before they occurred. If so, this of course means that black swans represent a form of uncertainty other than that captured by the risk/uncertainty distinction.
This is my own view, though, and there are authors who argue that Knight’s conception of uncertainty extends to cases in which people can’t even imagine possible outcomes. I have, however, not been able to find any passages in his work in which he says so explicitly.
It is true that Keynes is famous for his remark that in situations of extreme uncertainty “we simply don’t know”. But the outcomes of interest he was talking about here were potential investment returns, a case in which all the possibilities are known simply because they all lie on the real line. What his remark refers to, in my view, is factors that lurk in the future that we don’t know about, and which have the potential to become black swans that upset our plans. On this interpretation, if it is right, it is possible to talk about risk or uncertainty in connection with investment returns, where all possible outcomes are known, while at the same time acknowledging potential black swans that might affect such returns and that we can’t identify even as possibilities before they occur.
- - - - - - -
Aiden Singh: Having discussed different interpretations of risk, uncertainty, and probabilities, perhaps the most fitting way to close out our conversation is by discussing your work on how we might navigate conditions of risk and uncertainty. [6] Can heuristics be useful in navigating conditions of Knightian uncertainty?
Jochen Runde: Nassim Taleb, famous for his book The Black Swan: The Impact of the Highly Improbable, uses the metaphor of a tunnel to describe people operating in the face of uncertainty about the future, where the possibilities they are aware of lie within the tunnel and those they are not aware of lie outside it. The metaphor only goes so far, of course, but it does convey something of the limits of what we know and can imagine about how things will unfold. It also quickly leads to the question of what keeps us trapped in our tunnels and how we might escape or at least “widen” them.
It seems that at least part of what keeps us in our tunnels is due to psychological factors, not least some of the well-known biases identified in the literature on economic psychology, such as the endowment effect, the status quo bias, and so on. The most pernicious of these is probably the confirmation bias, a tendency many people seem to have to look for and interpret evidence in a way that is overly kind to what they want to believe. And if it is true that biases of this kind are part of the problem, it then follows that the thing to do may be to look for ways of counteracting them.
There are heuristics that might help here, and one that I have worked on with my colleague Alberto Feduzi is what we call the “Baconian algorithm”. This heuristic is a modified version of the scientific method proposed by Francis Bacon in the 17th century. Imagine someone has developed an initial hypothesis about how the future might unfold and wants to put it to the test. We suggest a two-stage procedure by which this might be done. The first stage is to think “out of the box” and come up with unusual factors that, if in play, would likely lead to alternative hypothetical futures quite different from the initial hypothesis. The second is to do some research aimed at producing specifically positive evidence in favour of those factors. It should be evident that asking for factors that would lead to hypotheses different from the preferred one, and asking only for positive evidence in favour of the factors that might produce those alternatives, works to counteract the confirmation bias in favour of the initial hypothesis.
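A minimal sketch of the two stages in Python may help fix ideas (the names and structure are my illustrative assumptions, not code from the paper, which presents the heuristic as a procedure for human decision-makers rather than machines):

```python
# An illustrative sketch of the two-stage "Baconian algorithm".
# All identifiers here are hypothetical; the published heuristic is a
# procedure for people, not software.

def baconian_algorithm(initial_hypothesis, generate_unusual_factors,
                       find_positive_evidence):
    """Stress-test a hypothesis about the future by deliberately seeking
    positive support for alternatives, counteracting confirmation bias."""
    supported_alternatives = []

    # Stage 1: think "out of the box" and list unusual factors that, if
    # in play, would lead to futures quite different from the initial
    # hypothesis.
    for factor in generate_unusual_factors(initial_hypothesis):
        # Stage 2: research each factor, looking specifically for
        # POSITIVE evidence that it is, or could come to be, in play.
        evidence = find_positive_evidence(factor)
        if evidence:
            supported_alternatives.append((factor, evidence))

    # The decision-maker then weighs any supported alternatives against
    # the initial hypothesis; even rejected alternatives tend to "widen
    # the tunnel" and suggest refinements to the original hypothesis.
    return supported_alternatives
```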
My experience with this method is that it is beneficial even where people ultimately decide against going with any of the alternative hypotheses. The reason is that encouraging people to come up with, research, and attempt to validate out-of-the-box ideas has the effect of getting them to look in places, and learn things relevant to whatever their project is, that they would never have encountered otherwise. That is to say, it has the effect of widening their tunnels. And in my experience, even if they end up rejecting all the alternative hypotheses, the heuristic very quickly leads to their learning things that leave them wanting to adjust and improve their original hypothesis.
- - - - - - -
Footnotes
[1] Professor Runde's views on this subject are articulated in his 2001 essay Bringing Social Structure Back into Economics: On Critical Realism and Hayek's Scientism Essay.
[2] Jochen Runde. On Popper, Probabilities, and Propensities. Review of Social Economy. Vol. 54, No. 4. Winter 1996. pp. 465-485.
[3] Jochen Runde. Keynesian Uncertainty and the Weight of Arguments. Economics and Philosophy. Vol. 6, No. 2. October 1990. pp. 275-292.
[4] Jochen Runde. Clarifying Frank Knight's Discussion of the Meaning of Risk and Uncertainty. Cambridge Journal of Economics. Vol. 22, No. 5. September 1998. pp. 539-546.
[5] Phil Faulkner, Alberto Feduzi, & Jochen Runde. Unknowns, Black Swans, and the Risk/Uncertainty Distinction. Cambridge Journal of Economics. Vol. 41, No. 5. August 2017. pp. 1279-1302.
[6] Alberto Feduzi, Phil Faulkner, Jochen Runde, & Laure Cabantous. Heuristic Methods for Updating Small World Representations in Strategic Situations of Knightian Uncertainty. Academy of Management Review. Vol. 47, No. 3. 2022.
——————