
Sailing Through The Dark

Nishith Hegde explores how the presence of uncertainty affects decision-making and its implications for economic policy.

We possess neither the ability nor the technology to know everything about our world perfectly, and yet we find ourselves with no choice but to navigate it, directed by fragments of information that we use to illuminate and infer the path ahead – as though sailing a ship through the ocean guided only by the stars. This state of partial information – uncertainty – underpins every action we take and every perception we form.

When it comes to decision-making, therefore, two problems arise. First, we are almost always making decisions under false assumptions – or rather, under a simulacrum of the truth, an imperfect impression of it. Put simply, we misunderstand the world we live in and therefore select options that lead to negative outcomes. But the second problem is even more fundamental: even when the first is solved – when we are handed the full information we require, given hard statistics, or presented with perfect hypotheticals that pose no personal risk and cannot be affected by unforeseen factors – the mere presence of uncertainty produces infuriatingly inconsistent decision-making.

The foundational pillars of modern economics rely upon the economic agent being rational and utility-maximising. In reality, this archetypal economic agent cannot even consistently determine their own preferences.

Our decisions entail important consequences. They shape our behaviour and our interactions – they are the difference between a happy day and a depressing one, between winning big and losing it all, between a guilty verdict and an acquittal. Moreover, they have severe implications for policymaking. Widespread false assumptions create perverse electoral pressures for legislators, producing ill-suited policy. And there are further consequences to this irrational tendency: framing bias distorts the testing of different policy options, and even well-suited policies are limited in their effectiveness. Economic models may have all the answers – but they remain ineffective when their consumers are themselves stubbornly imperfect.

So why does the presence of uncertainty encourage these behaviours? And are there solutions to accommodate them in policymaking? Slowly, our understanding of both is starting to advance.

Ultimately, this behaviour is caused by our use of shortcuts. To save precious mental processing power and form decisions quickly, we adhere to a set of rules of thumb – known as heuristics – to form conclusions about the world. Here’s an example: studies repeatedly show that test subjects tend to prioritise perceptions over evidence. In a key 1974 paper, Amos Tversky and Daniel Kahneman described a group of 100 professionals – 30 engineers and 70 lawyers – and then gave subjects some characteristics of a particular member (meek, isolated, introverted). A significant proportion of subjects believed that person to be an engineer, registering the probability as close to 50%, even though the real probability, absent stereotypes, is 30%.

The really interesting finding came when they compared these outcomes across groups given different information. With no characteristics at all, subjects correctly identified the likelihood of this person being an engineer as 30%. But when told irrelevant characteristics (no children, unmarried, or demonstrating promise) that sat entirely outside the stereotypes of engineers and lawyers, subjects reverted to their earlier prediction of close to 50%: a partial information set led to less accurate decision-making than no information at all. What this suggests is that in the presence of even the smallest uncertainty, our decision-making switches from rational to heuristic.
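The base-rate logic behind this experiment can be sketched with Bayes’ rule. A minimal sketch follows – the likelihood values fed in are illustrative assumptions, not figures from the study; the point is only that an uninformative description should leave the 30% base rate untouched:

```python
# Base-rate neglect, illustrated with Bayes' rule. A description only
# moves the probability away from the 30% base rate if it is more
# likely for one profession than the other. The likelihood values
# below are illustrative assumptions, not figures from the study.

def posterior_engineer(p_engineer, p_desc_given_eng, p_desc_given_law):
    """P(engineer | description) via Bayes' rule."""
    p_lawyer = 1 - p_engineer
    numerator = p_desc_given_eng * p_engineer
    denominator = numerator + p_desc_given_law * p_lawyer
    return numerator / denominator

BASE_RATE = 0.30  # 30 engineers in a group of 100

# An uninformative description is equally likely for either profession,
# so the rational answer stays at the base rate of 30% ...
print(posterior_engineer(BASE_RATE, 0.5, 0.5))

# ... yet subjects answered close to 50%, as though the base rate
# had been discarded entirely.
```

Only a genuinely diagnostic description – one more probable for engineers than for lawyers – would rationally pull the answer above 30%.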

Further to this, have you ever wondered why restaurants stock incredibly overpriced wines that are clearly poor value, and that almost no-one buys? One reason may be the compromise effect. If presented with two distinct options – a cava at £10 a bottle and a prosecco at £15 – you may find yourself preferring the lower-priced cava. But in the presence of an alternative, say a £35 champagne, there is a strong tendency to choose the middle-priced wine ahead of the cheapest: people outright reverse previous preferences for no reason other than the presence of another alternative. This shows how even our own preferences become incredibly context-dependent and subject to a framing effect, while the restaurant takes in an extra fiver in revenue.

Likewise, there is strong evidence that individuals exhibit an anchoring effect: estimations and predictions are not rationally and independently determined; rather, we take some initial ‘anchor’ and shift that value upward or downward. This again leads to problems, because adjustment is often insufficient. When shown a random two-digit number, and then asked whether the true percentage of African countries in the United Nations was higher or lower than that number, test groups adjusted vastly differently depending on their starting figure. Groups that started with 10 had a median final estimate of 25%, whilst groups that started with 65 adjusted down only to 45% – nowhere near the true figure of 28%.
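The pattern above can be captured by a toy “anchor and insufficiently adjust” model, in which an estimate moves from the anchor toward the truth but only partway. This is a sketch, not the study’s model: the 0.5 adjustment rate is a made-up illustrative parameter.

```python
# Toy model of anchoring with insufficient adjustment. The adjustment
# rate (0.5) is an assumed illustrative parameter, not an estimate
# from the study; it simply shows how different anchors leave final
# estimates stranded on opposite sides of the same true value.

def adjusted_estimate(anchor, true_value, adjustment_rate):
    """Move from the anchor toward the true value, but only partway."""
    return anchor + adjustment_rate * (true_value - anchor)

TRUE_SHARE = 28  # true percentage cited in the text

for anchor in (10, 65):
    estimate = adjusted_estimate(anchor, TRUE_SHARE, 0.5)
    print(f"anchor {anchor:>2} -> final estimate {estimate:.1f}%")
```

Whatever the exact rate, any adjustment short of 100% guarantees that the low-anchor group undershoots and the high-anchor group overshoots – exactly the divergence the experiment found.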

Similarly, even when individuals know that a figure circulated in the media is too high – for example, that £350 million a week is being sent to the EU – their downward adjustments to that figure remain grossly insufficient. And yet, despite consistently misjudging probabilities, most individuals demonstrate overconfidence in their probability judgements relative to the amount of knowledge they actually have!

So, what does this mean for economic policy?
If your takeaway from this is that individuals shouldn’t be trusted, you’re on the right track. Contrary to economic theory, preferences are significantly arbitrary and can be manipulated by normatively irrelevant cues. Even writing policy using ‘objective’ tools such as cost-benefit analysis proves difficult when individuals cannot report their own preferences consistently, since the experience of any potential benefit is itself uncertain to them.

Economics is, in essence, the act of comparing the value of things. We can give a machine a price using its cost, depreciation, demand, and so on – but what of the environment, or health? It is here, in welfare economics, that irrational decision-making proves particularly frustrating. One example: individuals report a very low willingness to pay for services like healthcare, yet their willingness to accept its withdrawal is equally low – they demand far more in compensation to give the service up than they would pay to obtain it. So a policymaker trying to spend a government budget to maximise utility will find that funding these services ‘misallocates’ funds – and yet, so does withdrawing those resources.

It is not that citizens are lying about whether they value healthcare highly – it is, perplexingly, that they simultaneously do and do not, based on how you ask them. And since we cannot plug people into some utility-reading machine, and utility is inherently subjective, asking them is the best way we’ve got: we just cannot fully trust their answers.

Some solve this dilemma by outright ignoring reported utility and preferences, and instead elect to just make decisions on people’s behalf, but this itself runs into philosophical issues regarding paternalism. Can the state truly know us better than we know ourselves? Perhaps such a route would prove no better at creating positive policy outcomes than the status quo, or if it did, only achieve them by stripping individuals of their own agency: after all, why then elect representatives in the first place?

In light of this, the clearest solution appears to be stripping away as much uncertainty as possible. Stronger limits on the way information is presented and restrictions on the extent and nature of political campaigning may have a significant effect, as might reducing our reliance upon polled data for welfare and utility analysis. Ultimately though, the battle of policymakers against these confounding outcomes is a battle against human nature; for when things are uncertain, our decision-making is the most human of all: fundamentally irrational.
