
ISJ Exclusive: Exploring epistemic security


Elizabeth Seger, PhD, Researcher at the Centre for the Governance of AI (GovAI), explores the catastrophic risk of insecure information ecosystems in a technologically advanced world.

Imagine a world in which you receive an SMS emergency message warning that there is a high risk of a nuclear strike on the nearest major city. Your first reaction is that the message could very well be a fake. There is no real way of knowing.

For several months, news outlets have been reporting about tensions between major nuclear powers, so the warning is not completely out of the blue. People on social media seem genuinely terrified about the possibility of war and you’ve seen clips of leaders threatening nuclear retaliation against aggressors. However, you are aware that many profiles on social media don’t belong to real people, but are “bots” deployed to try to sway public opinion and sow confusion.

This scenario may sound like a fictional dystopia, but it does not feel too distant from the one in which we currently live. If the pandemic has shown us anything, it’s that coordinating public action in response to a crisis is extremely difficult. Massive feats of public organisation require robust veins of communication. Decision-makers must receive high quality information from trusted sources about how to respond and members of the public must, in turn, be able to trust those decision-makers. But on these fronts we have shown ourselves to be woefully unprepared.

Throughout the pandemic, misinformation about ineffective cures, unverified treatment discoveries and the efficacy of face coverings and social distancing spread like wildfire. Conspiracy theories about the malicious origins of COVID-19 and the censorship of important information sources also degraded trust in health authorities. The resulting fragmented response to COVID-19 points to a worrying trend that bodes ill for our capacity to respond to future crises.

You will have heard of cybersecurity and national security. This article is a call for increased attention to “epistemic security” – without it we may struggle to respond to the worst crises humanity has yet to face.

What is epistemic security?

If national security is about keeping our countries safe, and financial security is about keeping money safe, then epistemic security is about keeping knowledge safe. Episteme is a Greek term meaning “knowledge”. Epistemic security therefore is about preserving and improving our capacity to deal in knowledge. It is about being able to produce true information about our world, to distinguish fact from fiction, to identify unsupported claims, untrustworthy information sources and deceptive or misleading content. It is about being able to communicate and cooperate with each other to respond to challenges.

In our technologically advanced age, it is easier than ever to make good decision-guiding information accessible. In this respect, our world is more epistemically secure than ever before. Technological and scientific advances continuously lead to new discoveries, enabling us to respond to ever more catastrophic threats. For example, the development of vaccines allowed us to eradicate diseases that once decimated civilisations and we more recently learned that we might avert an impending asteroid strike with a well-aimed rocket or two. Modern communication technologies also allow good decision-guiding information to be spread widely and quickly. Our theoretical capacity for coordinating well-informed public responses to crises is, therefore, high.

However, epistemic security is not just about improving knowledge production and dissemination. It also requires that information recipients are sensitive to the information they receive; when an information environment is unsafe – that is, when falsities are mixed among truths – people must be able to distinguish fact from fiction and trustworthy information sources from untrustworthy. This is where we struggle.

Hijacking human psychology

People use heuristics (quick mental shortcuts) for deciding who and what to believe when time is short and attention is spread thin. For instance, if lots of people seem to believe the same thing, we are more inclined to believe it is true. We also check whether the people close to us share a given belief. In our day-to-day decision-making, these quick heuristics generally serve us well – the problem is that they are easy to hijack.

For example, automatic text generation systems like “Twitter bots” might be used to flood online information spaces with messages to make a particular claim or ideology look much more widely accepted than it actually is. Deepfake video and audio can be used to produce “evidence” of trusted figures expressing particular opinions or giving instructions; these capabilities are becoming easier to use and their outputs harder to distinguish from reality.

Deepfakes are not only used to deceive; their very existence can undermine trust in otherwise authoritative evidence and evidence streams. For example, malicious actors have successfully used allegations about faked content to avoid accountability. Claims about faked videos have been used to justify an attempted coup in Gabon and to exculpate a Malaysian cabinet minister.

Content recommendation systems can also be used to target political advertisements and propaganda at individuals whose opinions are most likely to be swayed. The most effective attention grabbing and persuasion strategies involve appeals to emotion. In particular, invoking a reader’s sense of community or group identity encourages prolonged content engagement and commitment to a belief. The problem is that emotional appeal is not necessarily truth-tracking and a group-ish mentality generally makes people more distrustful of outsiders and less willing to consider alternative viewpoints.

Modern information producing and mediating technologies are making it easier to sway opinion, sow confusion, spread disinformation and seed distrust. What does this all mean for the future of crisis response? At the most basic level, when people are provided with conflicting information they are likely to reason to conflicting conclusions; this makes it difficult to coordinate cohesive collective action in response to threats like that of a pandemic. Disintegration of trust in experts and key information authorities results in the breakdown of key communication pathways and organisational capabilities. Distrust of outsiders further inhibits productive communication and cooperation both within and between societies.

Effective crisis management is not just about responding to ongoing crises, but also about scanning for risks in order to avoid the conditions giving rise to crisis in the first place. Unstable information environments may reduce the efficacy of these procedures by making it difficult to identify emerging threats. Technologically-enabled disinformation campaigns might also be used to intentionally distract decision-makers from developing risks. For example, aggressive messaging about rising nuclear threats could provide a smokescreen for other nefarious activities.

Manufacturing panic

A crisis is defined as a time of intense difficulty or danger. Sometimes, however, it is not the crisis itself that causes the most damage, but our response to it. If we panic, injury and loss can be much worse than they might have been otherwise. However, an actual crisis is not necessary to instigate a panicked crisis response. A perceived danger is sufficient.

The Oxford Circus Panic in 2017 demonstrates this phenomenon on a small scale. A minor altercation on a tube platform (two men shouting) led to a human stampede in Oxford Circus, at the time packed with Christmas shoppers. The panic was primed by an environment of fear due to London’s “severe” terrorist threat level at the time and fuelled by false reports of gunshots that spread rapidly on Twitter.

Nobody intended to instigate the Oxford Circus Panic, but it is easy to imagine how much worse such an event could be if facilitated by a disinformation campaign, carefully planned and executed to maximise damage to people and property. The idea that epistemic insecurity makes us vulnerable to manufactured panic is particularly worrisome with respect to financial crises. Financial markets are highly susceptible to uncertainty and fear.

My research investigates just how worried we need to be about technologically exacerbated threats to epistemic security. In turn, I study how epistemic insecurity makes humanity vulnerable to catastrophic risks. If we are to be adequately prepared for when the next global crisis rolls around, we need to attend to epistemic security.

This will, in large part, involve working to ensure that our information technologies improve human capacities for cooperation and coordination while reducing their potential for detrimental interference. The overall goal is to increase the costs to epistemic adversaries of spreading disinformation and sowing discord, while decreasing the costs to information recipients of accessing and identifying good decision-guiding content.

This article was originally published in the December 2022 Influencers Edition of International Security Journal.
