Disinformation is a complex topic. Truth and lies are never clear-cut categories, and the boundaries separating them are fuzzy.
Lots of work has focused on how disinformation exists, and how it spreads, is amplified and is received and consumed online and offline. This is all important work and contributes to our understanding of what it means to disinform.
Something that is often overlooked is why disinformation exists. Why do people publish intentionally false content? This is a big question and one that needs answering. Understanding the motivations behind the production of disinformation helps inform our analysis and understanding of disinformation as a phenomenon.
In this post I’m going to talk about six main reasons why disinformation can exist online:
- Financial (clicks = money)
- Ideological (political)
- Hostile-state information operations
- Conspiracy
- Trolling
- Satire
None of these are mutually exclusive, and they’re not presented in any particular order. These also aren’t exhaustive – I’ve decided to highlight ones which in my experience are most common.
A piece of disinformation or a disinformation campaign may have several reasons for its existence. For example, hostile-state information operations may be intrinsically ideological in their disinformation campaigns while also operating as profitable ventures.
I’m going to go through each one of these now.
Clicks = money
This is a foundation of a lot of modern journalism, and partly an explanation for the increasing clickbait-isation of modern news.
If clicks = money, then disinformation is a big market.
This motivation for producing disinformation is fairly simple. People produce false stories that will redirect traffic to a website. When people visit this website to read the full story they are presented with pop-up and banner ads across the webpage. Each of these generates revenue for the website owner.
When a story gets hundreds of thousands of views, this can generate a serious income.
Avaaz, a US-based non-profit, found that during the 2020 elections, political disinformation was “found to have reached over 158 million estimated views” (link). This is a lot of views, and would lead to a lot of revenue.
In this sense, disinformation is no different from some other online news sources. You write news to get clicks to generate revenue. The only difference is that disinformation fabricates its stories.
The report from Avaaz above leads nicely into this section: ideological disinformation.
This is when disinformation prioritising a certain belief or belief system is produced to promote or discredit a third party. This third party can be a person/group, an action, or something more abstract such as a belief.
In the real world, this can take the form of disinformation promoting conservatism or liberalism, attacking science and rational thought, or casting doubt on an individual or group.
There are various ways this can manifest:
- Promoting an ideology – such as ‘pro life’ (link)
- Attacking minority, protected groups (link)
- Seeking to discredit an individual (link)
An example of this in practice is the “disinformation dozen”.
According to the Center for Countering Digital Hate (CCDH), twelve anti-vaccination campaigners are responsible for “almost two-thirds of anti-vaccine content circulating on social media platforms” (link).
These individuals, who also run online shops and have sophisticated marketing strategies to create revenue, are prolific anti-vaxxers who routinely spread COVID-19 and vaccine-related disinformation.
Let’s focus on one of these dozen: Robert F. Kennedy Jr.
Kennedy is an environmental lawyer and a nephew of President John F. Kennedy. He has gained notoriety by positioning himself as one of the leading anti-vaccination voices in the US over the past few decades.
He is chairman of the Children’s Health Defense (CHD), an activist group that prolifically shares vaccine disinformation, specifically to minority groups in the US (link).
I’m simplifying here, but Kennedy does this because he genuinely believes vaccines are more dangerous than the diseases they protect against. He misrepresents science and publishes false content to reaffirm this belief, and does so out of conviction. In other words, he uses disinformation as a vehicle to promote his ideology.
There are many differences between ideological disinformation and hostile-state (dis)information operations (HSIOs).
When I discuss ideological disinformation, I’m referring to content that is not produced as a tactic of warfare. HSIOs, however, are. They are a weapon in the arsenal of governments and militaries and are used to destabilise an enemy.
A well-known example of this is hostile-state information operations executed by the Internet Research Agency (IRA) at the behest of the Russian Federation. These campaigns will be well known to most people reading this, and rather than going into this any further, I’m going to suggest some further reading:
- Russia’s information warfare, Politico
- Explainer-Russia’s potent cyber and information warfare capabilities, Reuters
- Biden, Putin and the new era of information warfare, Financial Times
HSIOs use disinformation, alongside propaganda tactics, to negatively impact an adversary.
It is often assumed that these are well-oiled, streamlined operations, but this is not always the case. These operations can often play out as “by any means necessary” campaigns, where disinformation for and against all parties and ideas is disseminated simply to “sow discord” (link) and create confusion, anger and unrest.
In other words: throw disinformation against the wall and see what sticks.
Two months ago, the US marked the 20th anniversary of the September 11 attacks, which killed 2,977 people.
Out of this horrific attack spawned people who call themselves truthers: a loosely organised collective who claim to be seeking the truth while rejecting the widely accepted, proven cause of the tragedy, a terrorist attack.
These conspiracy theorists use disinformation as a tool, whether to legitimise their own views (confirmation bias) or to promote their cause to others.
People in this movement, whether intending to deceive or genuinely believing it to be true (see: disinformation vs misinformation), create and share one-sided, biased disinformation that agrees with their views.
Essentially this boils down to: I believe X, and therefore I will produce and consume Y that also states X.
In this way, disinformation can be viewed as a tool used by conspiracy theorists, alongside other methods of persuasion such as sensationalism, emotional manipulation, selective reporting, and misrepresentation of facts.
Trolling confuses a lot of people: it’s an action that can seem to have no motivation at all.
Trolling is also prone to over-analysis. By this I mean that we often want to ascribe motivations to people’s behaviours in order to make sense of them. With trolling, however, there is not always a thought-out reason behind a behaviour.
Without oversimplifying trolling, and the various ways it can manifest, sometimes trolls simply perform an action for the sake of it. Further, the trolls themselves might not even recognise what they’re doing as trolling – to them it’s “just a joke”.
Trolling and disinformation often go hand in hand. Deliberately supplying someone with disinformation can bring a sense of satisfaction when they fall for it, and can boost the troll’s ego as a successful deceiver.
In the same way that Rickrolling or other bait-and-switch pranks work, sometimes disinformation is just a way to agitate or provoke other internet users.
I’m going to talk about two types of false content here:
- Misunderstood satire
- Schrödinger’s disinformation
1. Misunderstood satire
Misunderstood satire is when a piece of content that was never intended to be taken seriously is decontextualised and treated as legitimate news.
This can happen for a number of reasons. On FakeBelieve I previously discussed an example of political satire taking on a life of its own as disinformation (Fact Check: Did Bath MP Wera Hobhouse claim “£7,456 expenses last year for vegan cheese”).
I’m now going to discuss another, more recent example of this from the U.K. Defence Journal.
On Thursday 20/05, the UK Defence Journal (UKDJ) shared an image on its Facebook and Twitter accounts showing a drastically shrunken Charles de Gaulle carrier alongside the two Queen Elizabeth-class aircraft carriers. Some context: HMS Queen Elizabeth had just embarked on a world tour.
UKDJ positions itself as “dedicated to providing impartial and complete coverage of defence matters” but didn’t explicitly (or arguably even implicitly) mark these posts as being jokes.
You could argue that the satire here is self-evident, but as we’ve learnt in the past, that’s a weak defence: even the most obvious satire is seldom evident to every reader who consumes it.
Here’s a summary of how the posts spread:
- Retweets and shares circulate alongside screenshots posted to private, closed groups on Facebook
- The posts find their way into mainstream media and are shared by media personalities
- The posts trigger xenophobic abuse aimed at migrants seeking to reach the UK
- Corrections are issued by UKDJ
- The corrections fail to have the reach of the original posts.
UKDJ issued corrections and statements criticising the abuse triggered by the image, but by that point it was too late. The image, originally intended as satire, had been misappropriated and used as a springboard for racist abuse aimed at minority groups.
This is a prime example of something with humorous intent evolving into disinformation.
There’s a very fine line here between misinformation and disinformation. One could argue this was all accidental and therefore it’s misinformation; but you could also argue this was irresponsible satire and therefore disinformation. Neither view is wrong.
2. Schrödinger’s disinformation
“A guy who says offensive things and decides whether he was joking based on the reaction of people around him.” – Urban Dictionary
Satire can be used as a defence by those who have been accused of spreading disinformation. In these situations, the blame is placed on the audience who has supposedly failed to understand the satire, rather than the producer who has peddled disinformation.
Similarly, people may say something is satire depending on the response it receives. In other words:
If a disinforming piece successfully deceives, it is not satire. If a disinforming piece is unsuccessful and challenged as being false, it is (claimed to be) satire.
This blog post has outlined some of the reasons why disinformation exists. Understanding the motivations behind the production of disinformation can help us better understand and predict its dissemination, reception and replication.
Mistaken satire behaves differently to HSIOs which behave differently to cash-for-clicks disinformation.
At the same time, this list is far from exhaustive. There are thousands of other reasons why people might produce disinformation, and they are seldom mutually exclusive.
Understanding the nuances between these types of false content means we can analyse disinformation more accurately and have a better chance of limiting and stopping its spread.