What does the disinformation ecosystem of 2022 look like?
You’re living in it.
In her talk at Mozilla Festival's MisinfoCon, Rachael Levy highlighted the frightening reality we all live in: the era of disinformation.
Rachael Levy is the Head of Geopolitical Risk at ActiveFence, where she leads the company's services and solutions for misinformation, disinformation, and emerging trends. Levy has nearly 20 years of experience in security, defence, journalism, and international relations, spanning both the private sector and academia.
Levy began the talk by explaining how disinformation has been around for a millennium. The term refers to a deliberate falsehood spread by design, and it has become something of a buzzword in the past few years. According to Levy, however, it has a rich history.
Genghis Khan fired arrows that whistled through the air to terrify his enemies, and, as Levy noted, one can only imagine how he might have intimidated those enemies and their populations in today's digital age.
Disinformation is a form of psychological warfare refashioned for the digital age, and it continues to evolve at a rapid pace.
When we think about disinformation, elections are often the go-to example: think of Russian interference in the 2016 American presidential election.
However, there are now new media, new targets, and new actors. They work through information operations: coordinated networks whose sole purpose is to sway public thought.
Who is spreading disinformation?
Levy spoke about the ‘democratisation of disinformation’, breaking its actors down into five key groups:

- Nation-state actors: intelligence agencies, foreign/defence ministries, embassies, police
- Domestic state actors: political parties, lobbyists, political party youth-wing activists
- Semi-official state actors: oligarchs, think tanks
- Ideologically motivated actors: hacktivists, terrorist organisations, issue activists, dissident organisations, academics
- Financially motivated actors: digital media outlets, scammers, troll farms for hire, digital marketing agencies, IT experts
With the actors established, what tactics do they use to spread disinformation?
Levy broke down four main tactics: bot networks, authentic actors, authentic actors running inauthentic networks, and deep avatars.
Firstly, you're probably familiar with bot networks. They flood comment sections, and their social media accounts are often reused across a range of contexts, from elections to the Olympics.
These single-shot bots often use fake pictures, are newly created, and have little activity on their accounts. Their posts can then be amplified by state media, producing an overwhelming quantity of disinformation.
In Levy’s own words, bot networks are often “ground zero for disinformation narratives”.
However, social media platforms are evolving alongside the growth of bot networks, expanding their own toolkits to identify and remove them.
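Purely as an illustration (not ActiveFence's or any platform's actual method), the traits Levy listed for single-shot bots – new accounts, little activity, stock or fake profile pictures, copy-pasted comments – are exactly the kind of signals a detection heuristic might combine. A minimal sketch in Python, with all field names and thresholds hypothetical:

```python
# Hypothetical sketch of the kind of simple heuristics a platform toolkit
# might combine to flag likely single-shot bot accounts. Field names and
# thresholds are illustrative, not any real platform's detection logic.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime       # when the account was registered
    post_count: int            # lifetime activity on the account
    uses_stock_photo: bool     # profile image matched against known stock/fake sets
    near_duplicate_posts: int  # posts nearly identical to other flagged accounts

def bot_likelihood(account: Account, now: datetime) -> float:
    """Score from 0.0 to 1.0; higher means more bot-like, per the traits above."""
    score = 0.0
    age_days = (now - account.created_at).days
    if age_days < 30:                     # newly created account
        score += 0.3
    if account.post_count < 10:           # little activity
        score += 0.3
    if account.uses_stock_photo:          # fake or stock profile picture
        score += 0.2
    if account.near_duplicate_posts > 5:  # copy-paste flooding of comments
        score += 0.2
    return min(score, 1.0)

# Example: a week-old account with 3 posts, a stock photo, and duplicated
# comments scores 1.0 and would be queued for review.
fresh = Account(datetime(2022, 3, 1, tzinfo=timezone.utc), 3, True, 12)
print(bot_likelihood(fresh, datetime(2022, 3, 8, tzinfo=timezone.utc)))
```

Real detection systems are, of course, far more sophisticated, but the sketch shows why single-shot bots are comparatively easy to catch: their tell-tale traits are simple to measure at scale.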
If bot networks prioritise quantity, then authentic actors focus on the quality of their disinformation posts. These actors are real people who engage with larger audiences and can be incredibly convincing to the average social media user; there is real activity behind their accounts.
Levy proceeded to show an example of authentic actors in action: a shocking video of a coordinated disinformation campaign in which Russian TikTok users repeated nearly identical disinformation-filled testimonials defending the Russian invasion of Ukraine.
What makes authentic actors difficult to crack down on is the fact that they are real people. They could be an influencer, an academic, or even a journalist contributing to state media.
These real people find themselves participating, perhaps unknowingly, in larger inauthentic networks.
To illustrate the complex structure of an inauthentic network, Levy focused on one social media influencer in the video. The influencer was represented by a marketing agency connected to many of the other individuals featured, and that same agency had previously coordinated campaigns to sway public opinion on civil unrest in Russia.
What is most terrifying about inauthentic networks is that the most seemingly authentic account could be inauthentic at its core. These accounts are all over our social media, and we may not be able to tell the difference unless we sharpen our knowledge of evolving disinformation tactics.
Levy then briefly moved on to deep avatars. Unlike authentic actors, these are not real people: they do not exist offline. However, unlike bot accounts, deep avatars are very difficult to identify as fake because they don't use stock images, they engage with their audiences, and they create accounts on various social media platforms to give themselves legitimacy.
To sum up the talk, Levy posed a question to the audience: are we in a “post-truth era”? A short Q&A followed, in which one participant asked what the post-truth era looks like.
To put it simply, Levy replied that it is not a “pretty vision”. It is a deeply concerning prospect in which it is difficult to trust any information.
What we can do, according to Levy, is keep our fingers on the pulse of these tactics as they evolve at a rapid pace.
What is true, and what is false? Levy's eye-opening talk ‘The Disinformation Ecosystem in 2022: New, Authentic, and Recurring Actors’ makes it clear that it is up to us as digital citizens to evolve alongside disinformation and learn to recognise its presence even when it seems most convincing.