Contemporary internet-based technologies have completely changed the information ecosystem, which is increasingly driven by digital platforms, algorithms, big data, machine learning and even artificial intelligence. This new ecosystem has created not only tremendous opportunities for democratizing the production, access and sharing of content, but also fertile ground for the emergence of disinformation 2.0. This is a new type of disinformation that, while exploiting our old cognitive shortcuts and biases in dealing with information, introduces new formats, tools and dominant practices.

From information scarcity to information overload

The challenges posed by contemporary disinformation have been acknowledged across the globe on several occasions, from electoral campaigns to national referendums and, most recently, the COVID-19 global pandemic. Disinformation itself has been around for a long time, accompanying war efforts, social disruptions and economic crises.

However, contemporary disinformation campaigns are structurally different from similar efforts deployed only 50 or 70 years ago. Unlike those earlier campaigns, which took place under circumstances of information scarcity, they rely on the massive dissemination of multiple contradictory narratives to create ‘information overload’ or ‘information fatigue’, in which people are bombarded with information (or, more precisely, with content). Their main goal is not necessarily to convince the targeted publics of something, but to strategically shape public discourse, create confusion and sow distrust.

Disinformation 2.0 relies both on new tools for interfering with content itself (increased possibilities for information bombardment and for text, photo and video manipulation) and on new tools for interfering with content amplification (increased possibilities to amplify content by means of automated accounts, trolls, bots, fake profiles, fake crowds and click factories). Understanding disinformation 2.0 as interference both with content (along the true–false continuum) and with its amplification (organic, i.e. carried out by humans, or inauthentic, i.e. carried out by machines) is an essential skill for navigating today’s information environment responsibly.

Disinformation 2.0: production and amplification of falsehoods on an industrial scale

The new transnational information ecosystem driven by digital platforms, algorithms and big data enables disinformation campaigns to be deployed instantaneously and with high impact. The new disinformation campaigns can be carried out on both low and high budgets, with much of the money going not so much into content production (as was the case with propaganda disseminated through mainstream media) as into securing access to data and to users’ digital profiles. These profiles enable new persuasion techniques, such as data-driven micro-targeting, psychographic profiling and precision segmentation, that is, targeting users with hyper-personalized content based on their previous digital fingerprints and preferences.
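
As a purely hypothetical illustration (the field names, traits and narratives below are invented, not drawn from any real advertising platform or documented campaign), precision segmentation can be thought of as filtering user profiles by attributes inferred from their digital footprint and pairing each narrow segment with a tailored narrative:

```python
# Hypothetical sketch of data-driven micro-targeting / precision segmentation.
# All field names, traits and narratives are invented for illustration only.
profiles = [
    {"id": 1, "age": 22, "region": "north", "inferred_concern": "unemployment"},
    {"id": 2, "age": 67, "region": "south", "inferred_concern": "public safety"},
]

# Each inferred concern is paired with content crafted to confirm the
# segment's existing biases (hyper-personalized messaging).
narratives = {
    "unemployment": "jobs-focused narrative",
    "public safety": "security-focused narrative",
}

def segment(profiles, concern):
    """Select only the users whose inferred profile matches the targeted concern."""
    return [p for p in profiles if p["inferred_concern"] == concern]

for concern, narrative in narratives.items():
    for user in segment(profiles, concern):
        # In a real campaign this would be delivered as a targeted ad or post.
        print(f"deliver '{narrative}' to user {user['id']}")
```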

The new, technology-enabled disinformation campaigns weaponize both information (creating an information-rich ecosystem where for each fact there is a plausible counter-fact and for each narrative a plausible counter-narrative) and our digital behaviour (using our personal data and our digital fingerprint to feed us only hyper-personalized, bias-confirming content).

Based on the digital fingerprint of each and every user, recommendation algorithms prioritize the posts most familiar to that user, ranking content by popularity and likelihood of engagement rather than by accuracy, facts or public interest.
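
To make this logic concrete, the following is a minimal, purely illustrative sketch in Python (not any platform’s actual ranking code; all names and weights are invented) of an engagement-driven feed ranker: the score rewards predicted engagement and topical familiarity, and accuracy never enters the calculation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_engagement: float  # estimated likelihood of clicks/shares (0-1)
    accuracy: float              # fact-check score (0-1); unused below

def familiarity(post: Post, user_interests: set[str]) -> float:
    # Posts on topics the user already engages with count as "familiar".
    return 1.0 if post.topic in user_interests else 0.2

def rank_feed(posts: list[Post], user_interests: set[str]) -> list[Post]:
    # Engagement-driven ranking: the score rewards popularity and familiarity;
    # note that post.accuracy never appears in the formula.
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement * familiarity(p, user_interests),
        reverse=True,
    )

feed = rank_feed(
    [Post("fraud claims", 0.9, 0.1), Post("budget report", 0.3, 0.9)],
    user_interests={"fraud claims"},
)
print([p.topic for p in feed])  # the engaging, bias-confirming post ranks first
```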

Around the world, digital platforms have offered many opportunities to amplify and spread disinformation on a massive scale with the help of various tools and practices: bots and networks of bots, fabricated and automated accounts, ‘like factories’, troll farms, click farms and fake followers/crowds (both for validation and defamation).

For example, documented disinformation campaigns on Facebook or Twitter often rely on bots, fully autonomous programs that tweet or post hundreds of times a day and night on a specific topic, or on cyborgs, hybrid accounts that combine bot activity with periodic human involvement. One such example was cited by the US Director of National Intelligence, who concluded in a 2017 report that the Russian government had conducted a complex disinformation campaign during the 2016 US presidential elections. The campaign relied both on coordinated inauthentic behaviour on social media platforms, involving bots and an army of trolls, and on state-controlled media sources that pushed specific narratives, falsehoods and conspiracy theories in order to ‘undermine public faith in the US democratic process’[1].

Limiting the spread of disinformation

As already underlined, the primary goal of current disinformation campaigns is not so much to convince the public of something as to sow distrust, instil political turmoil, exploit socio-economic cleavages, heighten emotional appeal and diminish the very possibility of rational, factual debate. They reach people directly, displacing traditional information intermediaries such as opinion leaders or mainstream media.

Disinformation 2.0 is definitely a phenomenon with strong technological roots. Its effects can be felt socially, culturally, economically and even epistemologically, since disinformation campaigns, based on the bombardment of facts, pseudo-facts and alternate realities, can undermine the ability of a community or society to act on a shared factual basis, i.e. on a shared epistemology. Yet despite being deeply rooted in technology, its resolution is not only technological but also social, economic and political at the same time. There are different categories of responses, among which the most prominent are: evidence-based research; education; diversification of the information ecosystem; adherence to voluntary codes of conduct; transparency and accountability of digital platforms (funding, functioning of algorithms, data collection practices); establishment of permanent oversight structures; and co-regulation and regulation.

Above all, limiting the spread and effectiveness of such disinformation campaigns demands dealing with the internal vulnerabilities confronting Western societies, such as financial and economic challenges; socio-economic inequalities (‘people left behind’); poor governance outcomes; and weak media and information ecosystems. The temptation to explain these vulnerabilities solely or predominantly through disinformation campaigns must be resisted; investing in resilience, by contrast, offers a means of addressing these and many other vulnerabilities.

[1] US Office of the Director of National Intelligence, Background to ‘Assessing Russian Activities and Intentions in Recent US Elections’: The Analytic Process and Cyber Incident Attribution, 6 January 2017 [accessed 6 March 2021]. Available on dni.gov: www.dni.gov/files/documents/ICA_2017_01.pdf.