The first chapter of the DOMINOES Handbook maps the main current developments in the informational environment in order to better understand them and to lay the foundation for the most efficient and effective means of countering disinformation.
The first section of the chapter traces the evolution of disinformation in mainstream and social media. More precisely, it analyses how inaccurate information reaches individuals’ newsfeeds and social media pages, how and why it goes viral, and in what ways online and offline communication both contribute to the spread of disinformation.
The second section focuses on the ways in which narratives can become the core of disinformation and propaganda activities, by exploiting elements of cultural identity, ideology, and feelings of anxiety and anger. At the same time, narratives can also provide the best instruments for countering disinformation and boosting resilience to propaganda.
Related to narratives, but broader in scope, conspiracy theories have also become an integral part of disinformation and propaganda campaigns, altering the ways in which citizens construe social reality and subverting constructive public debate.
Last but not least, the chapter looks into a very recent development in countering disinformation: the instrumentalization of intelligence in strategic communication, whose goal is to prevent propaganda from reaching its objective. Strategic communication can persuade target audiences by informing them promptly about the true course of events.
1.2 Evaluating data, information and digital content;
2.1 Interacting through digital technologies;
2.3 Engaging citizenship through digital technologies.
The present section investigates how disinformation currently spreads in the online environment and in traditional media, with a view to identifying the challenges raised by the increasing speed at which information of any kind circulates in society. It also provides an overview of the types of incorrect information that exist, their features and their potential effects.
Main research questions addressed
● What are disinformation and fake news?
● What are the types of disinformation?
Today, we are part of a community in a highly interconnected world, linked mainly through communication technologies.
As we have become so interconnected, we share and look for our news online. The variety of information sources has contributed to the spread of alternative stories and facts, while the Internet and social media let us share any form of content. Fake news finds fertile ground online, as the information chaos of the pandemic has demonstrated. Most people do not fact-check or verify before sharing and therefore contaminate social media feeds with rumours, misinformation or conspiracy theories. In addition, social media filter algorithms prioritize the posts most familiar to each user; consequently, we are less exposed to alternative views. As social media has become a significant part of our lives, it is crucial to develop media literacy skills, including critical media literacy skills. Digital skills education that helps us recognise fake news and take proper measures is a key competence for all, from an early age and throughout life. In recent years, the issues surrounding the spread of disinformation in the online environment have been acknowledged and confirmed across the globe on several peak occasions, such as electoral campaigns, Brexit, the independence referendum in Catalonia, the COVID-19 pandemic and the war in Ukraine. Its consequences, as documented in the literature, include negative effects on political attitudes, distrust in media, and polarisation of opinion within online echo chambers.
The subject of "fake news" has attracted global interest since 2016, with the result of the referendum on Great Britain's exit from the European Union and the result of the US presidential election of the same year, especially since "fake news" was a label frequently used by former US president Donald Trump. Since then, interest in information disorder, or information pollution, as Claire Wardle and Hossein Derakhshan put it (2017, 5), and in the major impact that digital platforms have on the information ecosystem, including on the traditional mass media that they have displaced from the position of information mediators, has grown constantly. Interest exploded during the pandemic, when the threat that misinformation could pose to public and individual health was strongly documented and acknowledged.
At the origin of the broader phenomenon of information disorder and the more precise one of disinformation are the changes produced by the explosion of digital platforms. In short, this online explosion has dislodged the mainstream media from the position of recognized intermediaries of information and allowed the production, dissemination and amplification of content without any editorial filter. Furthermore, the way digital platforms work has allowed amplification to happen not only organically (as a result of real people participating in online conversations), but also through algorithmic amplification and personalization (bombarding the user with content in accordance with their preferences deduced from previous digital behavior) and through artificial amplification (troll factories, bot factories, engagement amplification software, fake crowds, fake followers, etc.).
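The logic of algorithmic amplification and personalization described above can be illustrated with a short, purely hypothetical sketch. This is not the ranking algorithm of any real platform; the scoring rule, field names and weights are illustrative assumptions chosen to show how rewarding engagement and topic familiarity can push dissenting content out of view:

```python
# Toy feed-ranking sketch: illustrative only, NOT any platform's
# actual algorithm. It shows how ranking posts by raw engagement,
# boosted by the user's past interest in a topic, narrows exposure.

def rank_feed(posts, user_topic_history):
    """Order candidate posts for one user's feed.

    posts: list of dicts with 'topic' and 'engagement' (likes + shares).
    user_topic_history: dict mapping topic -> number of past interactions.
    """
    def score(post):
        familiarity = user_topic_history.get(post["topic"], 0)
        # Engagement is rewarded regardless of accuracy; familiar
        # topics are boosted, so alternative views sink in the feed.
        return post["engagement"] * (1 + familiarity)

    return sorted(posts, key=score, reverse=True)

posts = [
    {"topic": "conspiracy", "engagement": 900},
    {"topic": "fact-check", "engagement": 300},
    {"topic": "sports", "engagement": 500},
]
history = {"conspiracy": 3}  # the user clicked conspiracy content before

for post in rank_feed(posts, history):
    print(post["topic"])  # the familiar, high-engagement topic ranks first
```

Even in this minimal model, the fact-check ranks last despite being the corrective content: the feedback loop between past clicks and future ranking is what produces the "less exposed to alternative views" effect described above.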
The phenomenon itself is not new. When newspapers first appeared, hundreds of years ago, news and articles containing conspiracy theories, lies or sensational stories always sold well. "Fake news" has become a catch-all description for the current information chaos. Fake news can be an invention, a lie, a media source that imitates an organization's official site, or a hoax created to persuade people that claims unsupported by facts are true. The term "fake news" is often seen as inadequate and imprecise, while the concept of "disinformation" offers a broader perspective on the phenomenon. Disinformation may be defined as verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally deceive the public. It includes various forms of misleading content, such as fake news, hoaxes, lies and half-truths, as well as artificially inflated engagement based on automated accounts, trolls, bots and fake profiles that spread and amplify social media posts. This is why understanding the phenomenon requires a complex approach.
Fake news is false or misleading information presented as real news. It can take on the appearance of real news and mimic legitimate news sources very closely. These fake stories are deliberately fabricated to deceive readers or to go viral on the Internet.
There are several warning signs that help us spot fake news. Firstly, we should check the source, the author and the date of the story. It is essential to verify the information, claims and cited sources by using a search engine, a reliable news source or a fact-checking organization. Fake content often comes from websites that use clickbait titles, poor grammar and words in ALL CAPS, or that present past events as recent news. Fake news also carries a strong emotional appeal and can provoke various emotional reactions, from fear and anger to sadness and joy. In the age of technology and social media, fake news has more tools at its disposal than ever before, including text, video and photo manipulation. It can also destroy trust: trust in media and journalists, in public institutions and government, and in scientific and health experts.
This new phenomenon can be understood from a broader perspective than the buzzword "fake news" would suggest. The latter term is seen as "inadequate, imprecise and misleading", and the phenomenon requires a more inclusive and complex approach. For the purposes of this study, we adhere to the understanding of disinformation as "all forms of false, inaccurate, or misleading information" created to "intentionally cause public harm or for profit" (European Commission, 2018). When defining disinformation, the current focus is on the actual content and its truth value, and consequently on specific countermeasures (i.e. fact-checking, debunking, developing counter-narratives), whereas digital disinformation relies on emotions and visual discourse, disseminated and, most importantly, amplified in the new digital ecosystem whose features we have previously underlined.
The Internet and social media allow disinformation campaigns to be created instantly and shared over digital platforms through automated accounts, fake profiles, bots or armies of trolls, with the advantages of low cost, rapid spread and high impact. Together, these actions and actors produce artificially inflated engagement (likes, comments, shares), which makes it necessary to identify and combat disinformation from a multi-layered perspective. Furthermore, the fact that manipulative and deceptive content manages to engage users directly is a highly successful strategy, as it creates a sense of ownership over the message (users can "endorse", contribute to, alter, and share disinformation that confirms their worldviews). This practice allows disinformation to infiltrate the most intimate spaces of communication. Given that viral content is inadvertently beneficial to digital platforms, their content curation algorithms are not prepared to deal with these particular challenges. Applying clear-cut rules and criteria for taking down viral fake information would tear at the whole fabric of social media, which is designed precisely to promote emotionally engaging content, whether or not it intends to deceive. Through artificial engagement, disinformation can reach large audiences and virally multiply its effects long before digital platforms or public authorities have the chance to spot it and react.
Fake or manipulative content has never been more prevalent than it is today. The Internet, new media and social media, along with instant messaging, are channels through which the spread of misleading news and disinformation is facilitated. This phenomenon weakens democracy, lowers trust in authorities and accentuates political and social polarization (also see 2.3). Furthermore, at the level of perception, disinformation is one of the major concerns of individuals: 85% of European citizens consider that news that misrepresents reality, or is even false, is a problem for their country, while 83% state that it is a problem for democracy in general (Eurobarometer 2018).
Disinformation in the online environment takes various forms. Researchers in the field include, under information disorder, various forms of intentionally misleading information (disinformation) and unintentionally misleading information (misinformation). Some of these involve content falsification, such as photo and video manipulation, fabricated content (100% fake) and impostor content (mimicking legitimate sources of news and information). Others fall into the sphere of interpretations that may mislead: false or misleading contextualization of facts, misleading content, clickbait (characterized by a mismatch between title and content, or between images and content), and satire and parody (whose documented effects on audiences may be harmful or misleading). The phenomenon of disinformation thus encompasses all forms of false, misleading or inaccurate content, from true information placed in a false context, to photo manipulation and video propaganda by actors with an ideological agenda, to entirely fabricated content.
In the current digital context, disinformation coupled with technological possibilities has produced what we could call "new-generation disinformation" or "disinformation 2.0". Amplification can be achieved through fake accounts, like factories and influencer networks (real or fabricated), and through "precision segmentation" or "computational segmentation", in which messages are targeted to users based on their digital profile and digital fingerprint. Another important aspect of digital amplification is the combination of bots and influencers. Lotito et al. (2021) studied how three types of agents, (a) commons (ordinary users), (b) influencers and (c) bots, disseminate disinformation messages in open social networks (OSN) and the reach each of them has.
They reach two important conclusions:
① Bots, as automated accounts, help the conspirators promote their messages as well as keep them alive by constantly retransmitting them;
② Influencers “amplify the small world effect” meaning they speed up the spreading of information in general, which may also include fake news. Therefore, the authors argue that it is important to pay special attention to educating and correcting influencers as they are a vital node in the viralisation of disinformation, and, consequently, could play an equally big role in minimizing its spread and impact.
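The intuition behind these two conclusions can be made concrete with a minimal toy simulation. This is not the Lotito et al. (2021) model; the follower counts, repost probability and retransmission counts below are hypothetical parameters, chosen only to illustrate why influencers widen reach through audience size while bots compensate for a small audience through constant retransmission:

```python
# Toy diffusion sketch (illustrative assumptions, not an empirical model):
# each agent posts to its followers, and each follower independently
# sees/reposts the message with a fixed probability per transmission.
import random

random.seed(42)  # reproducible runs

def audience_reached(n_followers, repost_prob, retransmissions):
    """Count distinct followers reached over repeated (re)posts."""
    reached = set()
    for _ in range(retransmissions):
        for follower in range(n_followers):
            if random.random() < repost_prob:
                reached.add(follower)
    return len(reached)

# A "common" posts once to a small audience.
common = audience_reached(n_followers=150, repost_prob=0.05, retransmissions=1)
# An influencer posts once, but to a very large audience.
influencer = audience_reached(n_followers=50_000, repost_prob=0.05, retransmissions=1)
# A bot has a small audience but retransmits the message constantly.
bot = audience_reached(n_followers=150, repost_prob=0.05, retransmissions=50)

print(common, influencer, bot)
```

Even with identical per-message repost probability, the influencer dominates through sheer audience size, and the bot ends up reaching nearly all of its small audience by keeping the message alive, which mirrors the two conclusions above.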
Disinformation can influence a number of social and political processes, from undermining democratic values and citizens' trust in governments to polarizing discourse and amplifying divisions between social groups or a state's political allies. Disinformation is created and circulated for many reasons: political, financial, social or psychological. Disinformation put into circulation for political reasons can generate political instability, influence citizens' behavior, undermine democracy or enable election fraud. Disinformation created for financial reasons typically aims to generate income from web traffic and advertising, based on a business model centered on viralisation. Disinformation created or circulated for social and psychological reasons can be aimed at entertainment, the desire to harm known or unknown people on the Internet, or the consolidation of group identity along ideological and partisan lines.
Rubén Arcos
Online communication has been identified as a great promoter of disinformation. However, one must not dismiss the role played by offline communication in the process. Communication flows are not limited to either the online or the offline sphere; they often interact and enhance each other's reach, and, used to their full potential, they can turn disinformation into a society-altering instrument whose effects become ever more visible in the real world.
Communication flows influence the reception of news stories and opinions disseminated in online media, and more broadly, they shape the opinions, attitudes, and behaviors of individuals particularly towards public issues. The COVID-19 pandemic has shown that perceptions of individuals about the measures implemented by public health authorities can have an impact on their acceptance or rejection of such measures and hence influence their behaviors and the global health situation. Misinformation and conspiracy theories disseminated in online and offline interactions have populated the pandemic context, generating confusion during the first stages and hence impacting public health.
There is a current trend to dismiss offline communication and the face-to-face interactions of individuals within human social networks of family and friends. However, offline communication continues to be an important source of influence in shaping perceptions and behaviors in our current information environment of digital communication. Offline communication also remains an important component of communication strategies, as demonstrated by studies on the February 20 Movement in Morocco:
According to all the interviews with senior communication and political strategists of the movement, the activists had to rely on both online and offline communication platforms. They did not manifest as an oppositional binary. Instead, they functioned as an organic hybrid that combined face-to-face interactions and communicative practices on the ground, with online mobilization and exchange of information. In several ways the February 20th activists used complementary communication strategies for both the online and offline environments (Abadi 2015: 128).
Another important aspect to be mindful of when analysing information flows and online-offline interactions is the role that "opinion leaders" play in setting the tone of these debates and in framing and viralising messages. In their classic study The People's Choice, Lazarsfeld, Berelson and Gaudet postulated a two-step communication flow and the importance of "opinion leaders" (in our digital era we might call them "influencers") in the reception of information by other, less active media consumers:
One of the functions of opinion leaders is to mediate between mass media and other people in their groups. It is assumed that individuals obtain their information directly from newspapers, radio, and other media. Our findings, however, did not bear this out. The majority of people acquired much of their information and many of their ideas through personal contacts with the opinion leaders in their groups. These latter individuals, in turn, exposed themselves relatively more than others to the mass media. The two-step flow of information is of obvious practical importance to any study of propaganda (Lazarsfeld, Berelson and Gaudet 1948: xxiii).
Elihu Katz (1957), in a later review of the two-step flow hypothesis for Public Opinion Quarterly, highlighted three aspects of interpersonal relations in influencing decision-making: they are information channels, “sources of pressure to conform to the group’s way of thinking”, and also sources of social support (Katz 1957: 77).
The model has been subject to criticism since its formulation for different reasons (see Weimann 2015), including the transformation of the information environment by developments in communication technology since the mass media era in which the hypothesis was formulated (Bennett and Manheim 2006). However, more recent studies have tested the two-step flow hypothesis in the context of Twitter-based discussion groups and found that a two-step flow of communication took place in online political discussions (Choi 2015).
Also with respect to communication on online platforms, Hilbert et al. (2017) developed a useful taxonomy of Twitter "communicator types" in the context of social movements in Chile (voice, media, amplifiers, and participants) and found that several of the proposed flow models (one-step, two-step, multistep) fit the findings of their empirical study. Along the same lines, Thorson and Wells have proposed the concept of "curated flows" of information and content, considering that "the fundamental action of our media environment is curation: the production, selection, filtering, annotation, or framing of content" (2016, 310). In this framework they differentiate between five different actors practicing curation:
» Journalistic curation
» Strategic communication curation
» Individual media users
» Human social networks (family, friends, colleagues)
» Algorithmic filters.
These studies bring into sharper focus how influencers may shape the informational environment and curate the information transmitted on social media (Ewing & Lambert, 2019). Recent studies and case analyses have examined the ways in which social media influencers use digital media to shape public opinion, especially given their increasing involvement in societal debates, whether related to politics, education, healthcare or the environment. Kadekova and Holienčinová (2018) identify four categories of social media influencers according to area of expertise: celebrities or macro-influencers, industry experts and thought leaders, bloggers and content creators, and micro-influencers.
Micro-influencers deserve increasing attention: they are more difficult to identify, yet their combined reach could prove transformative for society. Chen (2016) argues that, given their smaller number of followers, micro-influencers are seen as more authentic and closer to everyday people's concerns. Ong et al. (2021) draw attention to the sway that influencers can have, and they do not limit their research to what they term mega-influencers (people with more than 500,000 followers); they also focus on micro-influencers (10,000-100,000 followers) and nano-influencers (1,000-10,000 followers). The reason is that high-profile mega-influencers draw a lot of public attention to their posts, and this visibility exposes them to criticism, sanctions and even suspension from social media when they are discovered peddling disinformation. However, "micro-influencers [are] roped in to seed political messages in a much more clever and undetectable manner. This trend, we argue, has grave implications for electoral integrity and creates challenges in enforcing democratic principles of transparency and accountability" (Ong et al., 2021).
Micro-influencers are more likely to infiltrate communities to which macro-influencers, perceived as too distant, too high-profile or too controversial, would have no access. The reason behind this accessibility is what media anthropologists have termed "contrived authenticity". The term is used to "describe internet celebrities whose carefully calculated posts seek to give an impression of raw aesthetic, spontaneity and therefore relatability. This makes it easier for them to infiltrate organic communities and evade public monitoring" (Ong et al., 2021, 22). These micro- and nano-influencers engage more directly with their followers and are more intimate in their approach to the dissemination of messages (be they fake or not). For this reason, we argue, they are even more dangerous when it comes to disinformation: they mimic the relationships that people have with their offline communities and close circles of friends, in which trust is a given and interests and points of view are shared freely and are more often than not held in common. Fake messages can therefore be accepted more easily, with less scrutiny and examination, as they prey on previously formed trust and commonly held beliefs.
Offline interactions should never be neglected when analysing the reach and impact of disinformation, because networks in which people actually know each other well, have established relationships and share a bond of trust are fertile ground for the unconditional and unverified acceptance of fake content meant to mislead. And this dynamic, as previously explained, can be transposed into the online environment, in close communities where people have a history of interactions and a set of common beliefs, which function like bonds of trust and lead to more permissiveness in terms of the messages accepted.
Trolling is the behavior of posting content on the Internet, especially on social networks, with the express and sole purpose of provoking a reaction, sowing discord or eliciting an emotional response. The most common behaviors that can be considered trolling include personal attacks, provocative comments, insults and vulgar language. Often, the identity used for online trolling is a fake one (a fabricated social media account that cannot be associated with a real person).
What could alert us that an account is fake/fabricated?
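Commonly cited signals include a very recent creation date, a missing or generic profile photo, an inhuman posting frequency, a heavily skewed follower/following ratio, and handles that look auto-generated. These heuristics can be sketched as a simple checklist; the thresholds and field names below are illustrative assumptions, not rules from any platform:

```python
# Hedged sketch of common fake-account red flags. Thresholds (30 days,
# 50 posts/day, 1:10 follower ratio) are illustrative assumptions only.
from datetime import date

def fake_account_signals(account, today=date(2024, 1, 1)):
    """Return a list of red flags for a dict describing an account."""
    flags = []
    if (today - account["created"]).days < 30:
        flags.append("very recent account")
    if account["posts_per_day"] > 50:
        flags.append("inhuman posting frequency")
    if account["followers"] < account["following"] / 10:
        flags.append("follows far more accounts than follow it back")
    if not account["has_profile_photo"]:
        flags.append("default or missing profile photo")
    if account["handle"][-4:].isdigit():
        flags.append("auto-generated-looking handle")
    return flags

suspect = {
    "created": date(2023, 12, 20),
    "posts_per_day": 120,
    "followers": 12,
    "following": 4000,
    "has_profile_photo": False,
    "handle": "maria8342",
}
print(fake_account_signals(suspect))  # all five flags fire for this profile
```

No single signal is proof of fabrication; real users sometimes match one or two of these patterns, so such heuristics are best used in combination and followed by manual verification.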
1. Bennett, W. L., & Manheim, J. B. (2006). The One-Step Flow of Communication. The ANNALS of the American Academy of Political and Social Science, 608(1), 213–232. https://doi.org/10.1177/0002716206292266
2. Chen, Y. (2016, April 27). The rise of micro influencers on Instagram. Digiday. https://digiday.com/marketing/micro-influencers/
3. Choi, S. (2015). The Two-Step Flow of Communication in Twitter-Based Public Forums. Social Science Computer Review, 33(6), 696–711. https://doi.org/10.1177/0894439314556599
4. Abadi, Houda (2015). The February 20th Movement Communication Strategies: Towards Participatory Politics. Dissertation, Georgia State University. https://doi.org/10.57709/7367528
5. Funke, D., & Flamini, D. (2018). A guide to misinformation actions around the world. The Poynter Institute. https://www.poynter.org/ifcn/antimisinformation-actions/
6. Hilbert, M., Vásquez, J., Halpern, D., Valenzuela, S., & Arriagada, E. (2017). One Step, Two Step, Network Step? Complementary Perspectives on Communication Flows in Twittered Citizen Protests. Social Science Computer Review, 35(4), 444–461. https://doi.org/10.1177/0894439316639561
7. Kadekova, Z., & Holienčinová, M. (2018). Influencer marketing as a modern phenomenon creating a new frontier of virtual opportunities. Communication Today, 9(2), 90–105.
8. Katz, E. (1957). The Two-Step Flow of Communication: An Up-To-Date Report on an Hypothesis. The Public Opinion Quarterly, 21(1), 61–78. http://www.jstor.org/stable/2746790
9. Lazarsfeld, Paul F., Berelson, Bernard, & Gaudet, Hazel (1948). The People's Choice (2nd edition). New York: Columbia University Press.
10. Lotito, Q. F., Zanella, D., & Casari, P. (2021). Realistic aspects of simulation models for fake news epidemics over social networks. Future Internet, 13(3), 76.
11. Ewing, M., & Lambert, C. A. (2019). Listening in: Fostering influencer relationships to manage fake news. Public Relations Journal, 12(4), 1–20.
12. Ong, J., Tapsell, R., & Curato, N. (2019). Tracking digital disinformation in the 2019 Philippine Midterm Election.
13. Thorson, K., & Wells, C. (2016). Curated flows: A framework for mapping media exposure in the digital age. Communication Theory, 26(3), 309–328. https://doi.org/10.1111/comt.12087
14. Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS ONE, 13(9), 1–21. https://doi.org/10.1371/journal.pone.0203958
15. Vosoughi, S., Mohsenvand, M. N., & Roy, D. (2017). Rumor gauge: Predicting the veracity of rumors on Twitter. ACM Transactions on Knowledge Discovery from Data, 11(4), 1–36.
16. Weimann, Gabriel (2015). Communication, Two-step flow. In International Encyclopedia of the Social & Behavioral Sciences (2nd edition), Volume 4, pp. 291–293. http://dx.doi.org/10.1016/B978-0-08-097086-8.95051-7
17. Understanding Information Disorder. First Draft. https://firstdraftnews.org/long-form-article/understanding-information-disorder/
18. What are 'bots' and how can they spread fake news? BBC Bitesize. https://www.bbc.co.uk/bitesize/articles/zjhg47h
19. Tackling online disinformation. European Commission. https://digital-strategy.ec.europa.eu/en/policies/online-disinformation
20. Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
21. Investigating Disinformation and Media Manipulation. https://datajournalism.com/read/handbook/verification-3/investigating-disinformation-and-media-manipulation/investigating-disinformation
22. European Commission (2018). Final report of the High Level Expert Group on Fake News and Online Disinformation, 12 March 2018. https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fakenews-and-online-disinformation
23. Code of Practice on Disinformation (2018). https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation
24. Flash Eurobarometer 464: Fake News and Disinformation Online. Data Europa EU.