Debunking and fact-checking have been deemed important methods of countering disinformation. Funds and time have been allotted to implementing technical solutions that could assist in fact-checking online posts and marking them according to their veracity. Numerous NGOs are involved in determining the reliability of online information, and internet platforms such as Google, Facebook and Twitter have departments in charge of checking the content posted in their digital spaces. However, these methods have limitations, and newer ways of countering disinformation draw on inoculation theories, developing means for people to become resilient to disinformation before being exposed to it. This approach, known as prebunking, is gaining ground. It employs serious games (discussed in detail in section 5) as well as reverse psychology to help people develop the skills to spot disinformation and avoid falling prey to it. The current section focuses on how debunking and fact-checking operate, what their limitations are, and what further enhancements prebunking can bring to the fight against disinformation. It is clear from the outset that none of these methods is sufficient on its own. However, joint efforts, both before exposure to disinformation and after disinformation goes viral and is tagged as such, prove an effective means of stopping disinformation from spreading and eroding democratic societies, civic trust and cooperation.
Main research questions addressed
● What is debunking?
● What is fact-checking?
● What are the differences between debunking and fact-checking?
● How can fact-checking disinformation help stop its spread?
● What is prebunking?
● What role can reverse psychology play in educating people with respect to disinformation?
● What lessons can be learned from existing examples and practices of effective debunking and prebunking?
Strategies to combat disinformation have, until recently, been relatively under-researched in comparison to other issues in the broader field of the psychology of belief internalisation. Mahl, Scheffer and Zeng (2022) undertook a synthesis of the available literature on disinformation and on strategies to combat it. They identified several categories of academic literature: studies that focus on the willingness to believe or share disinformation, studies that examine disinformation narratives, and studies that focus on strategies to combat disinformation. Recently, effective strategies to combat disinformation have become highly relevant given the wide spread of disinformation on topics such as climate change, vaccination, COVID-19 and the conflict in Ukraine.
Two types of strategies have been identified in the literature on combating mis- and disinformation: (1) debunking, often associated with fact-checking; and (2) pre-bunking. Moreover, the literature also examines how psychological factors can be incorporated into attempts to counter disinformation, with growing attention paid to (3) reverse psychology.
Fact-checking emerged as a profession after the technological boom that created a 4.0 environment (Javaid et al 2022) characterised by a rapid and unconventional flow of information, new informational platforms and tools, and new, challenging online information consumption behaviours. Online falsehoods have a considerable impact on public attitudes and behaviour (Allcott & Gentzkow, 2017). For this reason, fabricated stories, as part of the disinformation phenomenon, represent a global problem that needs to be addressed properly (Pal & Banerjee, 2019).
Even though fabricated stories have been encountered throughout history (Edson, et al., 2018, p. 137), their number has grown in recent years as a result of the new era of Internet hoaxes such as Pizzagate (LaGarde & Hudgins, 2018). In addition, one of the phenomena that has driven the massive dissemination of fabricated stories in recent years is the so-called infodemic, recently defined by the World Health Organization as a “superabundance or excess of information, including false or misleading information, regarding a topic” (World Health Organization, 2022). In this situation, true content is mixed with false data (World Health Organization, 2022) in order to confuse the user. Infodemic contexts are fuelled by the digital communication system (Del Mar Rivas Carmona & Vaquera, 2020), which contributes, especially in crisis situations such as the COVID-19 pandemic, to an accelerated spread of misleading messages and hoaxes. As a consequence, disinformation, as well as misinformation, affects digital information consumption and generates negative effects on the social behaviour of individuals (García-Marín, 2020).
In addition, the young generations (Generations Y and Z) get their information primarily online and trust the data available on various social platforms and networks (Shifman, 2013). As a consequence, this new form of information consumption turns young people into likely victims of Internet hoaxes and of online disinformation disseminated in online communities and circles (Perez-Escolar, et al., 2021, p. 3).
Therefore, as part of the natural process of adapting to the current reality, each person needs to acquire the skills and competencies necessary to conduct self-fact-checking, in order to ensure they are informed correctly and in a timely manner (Mantzarlis, 2018). Moreover, the response to disinformation and misinformation must be complex: these phenomena are not only a problem for mass media outlets but also a social phenomenon that requires a response to a democratic problem (rather than a simple response to a perceived lack of credibility of media institutions) (Persily, 2017). The response should therefore be constructed by a matrix of actors, including the main news and media actors, as well as social networks and other technology companies. Developing such a response requires skills specific to the journalistic specialisation, as well as new methodologies and tools to cope with the demands of a continuously developing information system (Herrero & Herrera Damas, 2021, p. 53).
Fact-checking, as an activity, has historically been associated with journalism, reflecting its professionalisation in the 20th century as a “fact-centered” discipline (Graves & Amazeen, 2019, p. 3). Over time, however, fact-checking developed into a broader infrastructure consisting of people, organisations, routines and practices, principles and tools, which work in an interconnected manner to ensure that the general public is better informed (Graves & Amazeen, 2019, p. 1; Cotter, et al., 2022, p. 2).
Defined as an activity, fact-checking is considered to be a “continuous, consolidated practice of checking the veracity of public discourse” (Herrero & Herrera Damas, 2021, p. 51). Complementarily, according to Alexios Mantzarlis, the fact-checking process can also be defined as “a scrupulous analysis driven by one simple question – ‘How do we know that?’” (Mantzarlis, 2018, p. 84). Fact-checking cannot, therefore, be considered a simple “spell-checking process, since there does not exist a dictionary-style guidebook comprising all the possible facts or a software solution that can examine all documents and flag anytime something has been misstated as fact” (Mantzarlis, 2018, p. 84). In recent times, fact-checking has not limited itself to correctly informing individuals, but has extended to monitoring, spotting and disproving false claims.
Fact-checking currently plays an essential role in the so-called “post-truth” media landscape (Cotter, et al., 2022, p. 2). In the 21st century, fact-checking began to revolve around ensuring institutional accountability (Graves & Amazeen, 2019), as a result of several political claims that proved to be false, notably the Bush administration’s claims regarding weapons of mass destruction used as justification for the Iraq War (Marietta, et al., 2015, pp. 578-579). Technological developments gave Internet users the opportunity to create news-like content, and also allowed independent fact-checking sites (such as Snopes.com and Maldita.es) to become established in order to help dispel conspiracy theories and rumors while also trying to fill the role of watchdogs over politicians, journalists and other public figures (Cotter, et al., 2022, p. 3).
Fact-checking was initially the prerogative of media outlets, but it has since extended to nonprofit entities, think tanks, nongovernmental organizations and academic institutions, which have joined the community of fact-checkers (Stencel, et al., 2022). The increased level of disinformation disseminated during the 2016 U.S. elections demanded further engagement from the fact-checking community, which resulted in a 200% increase in the number of fact-checking entities (Fischer, 2020). According to the Duke Reporters’ Lab annual fact-checking census, in 2021 there were 391 active fact-checking projects, 378 of which were still operational in June 2022 (Stencel, et al., 2022).
In 2015 the International Fact-Checking Network (IFCN) was established at Poynter, “at the initiative of the checkers themselves who started to meet informally in 2014 in order to exchange good practices and also errors” (Herrero & Herrera Damas, 2021, p. 66). The main objective of the IFCN is to provide a platform that brings together the growing community of fact-checkers worldwide and to advocate for factual information in the global fight against disinformation (Poynter). The IFCN promotes excellence in fact-checking among more than 100 organizations worldwide through advocacy, training and global events (Poynter), and has also developed a code of principles as an instrument of accountability to guide fact-checkers worldwide, so as to ensure a nonpartisan and transparent verification process (Perez-Escolar, et al., 2021, p. 4).
At present, fact-checking is considered an essential instrument for combating the negative effects of false information, especially in the online environment. While past works demonstrated that fact-checking corrections can create a backfire effect and reinforce the public’s original, inaccurate beliefs (Nyhan, et al., 2013), more recent research has shown that fact-checking improves the accuracy of beliefs, supports the ability to evaluate claims correctly and contributes to reducing intentions to share deceptive headlines on social media (Nyhan, et al., 2020; Yaqub, et al., 2020). Given all of the above, it can be concluded that fact-checking can fulfil its core objective of ensuring a correct and well-informed public opinion, representing an essential tool for online platforms, which have become the general public’s main source of information for keeping abreast of current events and news (Cotter, et al., 2022, p. 3).
The literature often employs the terms debunking and fact-checking as almost perfect synonyms, but some distinctions exist.
Some researchers make a distinction between fact-checking, which happens once a material is made public and gains public relevance, and verification, which takes place before said material is publicly available. Verification is at the heart of journalistic integrity and focuses not only on the truthfulness of statements, but also on the identity of the producers and transmitters of content, be they human or digital (Balancing Act 2020).
Fact-checking occurs after a material becomes public, and refers to the process of verifying whether the facts in a piece of writing or in a speech are correct. To this end, fact-checking employs information from experts, academia and official governmental institutions (UNESCO Journalism Handbook 2018).
Traditionally, fact-checking was performed by journalists, newsrooms and political analysts. However, at present, due to the exponentially increasing volume of public information, fact-checking is also undertaken by independent organisations and NGOs, with a view to holding authors accountable and informing the public about the validity and factuality of their claims.
Debunking is considered a subset of fact-checking, as it relies on similar skills, but focuses more extensively on fake news and hoaxes (UNESCO Journalism Handbook 2018) and on user-generated content. Pemmet & Lindwall (2021) explain that debunking is not limited to exposing falsehood, but also addresses instances in which something is presented as less important, less good or less true than it actually is. They state that the overarching objective of debunking is “to counteract or minimise the effects of potentially harmful mis- and disinformation” (Pemmet & Lindwall 2021 6; RESIST 2 2021).
The objectives of debunking are mainly to: a) assert the truth; b) catalogue evidence of false information; c) expose false information and conspiracies; d) attribute the sources of disinformation; e) build capacity and educate (Pemmet & Lindwall 2021 6).
The most important differences between fact-checking and debunking are reviewed by Pemmet & Lindwall (2021):
a) Debunking may be partisan (if conducted by governments to expose certain actors), while fact-checking is impartial;
b) Debunking is targeted at a particular actor or a specific topic, chosen according to the effects the mis- or disinformation could produce if left unchallenged; fact-checking, by contrast, is broad in scope and targets any mis- or disinformation.
c) Debunking is strategic, as it prioritises its targets and does not address everything with equal effort. Some mis- or disinformation attempts, which are not perceived as posing threats to the debunker's interests and/or priorities, are not addressed.
Moreover, debunking also exhibits additional traits:
d) Debate-shaping as its efforts are directed at preventing or correcting manipulation of public debate.
e) Transparent regarding the debunker’s actions, objectives and funding.
f) Awareness-raising, because it also strives to educate about manipulative techniques (Pemmet & Lindwall 2021 16-17).
Despite the conceptual differences outlined above, in practice fact-checking and debunking more often than not work hand in hand. As explained in Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression (2020 66), fact-checking also employs proactive debunking techniques in order to expose the process behind the falsehoods, as well as the process through which they have been revealed.
Debunking is guided and informed by the principle that mis- and disinformation should not go unchallenged. To this end, Lewandowsky et al (2020 12) and RESIST 2 (2021 44) recommend using counter-messaging in order to correct, fact-check and debunk. Counter-messaging refers to the construction of corrective messages that maximise clarity and impact and that have a specific step-by-step structure:
1. Stick to the subject - do not get distracted into other, wider or collateral false narratives, but rather focus on the particular case of mis- or disinformation at hand.
2. Use facts, examples and evidence as much as possible and from as many credible sources as there are available.
3. Do not engage trolls and their rhetoric because it is counterproductive and time-consuming (RESIST 2 2021 44).
Lewandowsky et al (2020) emphasise the importance of countering disinformation as it appears because, if left unquestioned, it will “stick” due to the emotional appeal it is constructed on and the familiarity backfire effect (i.e., once misinformation is repeated a number of times it is perceived as accurate because it has become familiar, not because of its inherent truth value). Even once disinformation starts spreading, debunking can help curb its dissemination; however, the communicator must choose which pieces of disinformation to tackle, because human fact-checking resources are limited and should be used on the most potentially destructive myths.
Debunking can be considered a "retroactive" approach, aimed at correcting factually wrong information when it is already in circulation. Some argue that debunking may have been useful in a time when there were few written outlets and people read the same newspaper every day, but that its efficiency has declined considerably with the advent of new forms of media. Vosoughi, Roy and Aral (2018) have shown, by studying tweets on politically relevant topics, that fake news travels much faster and to a much wider audience than the rectification that follows: information that corrected falsehoods took six times longer to reach the same number of people and twenty times longer to be shared a similar number of times. Swire et al (2017) also showed that some people were unwilling to change their support for a candidate for political office in response to being shown correct information about a false statement made by that candidate, if they previously believed that statement was true. Arcos et al. (2021) have identified some serious challenges that need to be considered for advancing the fight against information manipulation and misleading content.
These challenges can be grouped into the following issues:
● The limits of fact-checking practices from the perspective of the audience/publics: This includes factors like acceptance of the verified information once the receiver has been exposed to the misleading content or the mis/disinformation piece.
● Persistence of the false or erroneous/biased information even after exposure to the fact-checked content or news story.
● Scope and implications of the so-called backfire effect.
● Real impact of fact-checking and debunking: cognitive effects (understanding, retention), but also those associated with attitude change/reinforcement and behavioural effects (e.g., medium- and long-term behaviours in the consumption of content and access to trusted sources of information and news stories).
● Professional challenges and issues of fact-checking practitioners: the volume of information, speed of dissemination of misinformation stories, perceived political neutrality of the fact-checking organizations, and others.
Looking at these issues in more detail, researchers have noticed that pre-existing attitudes and beliefs play a fundamental role in the acceptance of mis- and disinformation content by audiences (Ewoldsen & Rhodes, 2020). According to priming theory (Berkowitz, 1984), people react to the messages they receive depending on how they interpret the message, the ideas they bring with them and the thoughts the message evokes. On the other hand, an implication of cognitive dissonance theory (Festinger, 1962) is that individuals struggle to accept new information that challenges what they previously accepted, and actively seek information that reinforces the previously accepted belief or behaviour, in order to reduce the dissonance and restore balance.
The hostile media phenomenon, or effect (Vallone et al., 1985), refers to “the tendency for partisans to view media coverage of controversial events as unfairly biased and hostile to the position they advocate” (p. 584). The phenomenon is very relevant for fact-checkers since, no matter how neutral identical journalistic reporting and hard news stories are, the same news coverage of events will be perceived by partisans as hostile to their own positions (Feldman, 2017, p. 2). Walter et al. (2020) argue that holders of partisan positions are more vulnerable to disinformation and misinformation consistent with their own views, but are also likely to be more resistant to debunking and fact-checking processes if the content challenges "pre-existing beliefs, ideology, and knowledge” (Walter et al., 2019).
Several authors (Pennycook et al., 2020; Walter et al., 2020; Nyhan, 2021; Ecker et al., 2022; Ecker et al., 2020) have addressed the so-called backfire effect, which refers to the stronger adherence to pre-existing beliefs or thoughts when people are confronted with corrective information that challenges those beliefs or thoughts.
Another relevant limitation on the effectiveness of fact-checking is the persistence of the disinformation piece in the memory of the audience, and the question of why and how discredited information often continues to influence people’s thoughts and behaviours (Lewandowsky et al., 2012; Nyhan & Reifler, 2010). Johnson & Seifert (1994) refer to this persistence as the continued influence effect (CIE), and it has since been a topic of attention for researchers (Johnson & Seifert, 1994; Sussman & Wegener, 2022; Ecker et al. 2022; Kan et al., 2021).
With respect to the backfire effect, initial research on the persistence of disinformation found it to be widespread. However, subsequent studies suggested that the backfire effect is “extremely rare in practice” (Nyhan, 2021, p.2). Even if this phenomenon is less frequent than initially considered, it should not be disregarded. Confrontation with contrary information from fact-checkers will result in fact-checkers being perceived as hostile to the positions of partisans (Feldman, 2017) and can lead partisans to isolate themselves informationally, reject balanced media, and seek out ideologically similar sources and communities. This, in turn, may reinforce their views, resulting in greater social polarisation.
Addressing these effects, as well as understanding why misinformation and disinformation persist, is nowadays a significant challenge for fact-checking practitioners (and also a critical question for scientists in the communication and psychology fields).
However, more recent studies have demonstrated that the effects of debunking and fact-checking cannot be overlooked entirely. Several recent studies have focused specifically on the effects that marking information in social media as inaccurate has on participants' willingness to disseminate that information further to their networks. Chung & Kim (2020) examined whether fact-checking can deter the spread of fake news. Their hypothesis relied on third-person perception (i.e., the perception that, while one will not fall prey to fake news oneself, others might), which may decrease individuals’ willingness to share that news on social media. The researchers also discovered that fact-checking information moderates the effects of social media metrics on social sharing intentions: even if the information appears to have been shared or liked extensively on social media (which could indicate popularity and appeal), if there is a fact-checking warning, individuals will refrain from sharing it further.
Brashier et al (2021) noticed that correcting misinformation may have a short-term effect that fades in the longer term, and tested which form of fact-check produced the longest-lasting results. They introduced true or false tags before (prebunking), during (labelling) or after (debunking) the participants read the headlines, and concluded that providing the fact-checks immediately after the headlines (debunking) was conducive to longer-term retention of the information's veracity or lack thereof. Their assessment is that debunking thus works like feedback, which boosts long-term retention.
Pennycook & Rand (2017) assessed the role of cognitive reflection in the persistence of misinformation messages. They identified a negative correlation between analytical reasoning and the propensity to fall for fake news, and a positive correlation between analytical reasoning and the ability to discern fake news from real news. In other words, where cognitive reflection exists, people are less likely to engage with fake news. However, this reasoning affects some individuals more than others: political partisans are more likely to scrutinise fact-checking content and try to develop counter-arguments against it (Walter et al., 2019). Other authors point out that rational judgement must be complemented with visual information, such as images, graphics and fact-checking labels on social media platforms, and with verbal quantifiers of the trustworthiness of the sources (Nyhan, 2021; Hameleers et al., 2020; Van der Bles et al., 2020).
Pennycook et al (2020) first analysed the implied truth effect, which occurs when people who are used to seeing warnings about the veracity of online information are not presented with such a warning and are therefore more likely to believe the information is accurate, even when it is not. This raises serious issues, because third parties are needed to examine that information and verify its accuracy, an impossible task given the large volume of information available online, and it creates a dilemma in the public’s mind: if the warning is absent, does it mean the information has been verified and deemed accurate, or that it has not been verified? The results of Pennycook et al’s study indicate that unmarked headlines are viewed as more accurate and are more likely to be shared. Therefore, debunking efforts should be undertaken to provide accuracy warnings on trending topics.
Other studies have indicated that polarisation on controversial topics can be reduced by explaining the scientific consensus on the topic (Nyhan, 2020). A paradigmatic example is climate change, where the scientific consensus reaches 97%; however, only 12% of the American population knows that the consensus is almost absolute, and the critical or denialist scientific community amounts to only 3% (Leiserowitz et al., 2017).
As far as the backfire effect is concerned, it has been discussed in research on radicalisation (Day & Kleinmann, 2017) and has been identified as especially strong within the conservative audience in the USA (Nyhan, Reifler, and Ubel, 2013). Margolin, Hannak, and Weber (2018) highlight the relevant role of the underlying social structure in correcting disinformation: a prior relationship between the individual who receives the correction and the individual from whom the disinformation came makes acceptance of the debunked content more likely. The authors did not find differences between the correction of political and non-political rumours; in both cases, corrections from followers and friends are more likely to be accepted.
The backfire effect has also been associated with emotional reactions. Trevors (2022) established a predictive relationship between the refutation of contents and the emotions it provokes in individuals: when refutations are perceived as attacks on one's identity, they produce a negative emotional reaction that precedes the revision and refutation of the contents. On the other hand, negative emotions (e.g., anger or fear) may be particularly likely to evoke the continued influence effect (CIE). Confrontation with information that provokes negative emotions can cause individuals to experience discomfort, which they then try to avoid by forgetting the (fact-checked) information that is uncomfortable for them. This perpetuates the CIE, which results from the activation of both emotional and cognitive mechanisms (Susmann & Wegener, 2022). According to these findings, by avoiding potential discomfort through customised fact-checking strategies, fact-checkers may be able to reduce the persistence of misinformation among holders of partisan positions.
Researchers have also noticed that numerous organisations involved in fact-checking and debunking are developing educational programmes to help the public develop their ability to detect mis- and disinformation online. As there is much information to be tackled by small groups of fact-checkers and debunkers, it is important to analyse these educational and popularisation endeavours and the materials they have developed; this will be the scope of section 6. Moreover, it is necessary to develop a strategy of engagement with communities of users as part of the fact-checking endeavour, because this will increase the chances that individuals are exposed to verified content and news stories, not directly from fact-checkers but via friends and relatives.
Defining the core skills of fact-checkers has been an objective of both employers and educators; most studies on this topic were based on the hypothesis that journalistic skills and competencies are not automatically transferred into skill practice (Ornebring & Mellado, 2016). Since the scarcity of skills and the lack of time are two main factors that influence the ability of journalists to verify a piece of information and conduct fact-checking activities, it is fair to say that skill practice is influenced by time (opportunity) and context (Himma-Kadakas & Ojamets, 2022, p. 870).
The core skills of journalists are usually transferable, with journalists using them in various stages of the news reporting process. However, before focusing on the main skills a fact-checker should possess, it is important to identify and define the fact-checking skills that can be developed during/within a study program, given the fact that nowadays each journalism student learns how to use available journalistic tools and methodologies in order to avoid becoming a victim of the disinformation phenomenon.
Therefore, a study conducted by a research team from Loyola Andalucia University on first-year students enrolled in a specific course at two Spanish universities extracted a series of fact-checking skills within the context of social competencies, as briefly described in Figure 2 (Perez-Escolar, et al., 2021, p. 7).
Ability to use information and communication technologies for communicating, accessing information sources, archiving data and documents, creating content, presenting tasks, learning, research and cooperative work
Ability to integrate knowledge and cope with the complexity of formulating judgements based on information that, being incomplete or limited, includes reflections and decision-making based on evidence and arguments related to the application of their knowledge and judgements
Ability to think and act according to universal principles that are based on the value of the person, the cultural heritage and are aimed at the full personal, social and professional development of the student
Ability to question things and to research the foundations on which ideas, values, actions and judgements are based, and to promote the capacity for initiative in analysis, planning, organisation and management
Ability to present knowledge in all areas of knowledge, in a clear and unambiguous way, showing interest in interacting with others and ability to maintain a critical and constructive dialogue, as well as to speak in public if necessary
1. Use and mastery of technological means
2. Expression in the media without grammatical or spelling errors
3. Analysis of information sources
4. Ability to generate audiovisual content
5. Management of computer tools
6. Strengthening of research knowledge and skills
7. Capacity for collaboration, cooperation and connectivity
8. Manipulative skills and simultaneous tasks
9. Information search and analysis skills
10. Improvement and development of receptive communication.
11. Cognitive reflection
12. Assertiveness and empathy
13. Have an ethical and responsible attitude of respect for people and the environment, with responsible consumption
14. Interpretation, argumentation and problem solving
15. Decision-making capacity
16. Social responsibility
17. Respect the fundamental rights and equality between men and women
18. Increased selective attention and mental alertness
19. Do not incite hatred, racism, homophobia, etc
20. Rejection in the face of hoaxes
21. Ensure the veracity of the data
22. Information contrast
23. Thought and critical reasoning both deductive and inductive
24. Participation and social gathering
25. Being able to integrate and work efficiently in multidisciplinary teams assuming different roles and responsibilities
26. Flexibility and/or adaptability
27. Integrity and value of the performance of the professional activity
28. Tolerance to stress
29. Self-employment
30. Initiative capacity in analysis, planning, organization and management
31. Creativity
32. Motivation for achievement
33. Initiative and leadership
34. Fostering the imagination
35. Development of innovative capacity
36. Independent learning
37. Identify, practice and project proactive competition
38. Retention and synthesis capacity
39. Teamwork
40. Social responsibility
41. Dialogue critically and constructively
42. Ability to interpret, argue and solve problems
43. Self-confidence
In addition to these skills, studies on journalistic skills and competencies have identified further skills specific to the fact-checker profession.
All in all, one can say that the profile of the fact-checker differs from the traditional journalistic role, going beyond the traditional journalistic practice. However, this topic has been poorly addressed until now and still requires further research in order to be able to develop a general fact-checker profile that can be applied to all the professionals that fulfill this role in different domains of activity.
Given the informational overload to which people are subjected, studies have found that another efficient strategy to stop the spread of disinformation is a proactive approach called "pre-bunking" (pre-emptive debunking). This relies on the idea of "inoculating" people against disinformation so that they are better trained to identify disinformation tactics when faced with them. Proponents of pre-bunking (Lewandowsky and van den Linden 2021) believe that, just as with a real vaccine, once a person comes into contact with a "weakened" version of a disinformation practice, they will become immune when encountering that practice in the real world.
One of the first mentions of the term pre-bunking in the academic literature on disinformation is the article by Cook (2016), who strongly criticises the debunking approach. He argues that debunking is inefficient because people build mental models in which the false information fits neatly. Once a retraction is circulated, that particular mental model would be incomplete if the new information was accepted and the old one corrected. However, according to Cook, people prefer complete, even if incorrect, mental models over incomplete ones. The alternative is to help people build correct mental models through inoculation, especially by preemptively exposing the logical fallacy employed to spread a particular piece of disinformation.
The PROVE framework (Blastland et al 2020) for evidence-based communication sets out five important rules that can help develop informative messaging on a variety of topics, keep audiences engaged and help educate them even on complex scientific matters. The goal of evidence-based communication is not to persuade, but to inform, and ultimately to re-empower individuals.
These steps would ensure that the public feels confident trusting the communicators when they warn about potentially harmful mis- or disinformation, and listens to their recommendations more openly than if persuasive techniques were used to reach them. Moreover, Blastland et al (2020) also explain that prebunking can play a decisive role in inoculating people against disinformation, but it must be done strategically: public forums and popular news sources must be scanned constantly for indicators of what the public is concerned about, so that likely disinformation topics can be detected ahead of time, before they become saturated with disinformation. Strategies for evidence-based communication on those topics can thus be devised in a timely manner.
Before the advent of online games (see section 5.1), the inoculation approach was conducted through in-class teaching of the controversy on various topics or through Massive Open Online Courses. Students taking part in classes involving pre-bunking strategies were then able to identify the main myths in their assignments.
A series of studies summarised by Lewandowsky and van den Linden (2021) has shown the efficiency of inoculation against fake news. Thus, van den Linden et al. (2017) and Cook et al. (2017) both conducted inoculation experiments in which people were presented with disinformation about climate change as well as an inoculation treatment consisting of warnings about disinformation techniques. Those who had received the "inoculation" before seeing the particular piece of disinformation rated the accuracy of the false statements much lower than those who had not been exposed to the inoculation treatment.
Pre-bunking also exhibits certain limitations. Firstly, it is moderated by partisanship as the effect is diminished for people from one side of the political spectrum. The second limitation concerns the setting: pre-bunking interventions are considerably more efficient in a laboratory setting than in the real world. Finally, the inoculation tends to wane after a while, as people are again exposed to the usual disinformation (Roozenbeck and van den Linden 2022).
Inoculation or pre-bunking has shown consistent results in stopping people from believing and sharing disinformation. By giving people a forewarning about the strategies that actors spreading disinformation use, pre-bunking convinces people to stop and think about what they are seeing, attempt to rate the accuracy of a piece of news and evaluate whether this is worth sharing further.
Reverse psychology has led to the development of another strategy aimed at creating awareness and resilience to propaganda and disinformation.
As a persuasion tactic, reverse psychology has been extensively used in marketing and is based on encouraging the target audience to do what is desired by advocating the opposite behaviour in a way that makes the alternative more convincing and alluring. Illustrative examples are offered by the study of social influence tactics under the compliance paradigm and include:
● foot-in-the-door, in which compliance with a small request increases compliance with a later, larger request;
● door-in-the-face, in which noncompliance with a relatively large first request increases compliance with an immediate, smaller request;
● disrupt-then-reframe, in which a request is phrased in unconventional terms and then reframed to the advantage of the influence source (Donald, Nail, Harper, 2010, 1).
Such tactics are normally used to exert influence in relationships of unbalanced power, in which the less powerful actor may wish to induce conformity with its suggestions. And while acting against one's true intentions may often be counterintuitive, the success of such tactics, as evidenced by the literature on the model of social response, proves that it is worth documenting strategies of persuasion that can anticipate the behaviour of the other actor engaged in communication (Nail & Van Leeuwen, 1993; Willis, 1965).
Serious gaming, a concept that shall be studied extensively in section 5.2, has allowed the pairing of reverse psychology tactics with the therapeutic paradox, namely the therapist's use of a situation “that can only be controlled by using direct communication and by abandoning indirect tactics” (Klein, 1974), that is, by renouncing pathological behaviour patterns.
It has been shown that the construction of a psychological paradox situation can be used to apply reverse thinking and expose a target audience to a deeper understanding of the manipulative tactics used by propaganda and disinformation outlets. Think of a person who gets their news from alternative channels (clones of news portals, grey-zone sites and blogs promoted via bots, etc.). When placed in a situation in which they have to step into the shoes of the propaganda and/or disinformation agent and use disinformation tactics themselves, the first expected outcome is increased resilience to tactics with which they have become familiar via debate, study, practice and serious gaming. In psychological terms, this amounts to creating a situation that cannot be handled via old patterns of information collection, and in which the individual needs to react in full awareness of, and in defence against, their exposure to false information and manipulation techniques. Hence the expected cognitive and behavioural change.
However, in order to favour behavioural change through reverse psychology techniques employed in e.g. serious gaming tactics, one must “incorporate sound cognitive, learning, and pedagogical principles into their design and structure” (Greitzer, Kuchar, Huston, 2007, 1).
In an age in which social media has become the main source of information for almost all social categories, fact-checking represents an essential step in ensuring a correct judgement of the credibility of information obtained from the Internet. Given the technological boom of the last decade, which facilitated the development of multiple technological instruments and tools that can be used online for both positive and negative ends, the increased number of fake news sites, hoaxes and instances of misinformation online is now considered a concern and, to some extent, a security issue (Stenger, 2016).
In this context, being able to tell the difference between reality and manipulated information is no longer a capability developed naturally over a lifetime, but a skill that must be trained and practised regularly in order to keep up with the volatility of the online environment. As a consequence, the fact-checker has become a key profession in both the informative/communicative and democratic processes of contemporary society (Herrero & Herrera Damas, 2021, p. 51), with professional fact-checkers representing an essential factor in the control of information disseminated in the online environment (Herrero & Herrera Damas, 2021, p. 49).
Some experiments have been conducted to evaluate the effects of exposure to disinformation on target audiences. These cover three different sets of effects: cognitive, emotional and behavioural.
One of the main conclusions of these studies is that, although the effects of disinformation depend on several factors, pre-existing attitudes and beliefs play a fundamental role in the acceptance of malicious content, such as disinformation narratives, by individuals (Ewoldsen & Rhodes, 2020). However, unless pre-exposure and post-exposure surveys are conducted with some frequency, it will be difficult to assess real impact and effective influence on attitudes and behaviour.
According to Arcos (2018), more evaluative research based on social research techniques is necessary to provide findings “on the cognitive/informational impact (message exposure, understanding, and retention), attitude impacts (attitude creation, modification, and reinforcement), and behavioural effects (how people will behave or will cease to behave as a consequence of accepting those malicious messages)”. It is of utmost importance to track these evidence-based assessments to evaluate the medium- and long-term effects of mis- and disinformation on societies. Valkenburg & Oliver (2020) discussed the need to gather reliable data on audiences and their exposure to disinformation messages (noting that information can come from multiple devices or channels) for a fully comprehensive understanding of its impact. In the same way, more research is needed on the impact that fact-checked content has on audiences. Although some evaluation research has been conducted, it has focused mainly on electoral processes (Wintersieck, 2017). Measurement of web traffic to fact-checking organisations and tracking of the information flows of fact-checked content on social media can be useful, but do not provide full tracing of fact-checked contents.
Lukito’s research (2020) on the activities of the Internet Research Agency (IRA), and on how these were coordinated across different social media, suggests the value of conducting post-mortem analyses (Arcos 2018) covering full, platform-by-platform tracking, in order to detect which channels have been used and how, and to develop ad hoc fact-checking strategies.
The growth of disinformation and misinformation and its expansion to non-political issues has eased the debate on whether fact-checking activities should be carried out by newspapers or by independent organisations. Fact-checking was always part of the journalistic process, but the new information environment has raised the need for verification of a range of issues that surpass traditional political news stories. The rise of new fact-checking organisations, a result of the higher demand for verification related to the COVID-19 infodemic, seems to have affirmed the principle of political neutrality; a high number of fact-checking organisations are NGOs, and hence this principle is not questioned in the same way as it is for newspapers.
Unlike traditional news outlets, which also carry opinion sections, develop news coverage of specific events and developments, and set the agenda on political and international news, fact-checkers emerge as active listening agents that identify and satisfy the information needs of their communities. Fact-checking organisations are the result of the new paradigm of journalism marked by technological development and citizen interaction in information activity, a model that, as Pisani (2008) points out, breaks with the hierarchical, "from one to many" communication model and moves to a horizontal, "from many to many" model.
From this approach, a mutually beneficial relationship is established between the organisation and its consumers. Involving audiences in the different phases of fact-checking can be essential to fill the existing gaps in organisations: expanding the range of mis- and disinformation topics covered, identifying viral information disseminated on closed messaging platforms, and contributing to the dissemination of corrected information. The commitment to programmes such as Superpoderes of the Spanish foundation Maldita is a good example of how the knowledge of audiences (on health, politics, climate, international relations, history, technology, etc.) can be integrated into organisations, supporting the phase of consulting independent experts to interpret the data (Graves, 2017).
A recent challenge these organisations have to deal with is the appearance of informal or non-professional fact-checkers who, mainly via Twitter, disseminate corrected mis- or disinformation, a phenomenon that has grown with the war in Ukraine. While in some cases these profiles can help professional fact-checkers, it is difficult to determine possible covert intentions. Barriers to entry must be established, such as the monitoring of methodologies, expert knowledge validated in a given field, and the verification of professional credentials and reputation.
On the other hand, the growth of disinformation activity in recent years has been linked to a boom in the research and development of tools aimed at automating certain tasks that are necessary for the verification process. This automatic fact-checking relies on algorithmic models based on deep learning, machine learning, natural language processing (NLP) and big data (Huynh & Papotti, 2019; Miranda et al., 2019; García-Marín, 2022). These technologies support the detection of factual claims worth verifying, check whether a piece of content has already been verified, and perform claim validation to assess the veracity of the detected content (Miranda et al., 2019). Communication with audiences can benefit from intelligent chatbots (Cha et al., 2020) that incorporate generative adversarial networks (GANs) to retrieve and generate evidence and explanations in natural language.
These tools allow the user to check whether a piece of content has already been verified, as well as to submit content suspected of being mis- or disinformation for verification.
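To make the claim-detection step described above more concrete, the following is a minimal, illustrative sketch of how a check-worthiness classifier can be built with standard NLP tooling. It is not any of the cited systems; the tiny training set, labels and model choice are all assumptions for demonstration purposes.

```python
# Minimal sketch of the first stage of an automated fact-checking pipeline:
# flagging sentences that contain verifiable factual claims ("check-worthy").
# The six training sentences and their labels are invented for illustration;
# a real system would be trained on a large annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The unemployment rate fell to 3.5% last quarter.",   # factual claim
    "Vaccines contain microchips that track citizens.",   # factual claim
    "The city spent 2 million euros on the new bridge.",  # factual claim
    "I think the weather was lovely yesterday.",          # opinion
    "What a great game that was!",                        # opinion
    "We hope everyone enjoys the festival.",              # opinion
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = check-worthy, 0 = not check-worthy

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_sentences, labels)

for sentence in ["The minister claimed crime dropped 40% in 2021.",
                 "Good luck to all the participants!"]:
    verdict = "check-worthy" if model.predict([sentence])[0] else "skip"
    print(f"{verdict}: {sentence}")
```

In production systems, the output of such a classifier would feed the subsequent stages (checking against databases of already verified claims, then claim validation), rather than being an end in itself.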
Attention to the development of models to support verification has expanded in recent years, as a result of the boom in demand for content verification caused by the COVID-19 pandemic; however, multiple challenges are still pending. For example, García-Marín (2022) points out the absence of models for the analysis of fake audio, as well as scant attention to the detection of fake images and video. Likewise, changes in diffusion patterns and the entry into this scenario of social networks such as TikTok or Telegram, newer or previously little used for the dissemination of such content, mean that the tools for analysing them are still at an emerging stage.
This section will briefly present hands-on approaches, handbooks and toolkits developed by specialists in the field of communication, who have been confronted with disinformation in their activities and have developed means of combating it, which could prove valuable resources for communicators engaged in countering disinformation.
The RESIST 2 toolkit sets out six stages, each with hands-on instruments to operationalise them into actionable steps:
Stage 1 (Recognise disinformation): understand the types of disinformation that exist in the currently overcrowded media environment and the potential dangers and threats they pose. In order to recognise disinformation effectively, one first needs to analyse the messages produced. Secondly, the narratives behind the messages need to be identified, as well as the values, identities and beliefs they reflect (the writers of RESIST 2 call this the brand). Once the brand is identified, the interests can also be ascertained, as well as the potential impact of disinformation on target audiences.
The FIRST indicators for analysing the message:
a) Fabrication - Is there any manipulated content? E.g., a forged document, manipulated image, or deliberately twisted citation.
b) Identity - Does anything point to a disguised or misleading source, or false claims about someone else’s identity? E.g., a fake social media account, claiming that a person or organisation is something they are not, or behaviour that doesn’t match the way the account presents itself.
c) Rhetoric - Is there use of an aggravating tone or false arguments? E.g., trolling, whataboutism, strawman, social proof, and ad hominem argumentation.
d) Symbolism - Are data, issues or events exploited to achieve an unrelated communicative goal? E.g. historical examples taken out of context, unconnected facts used to justify conspiracy theories, misuse of statistics, or conclusions that are far removed from what data reasonably supports.
e) Technology - Do the communicative techniques exploit technology in order to trick or mislead? E.g. off-platform coordination, bots amplifying messages, or machine-generated text, audio and visual content. (RESIST 2 2021 10-11)
Stage 2 (Early warning): overview of the tools available to spot disinformation in a timely manner and monitor the media environment.
The first element that ensures early detection is monitoring the risks, which can be done using:
a) Platform analytics. Each social media platform has an analytics function that provides data on accounts or pages that you own. Platforms that you own pages on are an important source of insight for understanding how people engage with your content.
b) Google Trends. Shows how frequently terms are searched for on Google. The results can be broken down by time, country, and related queries to focus attention on a specific timeframe, location, and/or topic. This is useful for revealing spikes in interest and can help guide your attention to specific days, locations or topics where interest in a debate has changed (a minimal programmatic sketch follows this list).
c) TweetDeck. Create a Twitter dashboard to follow multiple timelines, accounts and search terms in real time. Note that you can monitor accounts and keywords in Tweetdeck without being a follower. Available at tweetdeck.twitter.com.
d) Browser extensions. There are a number of apps that can be added to your browser to speed up or even automate functions such as translation, image searches and taking screenshots. This is especially useful for speeding up simple tasks that you need to do often. (17)
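As an aside, the Google Trends monitoring described in point b) can also be scripted. The sketch below uses pytrends, an unofficial third-party Python client for Google Trends (not part of the toolkit itself); the keyword, timeframe and spike threshold are illustrative assumptions.

```python
# Illustrative sketch: flag sudden spikes of search interest in a topic,
# using the unofficial pytrends client for Google Trends.
# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
# Hypothetical monitoring target: interest in "5G health risks" over 3 months
pytrends.build_payload(kw_list=["5G health risks"], timeframe="today 3-m")
interest = pytrends.interest_over_time()

if not interest.empty:
    series = interest["5G health risks"]
    baseline = series.rolling(window=14, min_periods=7).mean()
    # Flag days where interest exceeds twice the two-week baseline
    spikes = series[series > 2 * baseline]
    print("Possible spikes in interest:")
    print(spikes)
```

A spike flagged this way is only an indicator that a debate is heating up; it still requires human review before any response is planned.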
The second element is to develop contingency plans in case disinformation affects priorities such as objectives, information, brands or audiences, and to come up with detailed scenarios of how these could be affected and how communicators could respond to those threats effectively.
Stage 3 (Situational insight): refers to the ways in which communicators can turn information into actionable insight for decision-makers. More precisely, the information resulting from stage 2 needs to be presented to decision-makers in such a way that it becomes relevant and can guide and inform decisions.
Stage 4 (Impact analysis): presents the structured analysis techniques that can assist communicators in predicting the potential impact of disinformation and in producing objective assessments.
When determining the impact, several key aspects must be taken into account and measured in order to obtain an informed and objective assessment rather than a gut feeling:
a) What is the degree of confidence? The results from the monitoring stage should be treated as indicators of possible trends rather than fixed and settled opinions, and should be evaluated in terms of risk (high, medium, low) and likelihood (high, medium, low).
b) How does mis- or disinformation affect your areas of responsibility? Clearly articulating potential consequences assists in identifying the best responses and resilience building methods.
c) How does the mis- or disinformation affect your communication with the public? Communicators need to use the FIRST indicators to answer this question.
d) How does the mis- or disinformation affect your brand? what interests/values/beliefs might be targeted, why and how?
e) What is the likely reach of the mis- or disinformation? How far can it spread and what kind of audiences can it reach?
f) How should I prioritise the mis- and disinformation? There is no need to address every single piece of disinformation that appears, since this is also impossible from a resource point of view. If the monitoring stage and the impact analysis are done properly, it becomes easier to identify the disinformation with the potential to affect the brand most significantly, which therefore needs to be addressed: “A prioritised response is one in which there is a clear and compelling need to protect government objectives, information, brands and/or audiences.” (29) A toy scoring sketch follows this list.
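To illustrate the prioritisation logic in points a) and f), here is a toy sketch (our own illustration, not part of RESIST 2) that combines the high/medium/low risk and likelihood ratings into a single score and sorts cases accordingly; the example cases and the multiplicative scoring rule are assumptions.

```python
# Toy prioritisation sketch: score mis/disinformation cases by
# risk x likelihood (both rated high/medium/low) and handle the
# highest-scoring cases first. The cases below are invented examples.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(risk: str, likelihood: str) -> int:
    """Combine the two ratings into one comparable score (1-9)."""
    return LEVELS[risk] * LEVELS[likelihood]

cases = [
    {"claim": "Forged letter attributed to the ministry", "risk": "high", "likelihood": "medium"},
    {"claim": "Satirical meme mistaken for real news", "risk": "low", "likelihood": "high"},
    {"claim": "Bot network amplifying a health hoax", "risk": "high", "likelihood": "high"},
]

for case in sorted(cases, key=lambda c: priority(c["risk"], c["likelihood"]), reverse=True):
    print(priority(case["risk"], case["likelihood"]), "-", case["claim"])
```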
Stage 5 (Strategic communication): maps the communication skills that can be employed to develop communication strategies meant to increase credibility and create proactive, engaging content for the target audience. If communication to correct disinformation is needed, then certain rules apply.
a) Follow communication best practice (the OECD Principles of Good Practice for Public Communication Responses to Mis- and Disinformation are recommended): transparency; inclusiveness; responsiveness; whole-of-society; public-interest driven; institutionalisation; evidence based; timeliness; prevention; future-proof.
b) What are my communication options? Communication can take place through traditional channels (radio, television, print newspapers), on digital platforms or on social media. The options also include proactive versus reactive methods. Proactive efforts include: inoculation, awareness raising, campaigns, network building, counter-branding and resilience building (RESIST 2 2021 40). Reactive efforts include: debunking, counter-narratives, crisis communication and policy response (RESIST 2 2021 43).
Track outcomes provides tools to measure the effectiveness of strategic communication campaigns. Measurement should take place along two different coordinates: outputs and outcomes. Outputs refer to the messages created and disseminated and are measured in terms of audiences reached and engaged. Outcomes refer to the impact of that communication on the world and are measured by tracking changes in the target audience's behaviour and thinking.
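As a minimal illustration of the outputs/outcomes split, the sketch below computes one metric of each kind; all figures and field names are invented for the example.

```python
# Outputs measure the messages themselves (reach, engagement);
# outcomes measure change in the target audience. Data is invented.
posts = [
    {"impressions": 120_000, "engagements": 4_800},
    {"impressions": 45_000, "engagements": 900},
]

# Outputs: audiences reached and engaged.
reach = sum(p["impressions"] for p in posts)
engagement_rate = sum(p["engagements"] for p in posts) / reach

# Outcomes: change in audience belief/behaviour, e.g. pre- vs post-campaign
# survey shares who correctly identified the debunked claim as false.
pre_survey_correct, post_survey_correct = 0.41, 0.57
outcome_shift = post_survey_correct - pre_survey_correct

print(f"reach={reach}, engagement_rate={engagement_rate:.1%}")
print(f"belief shift={outcome_shift:+.0%} points")
```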
The authors of the handbook define information influence activities as involving “potentially harmful forms of communication orchestrated by foreign state actors or their representatives. They constitute deliberate interference in a country’s internal affairs to create a climate of distrust between a state and its citizens”, using deception to undermine democracy (11-12). The handbook provides very useful guidelines that communicators in a company/organisation/institution/government can employ to counter such activities. Disinformation is considered a type of information influence activity, so the handbook guidelines can be used to counter it as well.
In order for a communicator to counter information influence activities and campaigns, they must follow certain guidelines:
The preparation stage is essential for assessing the risks and threats to which the company/organisation/institution might be or become subject, so that mechanisms can be developed to ensure an appropriate, adequate and fast response. The preparation stage focuses on three aspects:
a) Raising awareness. If the public inside and outside the company/organisation/institution, and society as a whole, are aware of the issues and of the possible threats and vulnerabilities, they are more willing to work together, pool their resources and knowledge, and create a more comprehensive resilience-building approach.
b) Building trust. Influence operations and disinformation campaigns aim to subvert trust and disengage audiences. Strategic communication's main role is therefore to promote the company/organisation/institution/government's values, vision, objectives, etc., through consistent, proactive, positive, accurate, well-devised and well-distributed messages, employing well-defined narratives that promote the core values.
c) Assessing risks and vulnerabilities, in order to know where the company/organisation/institution, or society as a whole, may be most exposed to disinformation attacks: “with a specific focus upon vulnerable stakeholders/audiences, key values, messages and narratives, and the overall risk to your organisation’s core activities” (34).
Responses need to be adapted to the type of organisation, to the public, to the channels they are transmitted through, and to the type of influence activity that targets the organisation.
a) Choose your response. The authors of the handbook propose four possible types of responses from which to choose: assess, inform, advocate, defend. The first two responses fit into a broader category of fact-based responses, while the latter two can be framed as advocacy-based responses.
Fact-based responses have two levels. Level 1, Assess, focuses on mapping the situation, fact-checking and investigating transparently, while Level 2, Inform, centres on making a statement, correcting, referring to independent sources that can corroborate the information, asserting values, notifying stakeholders and issuing a holding statement (MSB Handbook for communicators, 2020, 36).
Advocacy-based responses comprise the next two levels. Level 3, Advocate, includes dialogue, facilitation, multipliers (engaging with key communicators to disseminate further), piggybacking (using existing events, initiatives or debates to promote the facts of the case), formal statements and storytelling. Level 4, Defend, comprises ignoring (i.e., doing nothing about the disinformation), reporting (to the appropriate authorities), blocking (the user who promoted the disinformation) and exposing (the actor behind the disinformation) (MSB Handbook for communicators, 2020, 37). A minimal sketch of this four-level ladder follows point (c) below.
b) Check your facts. When countering disinformation and influence operations, facts matter because they support the legitimacy of the company/organisation/institution/government, which cannot be seen as engaging in the same type of deception. Therefore, regardless of the type of response adopted, facts should form and inform the basis for the narratives and messages.
c) Use social media, with its built-in functions of tagging, notifications, links and attachments, both to stay aware of the messages circulating about the company/organisation/institution/government and to build networks of transmission that can be activated and engaged when a disinformation campaign needs to be shut down.
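The four response levels in point (a) can be pictured as an escalation ladder: fact-based responses first, advocacy-based responses only when the lower levels are insufficient. The sketch below is an illustrative encoding of that idea, not an algorithm from the MSB handbook; the decision inputs and thresholds are assumptions.

```python
# Illustrative encoding of the four MSB response levels as an escalation
# ladder. The chooser's inputs and rules are assumptions for the example.
from enum import IntEnum

class Response(IntEnum):
    ASSESS = 1    # fact-based: map, fact-check, investigate transparently
    INFORM = 2    # fact-based: correct, refer to independent sources
    ADVOCATE = 3  # advocacy-based: dialogue, multipliers, storytelling
    DEFEND = 4    # advocacy-based: ignore, report, block, expose

def choose_response(verified_false: bool, spreading: bool, hostile_actor: bool) -> Response:
    """Toy escalation rule: assess first, escalate only as needed."""
    if not verified_false:
        return Response.ASSESS
    if hostile_actor:
        return Response.DEFEND
    return Response.ADVOCATE if spreading else Response.INFORM

print(choose_response(verified_false=True, spreading=True, hostile_actor=False))
```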
Communicators need not only to prepare and act, but also to evaluate the measures taken and assess their efficacy.
a) Describe. Actively describing the situation in which the company/organisation/institution/government was targeted by disinformation helps to establish an organisational understanding of the event, ensure continuity of best practices, and design other possible proactive measures.
b) Reflect. Analysing the effects of the disinformation and of the responses, and weighing the good and the bad outputs and outcomes, can also increase preparedness for future events.
c) Share. Experiences and expertise need to be shared among colleagues and with management so that there is a common understanding and culture regarding mitigating risks and vulnerabilities posed by disinformation.
First Draft is one of the first global initiatives to focus on “providing practical and ethical guidance in how to find, verify, and publish content sourced from the social web.” Its initial founding partners were BellingCat, Dig Deeper, Emergent.info, EyeWitness Media Hub, Google News Initiative, Meedan, Reported.ly, Storyful and VerificationJunkie; in 2016 it expanded to become an international Partner Network of newsrooms, universities, platforms and civil society organisations.
The International Fact-Checking Network (IFCN) at Poynter was also established in 2015. It is a global leader in fact-checking, promoting best practices and the fact-checking standards included in the IFCN’s Code of Principles.
MediaWise, also developed by Poynter, focuses on enhancing the ability of young generations (though not only theirs) to detect disinformation in online content, building skills such as critical thinking and media literacy so that users become critical consumers of online content. Its motto is: “We believe that when facts prevail, democracy wins.”
Debunk.org is an independent technological analytical centre and an NGO. It researches disinformation in the public sphere and conducts media literacy educational campaigns, employing artificial intelligence to research disinformation and to bridge the gap between the speed with which disinformation is produced and the speed with which it is debunked.
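As a hedged illustration of this kind of AI-assisted triage (the training data, labels and model choice below are placeholders; production systems are far more elaborate), a simple classifier can rank incoming posts so that human fact-checkers review the most suspicious items first:

```python
# Toy triage sketch: flag likely disinformation for human review.
# Texts, labels and the model are placeholder assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Official statistics office publishes quarterly inflation figures",
    "SHOCKING: miracle cure the government is hiding from you",
    "Ministry confirms schedule for parliamentary elections",
    "Secret lab leak PROOF they don't want you to see, share now",
]
labels = [0, 1, 0, 1]  # 0 = likely reliable, 1 = flag for human review

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Rank incoming posts by the probability of the 'flag' class.
incoming = ["They are hiding the miracle cure, share before it's deleted"]
print(model.predict_proba(incoming)[0][1])
```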
The European Digital Media Observatory (EDMO) is a hub for fact-checkers, academics and other relevant stakeholders, allowing them to work together; it actively links media organisations, media literacy experts, policy makers, teachers and citizens so that actions taken to fight disinformation are coordinated.
The NATO Strategic Communications Centre of Excellence (NATO StratCom COE) is a multinationally constituted and NATO-accredited international military organisation, which is not part of the NATO Command Structure nor subordinate to any other NATO entity. Its mission is to contribute to the strategic communications capabilities of NATO, NATO allies and NATO partners. It is made up of multinational and cross-sector participants from the civilian and military, private and academic sectors: trainers, educators, analysts and researchers.
This initiative appeared in the context of the COVID-19 infodemic and aimed to flood the online space with facts about the pandemic so as to counter the numerous disinformation campaigns. It also provides guidelines for creating evidence-based communication campaigns tailored to penetrate the saturated social media environment.
This Romanian initiative promotes digital education, raises public awareness and counters disinformation. It produces a series of products and analyses meant to enhance the general public's ability to identify disinformation and its manifestations.
This Romanian publication is a weekly report on the latest attempts at disinformation and their debunking, written by two prominent Romanian journalists.
Factual.ro, started in 2014, is the first fact-checking site for public policies and statements in Romania. It is entirely funded by readers' donations and does not accept any form of government funding.
This news platform, which also serves as an educational instrument for citizens, was initiated by the Romanian Ministry of Defence. It is updated by a community of military public relations officers and journalists, and its goal is to become a new public communication channel that correctly informs the public on topics pertinent to the Romanian Armed Forces and corrects non-factual debates in the public arena with respect to the Romanian Armed Forces.
1. Allcott, H. & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), pp. 211–236.
2. Arcos, R. (2018). “Post-event analysis of the hybrid threat security environment: assessment of influence communication operations.” Hybrid CoE Strategic Analysis 12. https://www.hybridcoe.fi/wp-content/uploads/2020/07/Strategic-Analysis-2018-12-Arcos.pdf
3. Arcos R, Gertrudix M, Arribas C and Cardarilli M. (2021). “Responses to digital disinformation as part of hybrid threats: a systematic review on the effects of disinformation and the effectiveness of fact-checking/debunking” Open Research Europe, 2(8). https://doi.org/10.12688/openreseurope.14088.1
4. Basol, Melisa, et al. (2021). "Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation." Big Data & Society, 8(1): 20539517211013868.
5. Basol, Melisa, Jon Roozenbeek, and Sander van der Linden (2020). "Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news." Journal of Cognition, 3(1).
6. Berkowitz L. (1984). Some effects of thoughts on anti- and prosocial influences of media events: A cognitive-neoassociation analysis. Psychol Bull. 95(3): 410–427.
7. Blastland, M., Freeman, A. L., van der Linden, S., Marteau, T. M., & Spiegelhalter, D. (2020). Five rules for evidence communication. Nature, 587(7834), pp. 362–364.
8. Brashier, N. M., Pennycook, G., Berinsky, A. J., & Rand, D. G. (2021). Timing matters when correcting fake news. Proceedings of the National Academy of Sciences, 118(5), e2020043118.
9. Carpenter, S., (2009). An Application of the Theory of Expertise: Teaching Broad and Skill Knowledge Areas to Prepare Journalists for Change. Journalism & Mass Communication Educator, 64(3), p. 287–304.
10. Cazalens, S. et al., (2018). A Content Management Perspective on Fact-Checking. Lyon, France, s.n., pp. 565-574.
11. Chan, M.-p. S; Jones, C.R.; Jamieson, K.H; Albarracin, D. (2017). “Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation”. Psychol. Sci. 28:1531–1546
12. Chung, M., & Kim, N. (2021). When I learn the news is false: How fact-checking information stems the spread of fake news via third-person perception. Human Communication Research, 47(1), 1-24.
13. Ciampaglia, G. L. et al. (2015). Computational Fact Checking from Knowledge Networks. PLOS ONE, pp. 1-13.
14. Compton, Josh, et al. (2021). "Inoculation theory in the post‐truth era: Extant findings and new frontiers for contested science, misinformation, and conspiracy theories." Social and Personality Psychology Compass, 15(6): 1-16.
15. Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). "Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence". PLOS ONE, 12(5), e0175799. https://doi.org/10.1371/journal. pone.0175799
16. Cook, John, et al (2022). "The cranky uncle game—Combining humor and gamification to build student resilience against climate misinformation." Environmental Education Research: 1-17 (online first)
17. Cook, John (2016). "Countering climate science denial and communicating scientific consensus." Oxford Research Encyclopedia of Climate Science. https://oxfordre.com/climatescience/abstract/10.1093/acrefore/9780190228620.001.0001/acrefore-9780190228620-e-314, accessed 5.09.2022.
18. Cotter, K., DeCook, J. R. & Kanthawala, S. (2022). Fact-checking the Crisis: COVID-19, Infodemics, and the Platformization of Truth. Social Media + Society, pp. 1-13.
19. Day, J. & Kleinmann, S. (2017). "Combating the Cult of ISIS: A Social Approach to Countering Violent Extremism." The Review of Faith & International Affairs, 15(3): 14-23. https://www.doi.org/10.1080/15570274.2017.1354458
20. Del Mar Rivas Carmona, M. & Vaquera, M. L. C. (2020). Pandemic and post-truth: The impact of COVID-19 on WhatsApp communication. Prisma Social, pp. 110-154.
21. Dobbs, M., (2017). The Rise of Political Fact-checking: How Reagan Inspired a Journalistic Movement: A Reporter’s Eye View, s.l.: New America Foundation .
22. Ecker, U.K.H., Lewandowsky, S. & Chadwick, M. (2020). “Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect.” Cognitive Research 5(41). https://doi.org/10.1186/s41235-020-00241-6.
23. Ecker, U.K.H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L.K., Brashier, N.M., Kendeou, P., Vraga, E. K. & Amazeen, M. A. (2022). "The psychological drivers of misinformation belief and its resistance to correction." Nature Reviews Psychology, 1. https://doi.org/10.1038/s44159-021-00006-y
24. Tandoc, E. C. Jr., Lim, Z. W. & Ling, R. (2018). Defining "Fake News". Digital Journalism, 6(2), pp. 137–153.
25. Ellefsen, R. & Sandberg, S. (2022). “Everyday Prevention of Radicalization: The Impacts of Family, Peer, and Police Intervention.” Studies in Conflict & Terrorism. https://www.doi.org/10.1080/1057610X.2022.2037185
26. Fabry, M., (2017). Here’s How the First Fact-Checkers Were Able to Do Their Jobs Before the Internet. [Online] Available at: https://time.com/4858683/fact-checking-history/
27. Full Fact (2020). The challenges of online fact-checking. London: Full Fact.
28. Festinger, Leon. (1962). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press
29. Feldman, L. (2017). The Hostile Media Effect. In: Kenski, Kate & Jamieson, Kathleen Hall (eds.), The Oxford Handbook of Political Communication. New York: Oxford University Press, pp. 549–564. ISBN: 978-0-19-979347-1.
30. Fischer, S., (2020). Fact-checking goes mainstream in Trump era. [Online] Available at: https://www.axios.com/2020/10/13/fact-checking-trump-media
31. García-Marín, D. (2022). "Modelos algorítmicos y fact-checking automatizado. Revisión sistemática de la literatura" [Algorithmic models and automated fact-checking: A systematic review of the literature]. Documentación de las Ciencias de la Información. Monográfico. Editorial Complutense. ISSN-e: 1988-2890. https://dx.doi.org/10.5209/dcin.77472
32. García-Marín, D., (2020). Global infodemic: Information disorders, false narratives, and fact checking during the Covid-19 crisis. Prisma Social.
33. Graves, L. & Amazeen, M. A., (2019). Fact-Checking as Idea and Practice in Journalism. In: Oxford research encyclopedia of communication. Oxford: Oxford University Press.
34. Graves, L. & Cherubini, F., (2016). The rise of fact-checking sites in Europe, s.l.: The Reuters Institute for the Study of Journalism.
35. Graves, L. (2017). “Anatomy of a fact check: Objective practice and the contested epistemology of fact checking.” Communication, Culture & Critique, 10:518–537. https://www.doi.org/10.1111/cccr.12163
36. Greitzer, F.L., Kuchar, O.A., and Huston, K. (2007). Cognitive science implications for enhancing training effectiveness in a serious gaming context. ACM Journal on Educational Resources in Computing, 7(3), Article 2 (August 2007), 10 pages. DOI: 10.1145/1281320.1281322
37. Hameleers, M., Powell, T.E., Van Der Meer, T.G.L.A., et al. (2020). "A Picture Paints a Thousand Lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media." Political Communication, 37(2): 281–301.
38. Herrero, E. & Herrera Damas, S. (2021). Spanish-Speaking Fact-Checkers around the World: Profiles, Similarities, and Differences among Fact-Checking Professionals. Revista de Comunicación de la SEECI, pp. 49-77.
40. Himma-Kadakas, M. & Ojamets, I. (2022). Debunking False Information: Investigating Journalists' Fact-Checking Skills. Digital Journalism, 10(5), pp. 866-887.
41. Huynh, Viet-Phi & Papotti, Paolo (2019). "A Benchmark for Fact Checking Algorithms Built on Knowledge Bases." CIKM '19, November 3–7, Beijing, China. https://doi.org/10.1145/3357384.3358036
42. Jarman, J. W. (2016). "Influence of political affiliation and criticism on the effectiveness of political fact-checking." Communication Research Reports, 33(1): 9-15. https://www.doi.org/10.1080/08824096.2015.1117436
43. Javaid, M., Haleem, A., Singh, R. P., Suman, R., & Gonzalez, E. S. (2022). Understanding the adoption of Industry 4.0 technologies in improving environmental sustainability. Sustainable Operations and Computers.
44. Johnson, H.M., & Seifert, C.M. (1994). “Sources of the continued influence effect: When misinformation in memory affects later inferences.” Journal of Experimental Psychology: Learning, Memory, and Cognition, 20: 1420–1436.
45. Kan, I.P., Pizzonia, K.L., Drummey, A.B. & Mikkelsen, E.J.E. (2021). "Exploring factors that mitigate the continued influence of misinformation." Cognitive Research: Principles and Implications, 6(76). https://doi.org/10.1186/s41235-021-00335-9
46. LaGarde, J. & Hudgins, D., (2018). Fact vs. Fiction: Teaching Critical Thinking Skills in the Age of Fake News. Portland, Oregon: International Society for Technology in Education.
47. Leiserowitz, A., Maibach, E., Rosenthal, S., Kotcher, J., Carman, J., Wang, X., Marlon, J., Lacroix, K. & Goldberg, M. (2021). Climate Change in the American Mind, April 2021. Yale University and George Mason University. New Haven, CT: Yale Program on Climate Change Communication.
48. Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). “Misinformation and its correction: Continued influence and successful debiasing.” Psychological Science in the Public Interest, 13(3): 106–131. https://doi.org/10.1177/1529100612451018
49. Lewandowsky, Stephan, and Sander van der Linden (2021). "Countering misinformation and fake news through inoculation and prebunking." European Review of Social Psychology, 32(2): 348-384.
50. Lewandowsky, S., Cook, J., Ecker, U. K. H., Albarracín, D., Amazeen, M. A., Kendeou, P., Lombardi, D., Newman, E. J., Pennycook, G., Porter, E. Rand, D. G., Rapp, D. N., Reifler, J., Roozenbeek, J., Schmid, P., Seifert, C. M., Sinatra, G. M., Swire-Thompson, B., van der Linden, S., Vraga, E. K., Wood, T. J., Zaragoza, M. S. (2020). The Debunking Handbook 2020. Available at https://sks.to/db2020. DOI:10.17910/b7.1182.
51. Lukito J. (2020). “Coordinating a Multi-Platform Disinformation Campaign: Internet Research Agency Activity on Three U.S. Social Media Platforms, 2015 to 2017.” Polit Commun, 37(2): 238–255
52. Klein, H. A. (1974). Behavior modification as therapeutic paradox. American Journal of Orthopsychiatry, 44(3), 353–361. doi:10.1111/j.1939-0025.1974.tb00888.
53. MacDonald, G., Nail, P. R. & Harper, J. R. (2011). "Do people use reverse psychology? An exploration of strategic self-anticonformity." Social Influence, 6(1). DOI: 10.1080/15534510.2010.517282
54. Mahl, Daniela, Mike S. Schäfer, and Jing Zeng (2022). "Conspiracy theories in online environments: An interdisciplinary literature review and agenda for future research." New Media & Society (online first). https://doi.org/10.1177/14614448221075759
55. Mantzarlis, A. (2015). Introducing Poynter's International Fact Checking Network. [Online] Available at: https://www.poynter.org/news/introducing-poynters-international-fact-checking-network
57. Mantzarlis, A., (2018). Fact-checking 101. In: Journalism, ‘Fake News’ & Disinformation. Handbook for Journalism Education and Training. s.l.:UNESCO, pp. 81-95.
58. Marietta, M., Barker, D. C. & Bowser, T. (2015). Fact-Checking Polarized Politics: Does the Fact-Check Industry Provide Consistent Guidance on Disputed Realities? The Forum, pp. 577–596.
59. Margolin, D.B., Hannak, A. & Weber, I. (2018). "Political Fact-Checking on Twitter: When Do Corrections Have an Effect?" Political Communication, 35(2): 196–219.
60. Marwick, A. E. (2018). "Why do people share fake news? A sociotechnical model of media effects." Georgetown Law Technology Review, 474.
61. Miranda, S., Nogueira, D. & Mendes, A. (2019). "Automated Fact Checking in the News Room." WWW '19, May 13–17, 2019, San Francisco, CA, USA. IW3C2 (International World Wide Web Conference Committee). ACM ISBN 978-1-4503-6674-8/19/05.
62. Nail, P.R. & Van Leeuwen, M.D. (1993). An analysis and restructuring of the diamond model of social response. Personality and Social Psychology Bulletin, 19: 106–116.
63. Nieminen, S. & Rapeli, L. (2018). Fighting Misperceptions and Doubting Journalists' Objectivity: A Review of Fact-checking Literature. Political Studies Review, pp. 1-14.
64. Nyhan, B., Porter, E., Reifler, J. & Wood, T. J., (2020). Taking Fact-Checks Literally But Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability. Political Behavior, p. 939–960.
65. Nyhan, B., & Reifler, J. (2010). “When corrections fail: The persistence of political misperceptions.” Political Behavior, 32(2):303–330. https:// doi.org/10.1007/s11109-010-9112-2
66. Nyhan B., Reifler, J., Ubel P. (2013). “The Hazards of Correcting Myths about Health Care Reform.” Medical care, 51(2): 127-132.https://www.doi.org/ 10.1097/MLR.0b013e318279486b
67. Nyhan, B. (2021). "Why the backfire effect does not explain the durability of political misperceptions." Proceedings of the National Academy of Sciences, 118(15). https://doi.org/10.1073/pnas.1912440117
68. Ornebring, H. & Mellado, C., (2016). Valued Skills among Journalists: An exploratory comparison of six European nations. Journalism, 19(4), pp. 1-19.
69. Pal, A. & Banerjee, S., (2019). Understanding online falsehood from the perspective of social problem. In: Handbook of Research on Deception, Fake News, and Misinformation Online. Hershey, PA: IGI Global, pp. 1-17.
70. Pamment, James & Anneli Kimber Lindwall (2021). Fact-Checking and Debunking: A Best Practice Guide to Dealing with Disinformation. NATO Strategic Communications Centre of Excellence.
71. Pennycook, Gordon, and David G. Rand (2021). "The psychology of fake news." Trends in Cognitive Sciences, 25(5): 388-402.
72. Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944-4957.
73. Perez-Escolar, M., Ordonez-Olmedo, E. & Alcaide-Pulido, P. (2021). Fact-Checking Skills and Project-Based Learning about Infodemic and Disinformation. Thinking Skills and Creativity, pp. 1-11.
74. Persily, N., (2017). The 2016 U.S. Election: Can Democracy Survive the Internet?. Journal of Democracy, Volume 28, Number 2, pp. 63-76.
75. Poynter, n.d. International Fact-Checking Network. Empowering fact-checkers worldwide. [Online] Available at: https://www.poynter.org/ifcn/about-ifcn/
76. Roozenbeek, J., & van der Linden, S. (2019a). "The fake news game: Actively inoculating against the risk of misinformation". Journal of risk research, 22(5), 570–580. https://doi.org/10.1080/13669877.2018.1443491
77. Roozenbeek, J., & van der Linden, S. (2019b). "Fake news game confers psychological resistance against online misinformation". Nature Humanities and Social Sciences Communications, 5(65). https://doi.org/10.1057/s41599-019-0279-9
78. Roozenbeek, J., & van der Linden, S. (2020). "Breaking Harmony Square: A game that “inoculates” against political misinformation". The Harvard Kennedy School Misinformation Review, 1(8). https://doi.org/10.37016/mr-2020-47
79. Roozenbeek, J., & Van Der Linden, S. (2022). How to combat health misinformation: A psychological approach. American journal of health promotion, 36(3), 569-575.
80. Shifman, L. (2013). Memes in a digital world: Reconciling with a conceptual troublemaker. Journal of Computer-Mediated Communication, 18(3), pp. 362–377.
81. Stencel, M., Ryan, E. & Luther, J., (2022). Fact-checkers extend their global reach with 391 outlets, but growth has slowed. [Online] Available at: https://reporterslab.org/tag/fact-checking-census/
82. Stenger, M., (2016). 8 Ways to Hone Your Fact-Checking Skills. [Online] Available at: https://www.opencolleges.edu.au/informed/features/8-ways-to-hone-your-fact-checking-skills/
83. Susmann M.K. & Wegener, D.T. (2022). “The role of discomfort in the continued influence effect of misinformation.” Memory & Cognition, 50: 435–448, https://doi.org/10.3758/s13421-021-01232-8
84. Swire, Briony, et al. (2017). "Processing political misinformation: Comprehending the Trump phenomenon." Royal Society Open Science, 4(3): 160802.
85. Traberg CS, Roozenbeek J, van der Linden S (2022) "Psychological Inoculation against Misinformation: Current Evidence and Future Directions" The ANNALS of the American Academy of Political and Social Science. 700(1):136-151. doi:10.1177/00027162221087936
86. Trevors, G.J. (2022). “The Roles of Identity Conflict, Emotion, and Threat in Learning from Refutation Texts on Vaccination and Immigration.” Discourse Processes, 59(1-2): 36-51. https://doi.org/ 10.1080/0163853X.2021.1917950.
87. Valkenburg P.M., Oliver M.B. (2020). Media Effects Theories: An Overview. In: Oliver, Mary Beth; Raney, Arthur A.; Bryant, Jennings. Media Effects: Advances in Theory and Research. Fourth Edition. Routledge Communication Series, Kindle Edition.
88. Van der Bles, A.M., van der Linden, S., Freeman, A.L.J., et al. (2020). "The effects of communicating uncertainty on public trust in facts and numbers." Proceedings of the National Academy of Sciences, 117(14): 7672–7683.
89. van der Linden S, Roozenbeek J and Compton J (2020) "Inoculating Against Fake News About COVID-19". Front. Psychol. 11:566790. doi: 10.3389/fpsyg.2020.566790
90. van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). "Inoculating the public against misinformation about climate change". Global Challenges, 1(2): 1-17 10.1002/gch2.201600008
91. Vosoughi, S., Roy, D., & Aral, S. (2018). "The spread of true and false news online". Science, 359(6380): 1146–1151.
92. Walter, N., Cohen, J., Holbert, R.L. & Morag, Y. (2019). "Fact-Checking: A Meta-Analysis of What Works and for Whom." Political Communication. https://www.doi.org/10.1080/10584609.2019.1668894
93. Walter, N. & Murphy, S. T. (2018). "How to unring the bell: A meta-analytic approach to correction of misinformation." Communication Monographs, 85(3): 423–441.
94. Willis, R.H. (1965). Conformity, independence, and anticonformity. Human Relations, 18: 373–388.
95. Wintersieck, A. L. (2017). “Debating the Truth: The Impact of Fact-Checking During Electoral Debates.” American Politics Research, 45(2): 304–331. https://doi.org/10.1177/1532673X16686555
96. Yaqub, W. et al. (2020). Effects of Credibility Indicators on Social Media News Sharing Intent. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), pp. 1-14.
97. RESIST 2. Counter disinformation toolkit, UK Government Communication Service, 2021.
98. Journalism, 'Fake News' & Disinformation: Handbook for Journalism Education and Training. UNESCO, 2018 (available in the UNESCO Digital Library).
99. Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression. Broadband Commission research report on 'Freedom of Expression and Addressing Disinformation on the Internet', 2020 (available at broadbandcommission.org).
100. Countering Information Influence Activities: A Handbook for Communicators. Swedish Civil Contingencies Agency (MSB), 2019 (available at msb.se).
101. World Health Organization, 2022. Infodemic. [Online] Available at: https://www.who.int/health-topics/infodemic#tab=tab_1