This section provides a short overview of the role of data protection in combating disinformation. The first part gives a brief introduction to the right to privacy and the data protection framework. The section then examines the dimensions of data protection most relevant to the prevention of disinformation, such as profiling, automated decision-making, the principles of data protection by design and by default, and sensitive personal data. It concludes with a short discussion of the limitations of the personal data framework in preventing and countering disinformation. The aim of the section is to show how European data protection legislation can be employed in countering disinformation, as well as the limitations that persist in this field.
Main research questions addressed
● How does data protection assist in tackling disinformation?
● What are the data protection guidelines available when tackling disinformation?
Several techniques exist to tackle disinformation, such as content moderation and takedowns, algorithmic de-ranking, fact-checking, and media literacy. These techniques have been promoted by the European Union through its communication on tackling online disinformation (Tackling online disinformation, 2018) and are crucial to combating mass disinformation campaigns. The previous section noted the frameworks in the European Union, and specifically in Spain, Malta and Romania, that provide a legal basis for implementing these techniques. However, a distinct subset of the problem is targeted disinformation, where individuals or groups of individuals are served customised content based on their profiles in order to maximise the effectiveness and spread of that content.
To understand targeted disinformation, it is essential to place fake news in its proper context: it is spread mainly via social media channels, where algorithms govern how posts are shown to users. This means that, with the right data about users, disinformation and fake news can be directed at precisely the people who are most likely to believe it. For example, if a political consultancy can gather information through social media to identify people who are more likely to support the causes espoused by one political party, it can then target those people with fake news denouncing the actions of the party opposing the one that hired it. In this manner, the consultancy can nudge undecided voters towards a particular political party or encourage passive supporters to exercise their right to vote. This example is not theoretical; it is precisely what Cambridge Analytica was accused of doing a few years ago (The Cambridge Analytica Files, 2018).
The problem of targeted disinformation campaigns is a major one. A 2018 study on anti-vaccination disinformation found that when Facebook altered its policies to stem the sharing of ads containing links to known fake news sites, the number of shares of such content on the platform fell by 75% (Chiou & Tucker, 2018). An analysis of the chain through which disinformation spreads shows that while fake content does not originate on social media websites, it is primarily through social media that such content is customised towards specific targets (Combatting Targeted Disinformation Campaigns, 2021). Data protection laws help combat this aspect of disinformation, an argument also made by the European Data Protection Supervisor (EDPS) in its Opinion 3/2018 and by the European Data Protection Board (EDPB) in its Guidelines 8/2020. The following sections delve into the specifics of the data protection framework and how it can be used to combat disinformation.
In Europe, the right to private and family life is set out in Article 8 of the European Convention on Human Rights (ECHR), which guarantees that "Everyone has the right to respect for his private and family life, his home and his correspondence" (Article 8(1) ECHR).
This right to private and family life in the ECHR underpins the right to the protection of personal data under Article 8 of the EU Charter of Fundamental Rights (ECFR), which states that "Everyone has the right to the protection of personal data concerning him or her" (ECFR, Article 8). Together, the Convention and the Charter set the stage for the EU's data protection framework, discussed further below. The General Data Protection Regulation (GDPR) acknowledges this explicitly in its recitals, noting that the right to personal data protection is not absolute and must be balanced against several other rights under the Charter. These include respect for private and family life, home and communications (Article 7), freedom of thought, conscience and religion (Article 10), freedom of expression and information (Article 11), freedom to conduct a business (Article 16), cultural, religious and linguistic diversity (Article 22), and the right to an effective remedy and to a fair trial (Article 47). The necessity of striking this balance arises because, as the GDPR states, all processing of personal data "should be designed to serve mankind" (GDPR, Recital 4).
This data protection framework and its implications for targeted disinformation campaigns are described further below.
In the European Union (EU), data protection falls under the aegis of the GDPR and the Law Enforcement Directive (LED), both adopted in 2016 and applicable from 2018. Neither instrument is explicitly meant to tackle disinformation; instead, they broadly define the principles underlying data protection in the European Union. Of specific relevance to targeted disinformation is the 2002 Directive on privacy and electronic communications (ePrivacy Directive), which deals with the responsibilities of electronic communications services and the privacy of electronic communications.
The principles most relevant to combating disinformation concern profiling and automated decision-making, and are dealt with separately below.
The GDPR and the LED define the parties to data processing as "controllers" and "processors", whether established in the EU or established outside the EU but offering services within it. The LED deals specifically with public authorities competent to prevent, investigate, detect or prosecute criminal offences, while the GDPR deals broadly with anybody else who determines the purposes and means of processing personal data. Despite addressing separate entities, the GDPR and the LED regulate the collection and processing of personal data in essentially the same way. Under both the GDPR (Article 4) and the LED (Article 3), "personal data" means any information relating to an identified or identifiable natural person, and "processing" covers any operation or set of operations performed on personal data, whether or not by automated means.
The ePrivacy Directive, on the other hand, deals specifically with the processing of personal data in the electronic communications sector, where "communication" is defined as follows under Article 2:
(d) “communication” means any information exchanged or conveyed between a finite number of parties by means of a publicly available electronic communications service. This does not include any information conveyed as part of a broadcasting service to the public over an electronic communications network except to the extent that the information can be related to the identifiable subscriber or user receiving the information;
The ePrivacy Directive thus complements the GDPR: it is limited to the 'users' of electronic communications services and defines only the responsibilities of such services.
In this way, the GDPR governs the processing of a user's personal data in general, while the ePrivacy Directive governs data processing in calls, messaging apps and similar services (such as telephony, or WhatsApp or Instagram direct messages). In each of these cases, the service providers (or "controllers") have access to the kind of data that makes it very easy to target users with advertising or other content, and which could be misused for the targeted delivery of fake news. Profiling, the tool used to identify targets for such content, is discussed below.
The erstwhile Article 29 Working Party (Art29WP), now replaced by the EDPB, noted in its opinion on online behavioural advertising that there are two main approaches to building user profiles, which can also be combined:
i) Predictive profiles are established by inference from observing individual and collective user behaviour over time, particularly by monitoring visited pages and ads viewed or clicked on.
ii) Explicit profiles are created from personal data that data subjects themselves provide to a web service, such as by registering. (Article 29, 2010)
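To make this distinction concrete, the following minimal sketch (in Python, with purely hypothetical field names and inference rules) illustrates the kind of data each profile type draws on; it does not describe any real platform's profiling system. Both structures relate to an identifiable person and would therefore constitute personal data once linked to a user.

```python
from dataclasses import dataclass, field
from collections import Counter

# Illustrative only: field names and inference rules are invented for this
# example and do not describe any real platform's profiling system.

@dataclass
class ExplicitProfile:
    """Built from data the data subject provides directly, e.g. at registration."""
    user_id: str
    age: int
    country: str
    stated_interests: list[str]

@dataclass
class PredictiveProfile:
    """Built by inference from behaviour observed over time."""
    user_id: str
    inferred_interests: Counter = field(default_factory=Counter)

    def observe(self, event: dict) -> None:
        # Each page visit or ad click strengthens an inferred interest.
        if event.get("type") in ("page_visit", "ad_click"):
            self.inferred_interests[event["topic"]] += 1
```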
Using such profiles for the targeted delivery of content will result in the processing of personal data as defined under the GDPR and the LED, because profiles, whether predictive or explicit, contain information relating to an identified or identifiable natural person. This matters because the GDPR lays down explicit legal bases and principles that must be adhered to when processing personal data. These principles, identified under Article 5 of the GDPR, include: (a) lawfulness, fairness and transparency; (b) purpose limitation; (c) data minimisation; (d) accuracy; (e) storage limitation; and (f) integrity and confidentiality. Of these, the principle of purpose limitation is especially important and states that personal data shall be:
(b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’); (Article 5(1), GDPR)
"Legitimate purposes" must be read together with Article 6, which provides six bases for the lawful processing of personal data. Of these, two are of particular relevance to targeted content delivery; under them, processing is lawful only if:
(a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes;
(f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child. (Article 6(1), GDPR)
Read in this manner, the purpose limitation principle shows that unless the user (or data subject) being targeted has provided consent, or unless the processing pursues a legitimate interest, processing personal data for the targeted delivery of content would run afoul of the GDPR. The GDPR further sets out the conditions for valid consent under Article 7: in summary, consent must be freely given, specific, informed and unambiguous, and the data subject must be able to withdraw it. Importantly, even where a data subject's consent meets these criteria, the processing of personal data must still adhere to the data protection principles enshrined in Article 5 of the GDPR; if the processing is not necessary for and proportionate to the intended objective, it would still be invalid. This need to meet the requirements of necessity and proportionality has also been noted by the Art29WP in its opinion on the conditions of consent under the GDPR (Article 29, 2016).
Through these provisions, the GDPR ensures that the data subject remains in control of their personal data. They imply that delivering targeted content to an individual who has not given consent would contravene the GDPR unless a legitimate interest (Article 6(1)(f)) can be established. In Fashion ID, the Court of Justice of the European Union (CJEU) set out three cumulative conditions that must be met for processing to be based on legitimate interest:
first, the pursuit of a legitimate interest by the data controller or by the third party or parties to whom the data are disclosed; second, the need to process personal data for the purposes of the legitimate interests pursued; and third, the condition that the fundamental rights and freedoms of the data subject whose data require protection do not take precedence (CJEU, 2019, C-40/17, para. 95)
If these three conditions are met, the legal basis of legitimate interest under the GDPR may be relied upon. Even so, as mentioned earlier, any processing of personal data must still respect the principles under Article 5, as well as the requirements of necessity and proportionality. This has also been made clear by the EDPB in its Guidelines 8/2020 on the targeting of social media users (EDPB, 2020).
In terms of predictive profiles, which are based on observing individual and collective user behaviour over time, it is necessary to look at the ePrivacy Directive, which deals specifically with tracking in electronic communications. This is because, in general, user behaviour is observed through tracking technologies such as cookies, which are stored on the user's device. Article 5(3) of the ePrivacy Directive states that:
Member States shall ensure that the use of electronic communications networks to store information or to gain access to information stored in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned is provided with clear and comprehensive information in accordance with Directive 95/46/EC, inter alia about the purposes of the processing, and is offered the right to refuse such processing by the data controller.
Directive 95/46/EC, referred to in the language above, has been superseded by the GDPR, so all references to that Directive now point to the GDPR instead. Article 5(3) itself was also amended by Directive 2009/136/EC, so that storing information on, or gaining access to information stored in, a user's terminal equipment now requires the user's prior consent rather than merely an opportunity to refuse. The EDPB has given its opinion on the interplay between the ePrivacy Directive and the GDPR, noting that the ePrivacy Directive particularises and complements the GDPR and that the two must be applied consistently (EDPB, 2019). Under Article 5(3) of the ePrivacy Directive, the user must be provided "clear and comprehensive" information on the purposes of the processing, similar to the principle of transparency under Article 5 of the GDPR. Further, the processing is based on the user's consent (the data subject's consent, in GDPR terms), and the conditions of that consent must be interpreted in line with Article 7 of the GDPR discussed above: the consent must be freely given, specific, informed and unambiguous, and the data subject must be able to withdraw it.
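By way of illustration only, the sketch below shows how a service might gate the storage of, or access to, information on a user's device on a prior, recorded consent signal, in the spirit of Article 5(3) ePrivacy Directive and Article 7 GDPR. The function and field names are hypothetical assumptions made for the example; this is not a statement of how any particular provider implements the rule.

```python
from datetime import datetime, timezone

# Hypothetical consent store: user_id -> consent record. The fields mirror the
# GDPR Article 7 conditions (informed, specific, withdrawable), but the exact
# structure is invented for illustration.
cookie_consents: dict[str, dict] = {}

def record_cookie_consent(user_id: str, purpose: str, informed: bool) -> None:
    """Store the user's consent to a specific tracking purpose."""
    cookie_consents[user_id] = {
        "purpose": purpose,
        "informed": informed,
        "given_at": datetime.now(timezone.utc),
        "withdrawn": False,
    }

def withdraw_cookie_consent(user_id: str) -> None:
    """Withdrawal must be possible at any time."""
    if user_id in cookie_consents:
        cookie_consents[user_id]["withdrawn"] = True

def may_store_tracking_cookie(user_id: str, purpose: str) -> bool:
    """Only store or read information on the device if valid consent exists."""
    c = cookie_consents.get(user_id)
    return bool(
        c
        and c["informed"]
        and not c["withdrawn"]
        and c["purpose"] == purpose
    )
```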
Taking this into account, whether the processing of personal data is through predictive profiles or explicit profiles, adhering to the principles of data protection set out in the GDPR is critical. Having said that, there are further limitations on decisions taken through the use of these profiles, which are discussed below.
As discussed earlier, targeted disinformation campaigns use user profiles to decide how to customise content and to whom to deliver it. The GDPR places specific restrictions on such automated decisions under Article 22, which states:
1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
2. Paragraph 1 shall not apply if the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or
(c) is based on the data subject's explicit consent.
For the purposes of targeted content delivery, Article 22(2)(c) is the most relevant exception because of its requirement of explicit consent. In its opinion on the conditions of consent, the Art29WP defines explicit consent as follows:
The term explicit refers to the way consent is expressed by the data subject. It means that the data subject must give an express statement of consent. An obvious way to make sure consent is explicit would be to expressly confirm consent in a written statement (Article 29, 2016, 18)
This differs from 'regular' consent, which under the GDPR requires only a statement or a clear affirmative action. Thus, before a data subject may be subjected to a solely automated decision of this kind, including profiling, the controller must obtain explicit consent (unless one of the other exceptions in Article 22(2) applies). A controller that has not obtained such explicit consent would be in contravention of the GDPR and would therefore be unable to deliver targeted content to the data subject on this basis.
The need to record consent becomes especially important when dealing with sensitive categories of personal data or when designing the data collection system, as will be discussed below.
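Purely as an illustrative sketch, and assuming hypothetical record and function names, a controller might keep the data subject's express statement on file and refuse to run a solely automated targeting decision unless a matching, unwithdrawn record exists:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only: the record format and function names are invented
# and do not represent a legally prescribed way of evidencing explicit consent.

@dataclass
class ExplicitConsent:
    user_id: str
    statement: str        # the express statement the data subject confirmed
    purpose: str          # the specific purpose the statement covers
    given_at: datetime
    withdrawn: bool = False

consent_log: dict[str, ExplicitConsent] = {}

def capture_explicit_consent(user_id: str, statement: str, purpose: str) -> None:
    """Keep the express statement as evidence that consent was explicit."""
    consent_log[user_id] = ExplicitConsent(
        user_id, statement, purpose, datetime.now(timezone.utc)
    )

def automated_targeting_allowed(user_id: str, purpose: str) -> bool:
    """Refuse a solely automated targeting decision without explicit consent."""
    c = consent_log.get(user_id)
    return bool(c and not c.withdrawn and c.purpose == purpose)
```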
Under the GDPR, some types of personal data have been designated as “special categories” under Article 9, which states:
1. Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation shall be prohibited.
"Special categories" thus refer to data that is highly sensitive in nature or that forms a core part of a person's identity. This is the type of data that is especially susceptible to misuse in the targeted delivery of content. For example, knowing that a person holds a particular political opinion makes them an easy target for those trying to sway their vote in a certain direction.
Any personal data falling within the ambit of this Article can be processed only under the specific circumstances provided in Article 9(2), of which two are relevant for the purposes of targeted content delivery:
(a) the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, except where Union or Member State law provide that the prohibition referred to in paragraph 1 may not be lifted by the data subject;
(e) processing relates to personal data which are manifestly made public by the data subject;
In terms of Article 9(2)(a), explicit consent is to be interpreted exactly as discussed above. Therefore, unless explicit consent has been given, special categories of personal data cannot be used to deliver targeted content to data subjects.
Article 9(2)(e) is relevant because those targeting content may argue that they could use special categories of personal data because the data subject had already made that information public. However, the standard of 'manifestly made public' has been discussed at length by the EDPB, which requires several elements to be considered in combination before the standard is met. These elements include the default settings of the platform on which the information was published, the nature of that platform, the accessibility of the page where the data was made available, the visibility of the information, and whether the data subject personally published the sensitive data (EDPB, 2020, 35).
Taking into account all the principles and legal bases described above, possibly the most critical safeguard for data subjects from an implementation perspective is set out in Article 25 on data protection by design and by default, which states as follows:
1. Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.
2. The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual's intervention to an indefinite number of natural persons.
In terms of the targeted delivery of content in particular, data protection by default means that data subjects must provide their consent (whether regular or explicit) as an 'opt-in' measure; consent is never assumed by default. Service providers are likewise obliged to ensure that a user must specifically choose to divulge personal information rather than giving that data away as the default option, and that the safeguards required by the GDPR are built into the design of the data collection system itself. Thus, only the minimum amount of data necessary and proportionate to the controller's objective would be collected, used only for a limited purpose, and deleted after a defined period. As long as these measures are in place, the targeted delivery of content would take place only for data subjects who opt in to it, or where the controller can demonstrate a legitimate interest in doing so.
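A minimal sketch of what such defaults might look like in practice is given below; the settings, field names and retention period are assumptions invented for illustration and should not be read as a compliance recipe drawn from the Regulation itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: the settings, fields and retention period are
# invented for the example and are not a compliance recipe.

@dataclass
class PrivacySettings:
    # Data protection by default: every sharing option starts switched off.
    share_interests_for_ads: bool = False
    allow_profiling: bool = False
    profile_visible_to_public: bool = False

@dataclass
class UserRecord:
    # Data minimisation: only what the specific purpose requires is stored.
    user_id: str
    email: str  # needed to provide the service itself
    settings: PrivacySettings = field(default_factory=PrivacySettings)
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

RETENTION = timedelta(days=365)  # storage limitation (example value)

def purge_expired(records: list[UserRecord], now: datetime) -> list[UserRecord]:
    """Delete records kept beyond the stated retention period."""
    return [r for r in records if now - r.created_at < RETENTION]
```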
Drawing on the discussion of the relevant data protection principles above, it is necessary to place them in the context of disinformation. Doing so leads to some important conclusions regarding their limitations.
First, most websites and services that attract a large user base have built entire businesses around delivering targeted content. Facebook and Google (Constine, 2018), two of the largest internet services on the planet, have repeatedly sought to soften the impact of the data protection principles under the GDPR, and have also been fined for breaching them. This is despite, for example, Facebook being at the heart of the Cambridge Analytica scandal, which revolved around targeted disinformation campaigns based on profiling users (The Cambridge Analytica Files, 2018).
Second, while data protection rules can provide a measure of protection to data subjects and safeguard their right to privacy, they also depend on data subjects exercising sensible control over their personal data. This can be seen in the framework's reliance on regular or explicit consent in the case of profiling and automated decision-making. Unfortunately, the methods websites employ to comply with the conditions of consent have led to "consent fatigue", with users simply no longer paying attention to consent notifications. Combined with internet services doing their utmost to obtain user consent, this has produced a situation in which the usual means of obtaining consent are being called into question by bodies such as the Belgian Data Protection Authority, which recently imposed a €250,000 fine on a company providing consent-related services.
While acknowledging the limitations of data protection guidelines, it must be noted that a large amount of disinformation spreads through the targeted delivery of content. For example, echo chambers (Cinelli et al., 2021) have been identified as one of the most effective vehicles for spreading disinformation (Menczer, 2016), and they can be exploited only by processing personal data, which allows data protection rules to mount a practical challenge to this disinformation vector. Given that filter bubbles and echo chambers in social media have been identified as a vital part of the fake news and disinformation ecosystem (Rhodes, 2022), data protection guidelines become all the more important in combating the phenomenon, regardless of the limitations discussed above. It is therefore essential to follow these guidelines, especially data protection by design and by default, to ensure that the limitations are addressed and disinformation is effectively tackled.
1. Article 29 Working Party Guidelines, available at dataprotection.ro
2. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Tackling online disinformation: a European Approach, COM/2018/236 final
3. Chiou, L., & Tucker, C. (2018). Fake news and advertising on social media: A study of the anti-vaccination movement (No. w25223). National Bureau of Economic Research.
4. Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118.
5. Combatting Targeted Disinformation Campaigns, 2021, available at dhs.gov
6. Constine, J. (2018) A flaw-by-flaw guide to Facebook’s new GDPR privacy changes, available here: https://techcrunch.com/2018/04/17/facebook-gdpr-changes/
7. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA
8. Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)
9. European Data Protection Supervisor, Opinion 3/2018 on online manipulation and personal data, 19 March 2018, available at edps.europa.eu
10. European Convention on Human Rights (ECHR), Council of Europe, available at coe.int
11. European Data Protection Board, Guidelines 8/2020 on the targeting of social media users, available at: https://edpb.europa.eu/system/files/2021-04/edpb_guidelines_082020_on_the_targeting_of_social_media_users_en.pdf
12. European Data Protection Board, Opinion 5/2019 on the interplay between the ePrivacy Directive and the GDPR, in particular regarding the competence, tasks and powers of data protection authorities, available at edpb.europa.eu
13. Judgment of the Court (Second Chamber) of 29 July 2019, Fashion ID, Case C-40/17, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62017CJ0040&from=EN
14. Menczer, F. (2016). Fake online news spreads through social echo chambers. Scientific American, 28.
15. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
16. Rhodes, S. C. (2022). Filter bubbles, echo chambers, and fake news: how social media conditions individuals to be less critical of political misinformation. Political Communication, 39(1), 1-22.
17. The Guardian, The Cambridge Analytica Files, 2018, available at: https://www.theguardian.com/news/series/cambridge-analytica-files