
The Epistemic Performance of EU Expert Groups

A Case Study of the EU High-Level Expert Group on Fake News and Disinformation

Student name: Steijn Muller

Student number: 11431385

Supervisor: Dr. C.M. (Conny) Roggeband

Second reader: Dr. R.M. (Rosa) Sanchez Salgado

MSc program: Political Science (Track: Public Policy and Governance)

Institution: University of Amsterdam

Thesis seminar: Transnational policy wars

Date: 25-06-2020


Acknowledgements

There are several people I would like to thank for their role in the creation of this work. First and foremost, I would like to thank my supervisor Conny Roggeband for her expert guidance throughout this process. Your feedback challenged me continuously and made sure that I had to stretch the limits of my thinking. Additionally, I would like to thank all the interviewees: those from the HLEG, the experts from the Dutch universities, and the journalists. Your insights allowed me to paint a thorough picture of the HLEG and I hope my interpretations did justice to your experiences. I would also like to thank the other people who gave me helpful feedback on my work: my parents, my fellow thesis seminar students, and three AUC students who were kind enough to peer-review my work. Without you, this thesis would have been of a substantially lower (epistemic) quality, and for that I am grateful. Finally, this thesis is for my friends and family, who often reminded me that there is also a life outside of academia.


Abstract

Today, experts often play an influential role in policy-making processes. Their work impacts the legitimacy of political institutions (Holst & Tørnblad, 2015) and our daily lives. One institution that widely consults experts is the European Commission. The Commission maintains a large expert group system in which hundreds of groups give policy advice on a day-to-day basis. However, the performances of these experts have been under-studied (Metz, 2013; Holst & Tørnblad, 2015). Given the societal and scientific relevance of these groups and their work, this study attempts to measure the ‘epistemic quality’ (Krick, 2018) of the policy advice of the High-Level Expert Group on Fake News and Disinformation (HLEG). Using six criteria created by Eva Krick (2018) that largely overlap with the Commission’s goals (2016), namely plurality and balance, competence, impartiality, consensus, political relevance, and external experts’ perceptions, this work concludes that the HLEG’s policy advice fulfilled all the criteria except impartiality. Therefore, despite several flaws, the report produced by the HLEG was of a high epistemic quality. Additionally, this work sheds light on four other matters. First, it gives insights into the motivations of the Commission in selecting and using its expert groups. Second, it brings to light the potential of multi-source, negotiated expertise to produce high-quality policy advice. Third, it discusses how the framework of Eva Krick (2018) can be improved. Finally, it gives practical policy recommendations to improve the workings of EU expert groups and their output in the future.

Key terms: epistemic quality, legitimacy, EU expert groups, High-Level Expert Group on Fake News and Disinformation.


Table of Contents

1. Introduction
2. Theoretical framework
2.1. Definitions of EU Expert Group and Expert
2.2. Epistemic Quality: The Criteria
2.2.1. Plurality and Balance
2.2.2. Competence
2.2.3. Impartiality
2.2.4. Consensus
2.2.5. Political Relevance
2.2.6. External Experts’ Perceptions
3. Research Design
3.1. Single-case design
3.2. Methods
3.2.1. Desk Research
3.2.2. Semi-structured Interviews
3.2.2.1. Sampling
3.2.2.2. Data Collection & Transcription
3.2.2.3. Data Analysis
3.3. Operationalization of indicators
4. Results
4.1. Context of the HLEG
4.2. Plurality and Balance
4.3. Competence
4.4. Impartiality
4.5. Consensus
4.6. Political relevance
4.7. External experts’ perceptions
5. Discussion
6. References


Introduction

In ‘the age of expertise’ (Fischer, 2009, p. 1), specialized expert knowledge has come to play an increasingly important role in policy and decision-making processes (Christensen & Holst, 2017, p. 821; Góra, Holst & Warat, 2019: p. 1). Due to the growing technological and regulatory complexity as well as the increased level of specialization of modern society (Bevir, 2011: p. 10; Góra, Holst & Warat, 2019: p. 1), modern democracies increasingly rely on experts to create evidence-based policy (in ‘t Veld, 2010; Cairney, 2016: p. 1; Góra, Holst & Warat, 2019: p. 1). Given the important role of experts in today’s policy-making, their policy advice needs to be of a high quality. Indeed, as Holst and Tørnblad (2015: p. 168) note, the political role of experts ‘is defensible if, and only if, doing so contributes to better and more truth-sensitive decisions’. To find out if experts are positively influencing decisions, it is necessary to investigate their epistemic performances (Holst & Tørnblad, 2015: p. 171).

One specific form of expertise that is currently being employed by many governments is multi-source, negotiated policy advice (Krick, 2018: p. 209). Such expertise is produced in settings in which not only scientists are present, but also interest group representatives of non-governmental organizations (NGOs) and corporations. In such committees, knowledge is socially co-produced and negotiated to reach a consensus (Krick, 2018: p. 212). The rise of such bodies is seen as a trend in modern governance (Krick, 2015a) and is deemed to reflect a more general ‘shift from science to expertise and from knowledge to judgment’ (Jasanoff, 2005, p. 211). One main reason for this development has been the growing concern with a ‘scientization’ of governance and decision-making (Góra, Holst & Warat, 2019, p. 6). To counteract this, a wider plurality of viewpoints in policy advisory bodies is believed to increase both the democratic legitimacy of such advisory committees (Góra, Holst & Warat, 2019, p. 6) and the level and relevance of the expert knowledge base of their members (Krick, 2018: p. 217). The governance potential of these groups is that the members can serve a ‘double role’, as representatives of a collective interest as well as specialist experts (Krick, 2015a: p. 498). However, questions that have not yet been examined are: are these ideals being met in practice? What is the quality of the output of such expert bodies?


One governmental institution that tries to make use of this type of expertise is the European Union, and more specifically the European Commission. The Commission has a relatively small in-house staff that focuses on complex regulatory policy-making regarding a wide political agenda (Metz, 2013: p. 270; Rimkutė & Haverland, 2014: p. 434). To tackle this problem, the Commission has set up an institutionalized body of about 1000 outsourced expert groups (Metz, 2013: p. 268) that help the Commission with policy drafting and implementation tasks (Robert, 2012: p. 426; Gornitzka & Sverdrup, 2015: p. 151). Overlapping with the ideas behind multi-source expert bodies, the Commission has several aims to ensure high quality outputs from its expert bodies.

To start, the Commission strives to select members in three main ways to ensure high quality outputs. First, it desires plural and balanced expert groups: the Commission aims to have a balanced representation of experts with different relevant expertise, interests, and nationalities (European Commission, 2001b; European Commission, 2016: p. 2, p. 16). Second, the Commission requires its experts to be qualified and experienced in the particular issue (European Commission, 2016: p. 16). Third, the Commission wants to limit conflicts of interest (hereinafter CoIs) among its members (European Commission, 2016: p. 4). CoIs can arise when, for example, an expert has a financial interest in a particular issue which can affect their impartial judgement. The fourth aim is more procedural, as the Commission requires its experts to reach a consensus (European Commission, 2016: p. 11), which can ensure compliance and have epistemic benefits for the final policy advice (Krick, 2018). Fifth and finally, the Commission desires the expert advice to help solve a policy problem (Metz, 2013; Rimkutė & Haverland, 2015; European Commission, 2016: p. 4). In theory, these aims are clear, but is the expertise of EU expert groups always in line with these criteria? There has been substantial criticism of (for example) the plurality and impartiality of the compositions of EU expert groups, which might indicate that these desires are not always being met (CEO, 2014; 2016; 2017; Alter-EU, 2013; 2014). This makes it even more important to further empirically examine the criteria.

Regrettably, while scholarly research has been done on the composition of the EU expert group system (Robert, 2012; Metz, 2013; Gornitzka & Sverdrup, 2015), the different ways the Commission uses expertise in its policy-making process (Metz, 2013; Rimkutė & Haverland, 2015), the Commission’s communication about its use of expertise (Holst & Moodie, 2015), and EU expert group reforms (Moodie, 2016), no research has yet attempted to evaluate the mentioned criteria with a holistic, epistemic approach. Therefore, it has been argued that existing studies have understudied this issue (Holst & Tørnblad, 2015: p. 171) and that ‘everybody only looks at who is sitting in these groups, rather than paying attention to the content’ (Metz, 2014: p. 279). Some studies have touched upon a few of the criteria like the plurality and balance of EU expert groups (CEO, 2014; Deloitte, 2016) or have scrutinized expert performances in other settings (Tetlock, 2005), but none have evaluated the epistemic quality of EU expert group expertise with an investigation of all the stated criteria.

Therefore, this research aims to provide such a first evaluation, making partial use of a framework created by Eva Krick (2018). This framework consists of several factors that largely overlap with the Commission’s (2016) aims and that, when measured together, can indicate (not determine) the epistemic quality of multi-source, negotiated expertise (Krick, 2018: p. 209). This research does not measure all the criteria in Krick’s (2018) framework, but mainly those that are also highlighted by the Commission (2016). This was decided to give this research more policy relevance and because it fit the scope of this project. There is one criterion, however, which is not highlighted by the Commission (2016) but is mentioned by Krick (2018) and will therefore also be investigated, namely the perceptions of other experts regarding the policy advice (Krick, 2018: p. 218). In sum, the criteria are: 1) plurality and balance, 2) competence, 3) impartiality, 4) consensus, 5) political relevance, and 6) the perceptions of other experts. Importantly, these criteria use different analytical dimensions: the first three criteria (plurality and balance, competence, and impartiality) concern the characteristics of the individual experts in the group, the fourth criterion (consensus) regards the collective epistemic practice, and the final two criteria (political relevance and other experts’ perceptions) consider the expert group’s output in context (Krick, 2018: p. 215). Measured and analyzed together, these criteria can indicate the quality of the policy advice generated by such expert bodies.

Using multiple methods, namely desk research, a qualitative content analysis of policy documents, and semi-structured interviews with experts and policy officers, this case study evaluates the mentioned criteria in relation to the High-Level Expert Group on Fake News and Disinformation (hereinafter the HLEG). This expert group was set up by the Directorate-General for Communications Networks, Content and Technology (hereinafter the DG CNECT) to tackle the issue of ‘fake news’ or ‘disinformation’ (two terms which are used interchangeably in this work for pragmatic purposes). The choice of the case is justified in the methodological section. Consequently, the research question of this work is: What is the epistemic quality of the policy advice created by the High-Level Expert Group on Fake News and Disinformation? This research question can be divided into six sub-questions which are conceptualized and justified in the theoretical framework, and are operationalized in the methodological section:

1. How plural and balanced was the composition of the HLEG?
2. How competent were the members of the HLEG?
3. How impartial were the members of the HLEG?
4. How was a consensus reached by the members of the HLEG?
5. How politically relevant was the policy advice of the HLEG?
6. How do other experts perceive the policy advice of the HLEG?

Importantly, this work also highlights the context of the HLEG and potential interactions between these criteria, matters which Krick (2018) omits in her framework. First, it assumes that the context of an expert group, like the nature and complexity of the issue and the time-frame of the meetings, should be considered when evaluating the criteria. Second, I aim to investigate potential relations between the criteria. Questions that are examined throughout this work are: Do these criteria complement each other? Or can they clash, resulting in potential trade-offs? Is it realistic for the Commission to fulfill all these criteria? These questions are not explicitly raised by Krick (2018), but an investigation of such relations can shed more light on why EU expert groups perform the way they do.

This work also explores several broader themes in addition to the epistemic quality of the HLEG’s work. To start, this research aims to shed light on the motivations of the DG CNECT (and the Commission more broadly) behind the selection and usage of its expert group. Furthermore, it aims to assess the potential of multi-source, negotiating expert groups to create high quality policy advice and for the participating experts to play the ‘double role’ of a specialist expert and collective interest representative (Krick, 2015a: p. 498). In addition, I aim to give a first operationalization of Krick’s framework (2018) and to find out if her model can be improved. Lastly, I attempt to give policy recommendations to improve the workings of EU expert groups in the future.


The structure of this thesis is as follows. First, I use the theoretical framework to define key concepts related to the research, namely an EU ‘expert group’ and an EU ‘expert’. Second, I use the theoretical framework to argue why the mentioned criteria are important for the epistemic quality of multi-source, negotiated expert policy advice, but also to show potential points of tension among the criteria. Third, I use the methodological section to argue which types of data I believe were needed to apply Krick’s framework (2018), which methods I used to gather these data, how I analyzed the data, and how I operationalized the different criteria. Fourth, I use the results section to argue that the policy advice of the HLEG was, despite some flaws, of a high epistemic quality, and to show the implications of the results in relation to the mentioned themes. Finally, I sum up the theoretical, methodological, and practical implications of my work, in the last of which I give policy recommendations to improve EU expert groups.

Theoretical Framework

EU Expert Group and Expert: Definitions

Before proceeding with the academic debate on the mentioned criteria, two concepts need to be defined: an EU ‘expert group’ and an EU ‘expert’. This section conceptualizes these terms by examining the characteristics of EU expert groups and experts and by drawing on particular concepts in the literature.

EU Expert Group

To define an EU expert group, the classic ‘epistemic community’ concept by Haas (1992: p. 3) can be used. This is a ‘network of professionals with recognized expertise and competence in a particular domain and an authoritative claim to policy-relevant knowledge within that domain or issue-area’ (Haas, 1992: p. 3). While this definition shares features of an EU expert group, it does not fully capture it (Dreger, 2009: p. 3; Field, 2013, p. 3). Namely, according to Haas (1992), the members of an epistemic community also share the same normative, principled and causal beliefs and a common knowledge base, which is not necessarily the case in an EU expert group. Members of an EU expert group often have a variety of opinions on policy issues and do not always share a common knowledge base (Dreger, 2009: p. 3). Field (2013) also notes that expert groups greatly differ in terms of their ‘technical complexity, political sensitivity and legal competence’ (Field, 2013), a claim which is also supported by macro studies of EU expert groups (Robert, 2012; Gornitzka & Sverdrup, 2015). All in all, an EU expert group is more complex than the concept of an epistemic community created by Haas (1992). Therefore, I employ the term ‘community of knowledge’ (Field, 2013: p. 3) to define an EU expert group. This is a ‘group of individuals coming together under the auspices of the European Commission to share professional knowledge and experience to inform the policy-making process’ (Field, 2013: p. 3). I believe this concept more accurately defines an EU expert group and therefore also the HLEG.

It is also important to note that EU expert groups can serve multiple purposes for the Commission. To make legislation, the Commission needs several resources from its expert groups: expertise (technical information), legitimacy and support (political information), and procedural resources (consensus-building) (Metz, 2013: p. 270). Therefore, these expert groups also serve a political goal and not only an instrumental role for the Commission (Metz, 2013: p. 270; Tørnblad, 2018: p. 76). For example, EU expert groups are used for consensus-building purposes to help negotiations between different interests and improve the EU’s decision-making process, especially regarding conflictual issues (Metz, 2013: p. 271). For these reasons, this work also explores which goals the HLEG served for the DG CNECT, how this affected the selection and output of the group, and how this sheds light on the Commission’s priorities.

EU Expert

The definition of an EU expert group foreshadows the definition of an EU expert. Just as EU expert groups vary in size and role, EU experts also have different characteristics. This section elaborates on the concept of an ‘EU expert’ by examining the different types of EU experts that exist and the several characteristics they should have (according to the Commission).

To start, the Commission (2016) believes its EU experts can be one of the following five types. First, Type A members are experts appointed in their personal capacity who should act independently and in favor of the public interest (European Commission, 2016: p. 5). These experts are generally scientists or academics. Second, there are Type B members, who are experts appointed to represent a particular common interest (European Commission, 2016: p. 6). Such members do not represent individual stakeholders, but rather a ‘policy orientation common to different stakeholder organizations’ (European Commission, 2016, p. 5). Third, there are Type C members, who are representatives of organizations. ‘Organizations’ is understood here in a broad sense of the term, covering, for example, environmental NGOs like Greenpeace but also trade unions (European Commission, 2016: p. 6). Fourth, there are Type D members. These are either national, regional, or local governmental authorities of EU Member States. Finally, there are Type E members or ‘other public entities’ (European Commission, 2016: p. 6). These members represent third countries (non-EU countries) or international organizations like the World Health Organization. To conclude, EU experts are not only academics or scientists but can also be other practitioners who represent a particular interest.

EU experts are also characterized by their perceived authoritative claim to policy-relevant knowledge and their externality to the Commission. Regarding the former, EU experts are deemed to have an authoritative claim to policy-relevant knowledge which, according to the Commission (2016), is legitimized by their expertise and/or the interests they represent. Regarding the latter, the experts in the EU expert group system are generally external to the Commission. The Commission also has in-house experts, but these are not part of the EU expert group system discussed in this work. Considering these two elements, I define an EU (external) expert in the following way: a practitioner with a perceived authoritative claim to policy-relevant knowledge who is external to the Commission.

Epistemic Quality: The Criteria

Plurality and Balance

The first principle of the Commission to ensure a high epistemic quality of EU expert group expertise is to have plural and balanced expert groups (European Commission, 2016: p. 2). The Commission believes that expertise should be ‘multidisciplinary, multi-sectoral and should include input from academic experts, stakeholders, and civil society’ (European Commission, 2001b) and that expert groups should have a ‘balanced composition’ (European Commission, 2016: p. 2). All in all, the Commission attempts to be a ‘representative bureaucracy’ (Gornitzka & Sverdrup, 2015: p. 151) by including different societal actors who are directly affected by the EU’s policies in its decision-making process (Gornitzka & Sverdrup, 2015: p. 151). The same principle is also highlighted by Krick (2018: p. 217; p. 220) and therefore this section sheds light on the origin of the argument. Authors proposing this idea believe that a diverse, balanced group of actors can add to both the democratic quality and the epistemic quality of generated expertise. Thereafter, however, this section shows several democratic and epistemic arguments that criticize the idea of pluralizing expert groups. Considering these different arguments, this section maintains that a plural and balanced expert group is important for the epistemic quality of expertise, but it shows that diversity can also decrease the level of expertise.

The idea to include actors with different types of knowledge and values in expert groups arose from the school of thought that wishes to ‘democratize’ expertise (Maasen & Weingart, 2005: p. 2; Góra, Holst & Warat, 2018: p. 6). First, diversity is believed to enhance the democratic quality of expert groups and their outputs. Authors believe that the new role of experts in policy-making can clash with traditional democratic norms like representation and participation (Urbinati, 2014; Habermas, 2015; Moodie, 2016: p. 229); it is therefore argued that expertise should be provided by a multitude of actors to better integrate these norms in expert bodies (Góra, Holst & Warat, 2018: p. 6). For example, including NGOs can allow better democratic representation as they can represent collective interests (Krick, 2019: p. 109). Thus, these authors believe that the inclusion of a plural group of actors can increase the democratic representation of expert groups, which can in turn positively enhance their outputs.

In addition to the perceived benefits to the democratic representation of expert groups, many scholars believe that a diverse group of actors can also increase the level, relevance, and applicability of the expertise of the group (Goldman, 2001: p. 105; Maasen and Weingart, 2005: p. 3; Jasanoff, 2005, p. 211; Edelenbos, van Buuren & van Schie, 2011: p. 677; Holst and Tørnblad, 2015, p. 170; Krick, 2018: p. 216). Policy issues are never purely scientific and also involve values; a variety of expertise and interests can therefore tackle such issues in a better way (Góra, Holst & Warat, 2018: p. 5). For example, representatives of organizations like NGOs and corporations can provide valuable practical or experiential knowledge that can benefit the creation of policy advice (Krick, 2015b: p. 20; Góra, Holst & Warat, 2018: p. 5). Business actors can have great technical information on issues; for example, car manufacturers can have knowledge on the feasibility of petrol consumption policies (Dür & Mateo, 2016: p. 10). Citizen groups and professional organizations can have ‘political information’ (Dür & Mateo, 2016) regarding the preferences of particular constituencies on particular policies, but also increasingly have technical expertise due to the increased professionalization of such organizations (especially in response to the EU’s technocratic culture) (Greenwood, 2009: p. 1154). In addition, a diverse group of members can critically examine each other’s opinions, which can refine arguments (Góra, Holst & Warat, 2019: p. 5). Lastly, a plural knowledge base that is accepted by different affected actors can help with the implementation of policies as it can ensure compliance among actors (Krick, 2018: p. 212). In conclusion, there are several arguments in favor of diversifying expert groups, as it can enhance the epistemic quality of policy advice and its potential for implementation.

However, several authors are critical of such a ‘democratization’ of expertise. Aside from potentially slowing down decision-making (Maasen & Weingart, 2005: p. 11), they believe it can negatively affect the democratic and epistemic quality of expertise. Regarding the democratic argument, it is believed that the pluralization of expert groups can lead to an over-representation of corporate, business or for-profit actors (Marda & Milan, 2018: p. 14; Transnational Institute, 2019: p. 3 – p. 4). Business organizations can have more resources than, for example, citizen groups in terms of money and technical expertise, which can enhance their access to policy-making bodies (Dür, Bernhagen & Marshall, 2015: p. 955). This can skew the decision-making towards corporate interests, which can make policy decisions less representative. Authors like Kohler-Koch (2013: p. 14) also doubt the extent to which NGOs make legitimate claims of citizens’ interests: such organizations are not necessarily accountable to their supporters and their leaders are not elected. In addition, regarding the epistemic argument, Ezrahi (1990) has argued that wider stakeholder participation in expert groups can damage the impartiality, objectivity, and integrity of science, which can negatively affect the quality of policy advice. One can also question the level and relevance of expertise possessed by interest representatives such as NGOs and corporations and the extent to which such actors share all their expertise truthfully (Dahm & Porteiro, 2008: p. 552; Tallberg et al. 2015: p. 215). Indeed, considering that lobbyists are defending interests, they can have incentives to act strategically and exaggerate their claims, withhold or distort information, or even lie (Dür & Mateo, 2016: p. 17). All in all, such criticisms raise questions about the ‘double role’ (Krick, 2018: p. 212) such experts can play in multi-source expert groups that negotiate policies.

In conclusion, there are democratic and epistemic arguments that can be made both in favor of and against the pluralization of expert groups. Plurality has the potential to enhance expert groups and their outputs if such groups are broad and balanced, and if the members are specialist experts who are willing to share their expertise in a fully transparent manner. However, plurality can also negatively affect the quality of expert groups’ outputs if the groups become dominated by particular (resourceful) actors who have little specialist expertise and who do not fully share their expertise in order to protect their interests. Therefore, the relations between the criteria of plurality and competence should be investigated to find out how these different arguments balance out in practice. This brings me to the second criterion desired by the Commission (2016) and Krick (2018), namely competence.

Competence

The second aim of the Commission (2016) and Krick (2018) to ensure a high level of expertise from expert groups is to include experts qualified in the issue at hand (European Commission, 2016: p. 15). The Commission wants specialists with professional skills, competence, and experience in the field (Moodie, 2016: p. 247, referencing European Commission 2005, 2010a, 2010b). Thus, the Commission also wants to be a ‘responsible bureaucracy’ (Gornitzka & Sverdrup, 2015: p. 151) as it sees professional expertise as a key source of legitimacy.

As highlighted in the literature, experts should have so-called ‘cognitive success’ (Goldman, 2001: p. 106) to increase the epistemic quality of policy advice. This cognitive success can be reflected through credentials that demonstrate ‘training, experience, and competence’ (Krick, 2018: p. 18) in a particular field. Such credentials can include academic degrees, professional accreditations, and work experience (Krick, 2018: p. 18). Specifically for scientists or academics, such credentials can include peer-reviewed publications and recent research projects in the policy area (Krick, 2018: p. 215; Goldman, 2001: p. 98). This work examines the competence of the members but, as mentioned before, also investigates how the criterion relates to other factors like plurality. Finally, the criterion of competence can result in potential trade-offs with another criterion desired by the Commission (2016): impartiality.

Impartiality

The third factor which the Commission (2016), Krick (2018) and others (Horel and CEO, 2013: p. 3; Haas, 2004: p. 547) believe is important for a high epistemic quality of expertise is the impartiality of experts. Like Krick (2018), the Commission (2016) aims to reach this goal by rejecting academic experts who have perceived CoIs (European Commission, 2016: p. 29). However, this raises certain questions, specifically: what is a CoI and why can it be harmful to the impartiality of experts? Subsequently, how does this criterion relate to other factors? This section first defines a CoI and shows how it can damage the impartiality of experts. Second, it shows that this factor can potentially clash with competence. In conclusion, it shows that while impartiality is desirable, Krick (2018) and the Commission (2016) do not necessarily highlight its potential interference with competence, and therefore this relationship should be investigated.

In this study, the following definition of a CoI is used: a CoI is a ‘situation in which an employee [or expert] has a private financial interest sufficient to influence […] the exercise of his or her public duties and responsibilities’ (Williams, 1985, p. 6; Mafunisa, 2003, p. 5). In the context of experts, this can arise when an expert has a financial stake in the issue which can negatively affect their impartial judgment of the issue (Williams, 1985, p. 6; Mafunisa, 2003: p. 5). How does this happen in practice? First, authors believe that the practices of organizations can be damaged by CoIs. For example, Gray and Bishop Kendzia believe that organizations can censor themselves in order to secure funding (Gray & Bishop Kendzia, 2009: p. 167). This idea is also highlighted by Park (2008), who believes that if NGOs become too affiliated with donors they can lose their credibility and independence (Park, 2008: p. 218). Second, Goldacre (2012) shows that CoIs can exist among academic experts funded by pharmaceutical companies. He believes that monetary relations decrease the odds that such experts produce work against the interests of the firms that fund them (Goldacre, 2012, p. 221). Many studies affirm this idea and show that industry funding can negatively affect the impartiality of research outcomes (Krimsky, 2005; Bero et al., 2016; Lundh et al., 2017; Guillemaud, Lombaert & Bourguet, 2016). For example, Guillemaud et al. (2016) found that industry-funded academic studies on certain types of genetically modified organisms (GMOs) are 49% more likely to be favorable to the interests of seed industries than non-industry funded research (Guillemaud et al. 2016: p. 10). The same logic can of course be applied to deliberative settings in which academics are in the same room as their funders.

On the other hand, the criterion of impartiality can clash with the desire to have competent members, both aims of the Commission (2016) and Krick (2018). Namely, strictly allowing only academics who are publicly funded can make it more difficult to find high-level experts. A study by Ossege (2014) shows that members of European Regulatory Agencies (ERAs) experienced this after the renewed CoI rules, as the recruitment of high-level experts became more difficult (Ossege, 2014: p. 413). ERA members believed it was difficult to recruit experts without funding relations and that they sometimes had to include experts who had relations with corporations in order to employ qualified experts (Ossege, 2014: p. 414). Cuts in public budgets for research have made academic funding by private companies even more prominent (Horel and CEO, 2013), affirming the ideas of Ossege (2014). In conclusion, tension can arise between the desires to have high-level experts and to avoid such funding relations.

All in all, this section has defined a CoI and has shown, on the one hand, the potential negative effects it can have on the impartiality of experts. On the other hand, it has also highlighted that this criterion can clash with the competence criterion as desired by the Commission (2016) and Krick (2018). Therefore, as with the previous criteria, this work examines the relations between these criteria and explores whether potential trade-offs had to be considered by the DG CNECT.

Consensus

The Commission also aims for its experts to reach a consensus (European Commission, 2016: p. 11). This factor is also highlighted in Krick’s framework (2018: p. 217). This section starts with a definition of a consensus. Thereafter, it showcases the argument that a consensus can increase the epistemic quality of a policy advice and have other benefits. However, it also highlights points of criticism of a consensus-building approach. The section ends with the argument that this criterion is important for the epistemic quality of multi-source, negotiated expertise, but also acknowledges that the criticisms should be considered in the evaluation.

To start, how can a consensus be defined? For this, a definition from Susskind et al. (1999) can be used. Susskind et al. (1999) believe that a ‘consensus has been reached when everyone agrees they can live with whatever is proposed after every effort has been made to meet the interests of all stakeholder parties’ (Susskind et al., 1999: p. 282). Two important aspects of this definition should be highlighted. First, it shows that a consensus does not necessarily mean unanimity. Indeed, situations can arise in which some stakeholders will be worse off regardless of further efforts (Susskind et al., 1999: p. 600). In such cases, the group might accept the overwhelming support as the final result (Susskind et al., 1999: p. 600). Thus, a high level of agreement can also be a consensus, even though it might not be a ‘perfect’ consensus where everyone agrees with all the proposals. Second, great effort needs to be put into the process, which emphasizes the important role of the chairs who guide expert groups.
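To make the distinction between unanimity, a workable consensus, and no consensus concrete, the sketch below classifies a hypothetical final vote. The 90% ‘overwhelming support’ threshold is an invented illustration, not a rule drawn from Susskind et al. (1999) or the Commission.

```python
# Illustrative classification of an expert group's final vote.
# The 0.9 threshold for "overwhelming support" is a hypothetical choice.

def classify_vote(in_favor: int, against: int) -> str:
    """Label a final vote as unanimity, consensus, or no consensus."""
    total = in_favor + against
    if total == 0:
        raise ValueError("No votes cast")
    if against == 0:
        return "unanimity ('perfect' consensus)"
    if in_favor / total >= 0.9:
        return "consensus (overwhelming support, not unanimity)"
    return "no consensus"

# Hypothetical usage: 37 members in favor, 2 against.
print(classify_vote(37, 2))  # consensus (overwhelming support, not unanimity)
```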

Having defined a consensus, its benefits need to be examined. Authors believe that even though it may be difficult to achieve, a consensus can give a policy advice a higher certainty of knowledge and thus increase its epistemic quality (Mansbridge et al. 2012: p. 18; Beck and Forsyth, 2015: p. 117; Krick, 2018: p. 217). A high level of agreement can indicate less uncertainty in the knowledge base and therefore give the policy advice more epistemic authority (Beck and Forsyth, 2015: p. 117). Furthermore, a consensus can ensure compliance among stakeholders, which can result in more feasible policy advice (Krick, 2018: p. 213). Thus, a consensus can increase the epistemic and political relevance of policy advice.

However, there is also criticism towards a consensus-building approach. For example, some believe it has the potential to neglect dissenting opinions (Edmunds & Wollenberg, 2012: p. 1). According to this argument, a consensus might exist on the surface, but it could also mask particular interests that have been bargained away (Edmunds & Wollenberg, 2012: p. 1). Krick herself also acknowledges that a consensus could cover up points of disagreement (Krick, 2018: p. 217). Finally, it is important to note that a consensus needs to be reached among a competent and plural group of experts (Krick, 2018: p. 217). The desirability of a consensus is questionable if it has been reached by an unqualified and unrepresentative group of experts. Thus, this criterion always needs to be considered alongside the previous criteria.

Thus, a high level of agreement or a ‘perfect’ consensus has the potential to increase the epistemic quality of policy advice and make it more politically relevant. This can potentially be a strength of multi-source, negotiating expert groups (Krick, 2018: p. 213). However, acknowledging the criticisms, dissenting opinions and the consensus-building process also need to be considered. Finally, a consensus is more desirable if it has been reached by a group of largely plural and competent experts. Thus, the criterion also needs to be investigated in relation to the factors of plurality and competence.

Political relevance

The fifth aim of the Commission (2016: p. 4) and Krick (2018) is for expert groups to create policy advice that can assist in legislating policy proposals and help solve policy problems (Metz, 2013; Rimkutė & Haverland, 2015). Several authors propose that expertise needs to be ‘politically relevant’ (Krick, 2018: p. 218), ‘applicable’ (Holst & Tørnblad, 2015: p. 170) or have political ‘resonance’ (Brown, Lentsch & Weingart, 2005: p. 82). However, what do these terms mean? In principle, they mean that expert policy advice should always be realistic, implementable, enforceable, and aimed at addressing the original problem (Holst & Tørnblad, 2015: p. 171; Krick, 2018: p. 218). It is clear that expert policy advice is of little use if nothing can be done with it. In short, expert policy advice should not only consider the science but also the politics. In conclusion, political relevance is an epistemic criterion of expert policy advice desired by the Commission (2016) and other authors like Krick (2018) and therefore needs to be investigated.

External experts’ perceptions

The last criterion, which is not explicitly mentioned by the Commission (2016) but can help measure the epistemic quality of a policy advice, is the overall perception of external experts (outside of the HLEG). Namely, when evaluating expert performances, researchers can be confronted with the ‘problem of epistemic asymmetry’ (Holst & Tørnblad, 2015: p. 167). This means that because one is not an expert on the particular policy issue, one can lack the ability to correctly judge the experts’ performances directly (Holst & Tørnblad, 2015: p. 167). Therefore, other experts from the same discipline can step in to help with the evaluation. This idea is also highlighted by Krick (2018), who believes that such experts can be asked about their perceptions of the problem-solving capacity of the report, its technical accuracy and comprehensiveness, the trustworthiness of the used data sources, and, in general, its methodological rigor (Krick, 2018: p. 218). However, one limitation of measuring this criterion is the possible inability of the researcher to judge the competence of other experts (Krick, 2018: p. 218). Krick’s solution to this problem is to ask as many experts as possible to ensure cross-validation and theoretical saturation (Krick, 2018: p. 218). As highlighted in the methodology, this project attempted to achieve this within the given time scope. Finally, already interviewed experts were asked to recommend other colleagues, as these actors have more knowledge of who the experts in the field are. This hopefully reduced the epistemic-asymmetry problem.

Epistemic Quality: Final Definition

Having laid out the criteria, I believe that, while taking into account context, an expert policy advice has a high epistemic quality if it is:

Produced by a plural, balanced, and competent group of experts without conflicts of interest who reached a consensus that led to a politically relevant policy advice approved by other experts.

One important point that needs to be highlighted is that this definition excludes some criteria in Krick’s (2018) framework. Several reasons account for this. To start, this work largely focuses on the criteria explicitly mentioned by the Commission (2016). This is important as these criteria have not yet been evaluated and because a political body sees these criteria as important. Therefore, an investigation specifically into these criteria can give more practical relevance to this research.


Second, an investigation of all of Krick’s (2018) factors would not have been methodologically feasible. For example, other factors mentioned by Krick (2018) include the degree of participation, trust, and cooperation among the experts (Krick, 2018: p. 220). Such criteria can be better explored through participant observation (Halperin & Heath, 2017: p. 331), which was not feasible given that the HLEG no longer convenes. Furthermore, Krick (2018) highlights criteria such as the extent to which a problem has been systematically analyzed and the presence of a ‘red thread’ in the final policy advice (Krick, 2018: p. 216). However, such factors could not have been adequately explored given that I am not an expert on the issue of fake news, which makes me susceptible to the epistemic-asymmetry problem (Holst & Tørnblad, 2015: p. 173). Finally, the scope and time frame of this project do not allow for a sufficient exploration of more criteria than those already investigated.

Research Design

This section outlines the research design of the study. It first shows why a single-case design was chosen and the other reasons why the HLEG was selected. Additionally, it highlights what data were needed to answer the research question and which methods and procedures were employed to gather these data. Finally, it explains how the different criteria were analyzed and operationalized.

Single-case design

This research followed a single-case design for several reasons. First of all, this study was of an exploratory nature, which made a qualitative single-case study suitable. This is because no study has yet attempted to indicate the epistemic quality of an EU expert group’s work with the (partial) empirical application of Krick’s (2018) framework. In addition, the extensive nature of Krick’s (2018) framework also required a single-case study. Finally, the limited scope of this project, both in terms of writing space and time, made a single-case study more feasible. In summary, the exploratory nature of the research question, the extensiveness of Krick’s (2018) framework, and the limited scope of this project were the main reasons to conduct a single-case study.


Why was the work of the HLEG chosen specifically? Several reasons account for this. First, the HLEG was representative of a larger population of expert groups, which is an important requirement of a single-case study (Gerring, 2007: p. 45). The HLEG was representative as it shared many features with other EU expert groups. For example, many EU expert groups have the same goal as the HLEG, which is to help the Commission with the creation of a policy initiative (Tørnblad, 2018: p. 79). Besides, like the HLEG, many EU expert groups consist solely of societal stakeholders (Gornitzka & Krick, 2018: p. 58). Finally, it shared the similarity that it consisted of external experts rather than in-house experts of the Commission (European Commission, 2019). In conclusion, the HLEG was representative of a broader group of EU expert groups, which allowed this research to be a true single-case study (Gerring, 2007: p. 45).

Aside from the representative nature of the HLEG, there were several other reasons why the HLEG was chosen. First, the HLEG tackled a relatively new and important issue, namely disinformation, which concerns many European citizens and therefore gave the group and its work ‘intrinsic importance’ (Gerring, 2007: p. 42). The novelty of the issue made the HLEG an interesting case, as advocacy coalitions in a novel policy area might be less clear (Sabatier, 1988), which could have impacted the selection. Second, the epistemic quality of the HLEG’s work has not been evaluated before, which gave the case more scientific relevance. Third, the case was chosen for practical reasons, as the HLEG process was finished and therefore criteria like consensus could be investigated. In addition, HLEG members were responsive to my interview requests, unlike experts in other groups. However, I did interview experts from DG AGRI expert groups in the exploratory research stage and their insights are also included in this work, though informally (their interviews were not transcribed).

Methods

To answer the research question, different types of data were needed. To begin, information on the HLEG’s composition was necessary to investigate the criterion of plurality and balance. Furthermore, the credentials of the HLEG members, such as their profession, work experience, and the number of (cited) peer-reviewed publications (mainly for the academics), were needed to explore their competence. Data that showed funding relations among members were also required to determine their impartiality. In addition, policy documents that showed the HLEG members’ final votes and potential follow-up policies by the Commission were needed to explore the criteria of consensus and political relevance. Finally, data that reflected members’ perceptions were necessary, especially for the criteria of consensus and the external experts’ perceptions. Interview data were also necessary for the other criteria to triangulate the data gained from desk research.

Desk research

Considering the required data, the first method employed was desk research. Regarding the composition of the HLEG, the group’s page on the EU Expert Group Transparency Register was examined, as it gave the most insights on the group’s composition. Regarding the credentials of the members, mainly the LinkedIn pages and (for the academics) Google Scholar pages were examined. These pages were investigated as they clearly demonstrated the credentials and the number of peer-reviewed and cited publications (in the form of h- and i10-indexes) for most members. Some members did not have LinkedIn pages, and therefore the members’ personal websites, blogs, or their affiliated organization’s website were examined to find credentials. These websites also gave insights into criteria like consensus, as the members wrote opinions on the final report.

Regarding the funding relations among members, all the noted websites were examined, but specific communications, annual reports of organizations, and online press articles were also scrutinized. All these websites were examined as this approach provided more insights into the members’ funding relations. Regarding the criteria of consensus and political relevance, policy documents provided by the Commission were examined, such as the minutes of the HLEG meetings, the final policy advice, and the Commission’s follow-up policy reports. These documents were retrieved from the transparency register and also provided the data required to answer the sub-questions.

The desk research data were stored in the following manner. The credentials of the members retrieved from LinkedIn pages and other websites were put into table matrices in Word. The same method was employed for potential funding relations among members. Furthermore, relevant statements by HLEG members on their websites were added to the thematic matrices of the interview data; the results indicate when such statements were collected from members’ websites rather than through interviews. This brings me to the second method used, semi-structured interviews.

Semi-structured interviews

Semi-structured interviews were mainly conducted to compare the findings of the desk research and to ensure data triangulation (Ritchie et al. 2014: p. 358). In addition, interviews were conducted because the data they yielded were necessary to evaluate the criterion of the external experts’ perceptions. Interviews are suitable for this as they explore people’s experiences, opinions, and relationships (Halperin & Heath, 2017: p. 290).

Because the study was exploratory, semi-structured interviews were especially useful as they allowed for open-ended questions and probing. This gave the interviewees room to bring up new ideas (Halperin & Heath, 2017: p. 290). Other types of interviews would have been less suitable for this exploratory research. Structured interviews are generally of a more standardized fashion and include simple and short questions (Halperin & Heath, 2017: p. 290). This type of interviewing does not allow exploration of people’s experiences in the same in-depth manner as semi-structured interviews do. Finally, unstructured interviews are more suitable for ethnographic research (Halperin & Heath, 2017: p. 159), a type of research that did not align with the nature of this research question. In sum, semi-structured interviews were chosen specifically to investigate the criteria in an in-depth and exploratory manner.

Sampling

To select the participants from the HLEG, a combination of purposive sampling and convenience sampling was used (Ritchie et al., 2014: p. 113-116). Purposive sampling is a non-probability sampling technique that intentionally selects particular participants based on certain criteria (Ritchie et al., 2014: p. 113). The criterion on which the members were selected was their participation in the HLEG. More specifically, a stratified purposive sampling technique was employed (Ritchie et al., 2014: p. 114). The goal of this approach is to select a sample in which the interviewees are fairly homogeneous but also display variation concerning a particular phenomenon (Ritchie et al., 2014: p. 114). Such a method allows for comparison between different subgroups (Ritchie et al., 2014: p. 114). In the case of this research, it was important to explore the opinions of the different types of actors. This research would not have displayed sufficient diversity if, for example, only the academics were interviewed. The same method was employed to sample the outsider experts: a variety of external experts from different Dutch universities were contacted to achieve diversity within the sample. To conclude, the first sampling method used was stratified purposive sampling.

Additionally, this research had to employ convenience sampling (Ritchie et al., 2014: p. 116). This type of sampling selects participants based on their availability (Ritchie et al., 2014: p. 116). This played a role in my research, as many experts did not respond to the inquiries. Therefore, if an expert was available to be interviewed, the interview was conducted even if sufficient perspectives from that member’s subgroup had already been gathered. Had strictly stratified purposive sampling been employed, it would have decreased the sample size, and therefore convenience sampling was also used.

The HLEG members and the external experts were all recruited by e-mail. Following qualitative research principles, this e-mail briefly introduced both myself and the goals of the research project (Ritchie et al., 2014: p. 141). Once a member agreed to be interviewed, a meeting was set up via either Skype, Zoom, or telephone; the medium used depended on the preferences of the interviewee. In the end, most of the interviews with the HLEG experts were performed using a video-conference tool like Skype or Zoom. Given the preference of two interviewees, their interviews were held on the phone. One interviewee was too busy to be interviewed but agreed to respond to an online questionnaire. Lastly, several follow-up questions were sent by e-mail. The transcriptions make clear whether the data were gathered through an interview or by e-mail.

In total, thirteen interviews were conducted. Seven interviewees were members of the HLEG. Of these members, two were academics, one was a journalist and a visiting professor, one was a representative of a news media association, one was a representative of a public broadcasting organization, one was a representative of a consumer protection group, and the final interviewee was a representative of a human-rights citizen group. Furthermore, the secretariat that (among other tasks) organized the HLEG was interviewed. Also, three ‘outsider’ experts were interviewed. The three were from the universities of Utrecht, Amsterdam (UvA), and Leiden, and they were, respectively, experts in the fields of media literacy, political communication, and fact-checking. All three experts performed research on topics related to the tackling of disinformation. Finally, two journalists were interviewed. One was an investigative journalist who had also conducted research on the HLEG; this person was interviewed to compare some of my findings, mainly regarding the impartiality section. The other journalist was specialized in corporate lobbying and CoIs and was interviewed to better understand the problem of CoIs. In sum, a wide variety of people were interviewed to gauge the epistemic quality of the HLEG’s work.

Finally, this research aimed to follow standard ethical practices in qualitative research (Ritchie et al., 2014: p. 78). For example, members provided informed consent for the interview. In the e-mail and again during the interviews, the goals and nature of this research were explained to make sure the interviewees knew what they were consenting to. Also, the interviewees were allowed to stay anonymous, which all of them desired. Therefore, the transcriptions do not provide the names of the HLEG members or the outsider experts but rather show whether they were, for example, an academic, a citizen group representative, or affiliated with a certain university.

Data Collection and Transcription

The data were collected in interviews and stored as recordings and transcriptions. Data transcription should not be treated as a technical formality, but rather as a form of research in itself (Atkinson & Heritage, 1984). To achieve this, the classical principles of data transcription as formulated by Mergenthaler and Stinson (1992) were followed to the best of the researcher’s capabilities. One of these principles states that the data must be translated into a verbatim record as closely as possible (Mergenthaler & Stinson, 1992), and this was therefore one of the aims of the transcription. Moreover, careful judgments were made to ensure that emphasis on certain words was included in the transcription by placing adequate punctuation and by including intonations. All the interviews with the HLEG members were done in English, but the interviews with the outsider experts were in Dutch and thus translated into English to the best of my abilities. In some interviews, the recording did not work properly, which resulted in incomplete transcriptions. Luckily, notes were taken during the interviews and these were included in the transcriptions. To still include these members’ perceptions, I e-mailed statements I deduced from the notes to the members for them to check whether the statements were correct. Permission was granted by these members to include these statements.

Data Analysis

It also needs to be explained how all these different types of data were analyzed. Regarding the members’ profiles and accreditations gathered through desk research, the credentials put into the matrices were analyzed both quantitatively and qualitatively. The credentials and funding relations were quantitatively analyzed to get an overall indication of the credentials of the members. The credentials were also qualitatively analyzed to determine the significance of the credentials in relation to the issue of fake news. The policy documents were also qualitatively analyzed by extracting ‘excerpts or quotations to illustrate a point’ (Halperin & Heath, 2017: p. 180). A quantitative analysis of such documents would have been less suitable, as the analysis of these documents concerned matters like follow-up policy proposals and statements by members regarding the consensus. The interview data were analyzed with the use of thematic coding. This method is commonly used to identify patterns in qualitative data and to provide for systematic analysis (Ritchie et al., 2014: p. 282). To achieve thematic coding, I first familiarized myself with the data to understand the different themes highlighted by the members (Ritchie et al., 2014: p. 282). After this familiarization, a thematic framework was constructed based on the criteria and what the members said, and the quotes were then labeled per theme (Ritchie et al., 2014: p. 282). These statements were then included in the results to answer the concerned sub-question(s).
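To make the coding procedure concrete, the sketch below illustrates the kind of theme-labeling step described above. It is a minimal, hypothetical illustration: the theme labels mirror the six criteria, but the quotes and the function name are invented for demonstration and are not the actual coding instrument used in this study.

```python
# Minimal sketch of the thematic-coding step described above.
# The themes mirror the six criteria; the quotes are hypothetical.
from collections import defaultdict

THEMES = [
    "plurality and balance", "competence", "impartiality",
    "consensus", "political relevance", "external perceptions",
]

def code_quotes(labeled_quotes):
    """Group interview quotes under the themes of the framework.

    labeled_quotes: iterable of (theme, quote) pairs produced by the
    researcher after familiarization with the transcripts.
    """
    matrix = defaultdict(list)
    for theme, quote in labeled_quotes:
        if theme not in THEMES:
            raise ValueError(f"Unknown theme: {theme}")
        matrix[theme].append(quote)
    return matrix

# Hypothetical usage:
coded = code_quotes([
    ("consensus", "Everyone could live with the final text."),
    ("impartiality", "Funding relations were never discussed."),
])
for theme, quotes in coded.items():
    print(theme, "->", len(quotes), "quote(s)")
```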

Operationalization of Indicators

How were the different criteria operationalized? The first criterion of plurality and balance was mainly investigated with desk research and compared with the perceptions of the HLEG members. The latter proved helpful in getting a better sense of the group's composition. The members were asked the following questions: 'Do you believe that all the affected and relevant stakeholders participated in this group and why?' and 'Do you believe that any particular interests or actors were over- or underrepresented in this group and why?' If desk research found a plural and balanced composition of relevant stakeholders with different interests, expertise and nationalities, and if the perceptions of the members reflected these findings, the HLEG was deemed plural and balanced, which would positively enhance the epistemic quality of its work.
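
As an illustration of the quantitative side of this desk research, the sketch below tallies stakeholder types in a group and checks whether any single type dominates. The member counts are hypothetical placeholders, not the actual HLEG composition.

```python
# Illustrative tally behind a plurality and balance check.
# The counts below are hypothetical placeholders, not the real HLEG data.
from collections import Counter

member_types = (["academic"] * 7 + ["journalist"] * 4 + ["adviser"] * 3
                + ["platform"] * 5 + ["civil_society"] * 4)

counts = Counter(member_types)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:13s} {n:2d} members ({n / total:.0%})")

# A simple balance heuristic: no single stakeholder type holds a majority.
largest = counts.most_common(1)[0][1]
print("Balanced (no majority group):", largest / total < 0.5)
```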

To determine the competence of the HLEG experts, desk research was primarily employed to investigate the members' credentials. Furthermore, interviews were conducted to compare the members' perceptions with the desk research findings. The members were asked the following question: 'How do you perceive the level and relevance of the expertise provided by the different members on the issue and why?' Specifically, members were also asked about their views on the level of expertise of the different subgroups. If a substantial amount of relevant credentials was found among the members, in the form of professions, work experience, organizations' credentials, and a high h-index and i-index (indicating the volume of the academics' peer-reviewed work and how often it was cited), and if the members' opinions reflected these findings, the criterion of competence was deemed fulfilled, which would positively enhance the epistemic quality of the HLEG's policy advice.
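
Since the h-index plays a role in this operationalization, a short worked example may help: a scholar has index h if h of his or her papers have each been cited at least h times (the 'i-index' mentioned above presumably refers to a related citation metric such as Google Scholar's i10-index). The function and citation figures below are purely illustrative.

```python
# Worked example of the h-index used as a competence indicator:
# a scholar has index h if h of their papers have >= h citations each.
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one academic's papers.
papers = [48, 30, 12, 9, 7, 3, 1]
print(h_index(papers))  # 5: the top five papers each have >= 5 citations
```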

The impartiality of the members was also mainly explored with desk research of the members' profiles and the organizations' websites to find funding relations. In addition, the members were asked the following question: 'How do you perceive the level of presence or absence of financial conflicts of interest among the HLEG members?' If funding relations were not found among the members, the criterion of impartiality would be met, which would increase the epistemic quality of the HLEG's report.

Additionally, the level of consensus was mainly investigated with the analysis of the minutes, policy documents, and blog posts of members to determine the level of agreement among the members. Also, questions were asked to determine the members' opinions on the consensus: 'How do you perceive the balance of the final policy advice?' and 'How satisfied were you with the final policy advice?' If a high level of agreement was reflected in the policy documents and if those who voted in favor were also positive towards the outcome, then the criterion of consensus was deemed to be met, which would boost the epistemic quality of the HLEG's output.


The political relevance criterion was mainly investigated with an analysis of the final policy document of the HLEG and follow-up policy documents of the Commission. If the Commission produced such reports, these were compared with the HLEG's report to find similarities which could indicate the political relevance of the HLEG's advice (a rough illustration of such a comparison is sketched after Figure 1). In line with Krick's methodological suggestions (2018), the secondary method employed was asking HLEG members and the DG CNECT policy-makers about their opinion on the political relevance of the report. If the document analysis found high similarities and if the perceptions of the members were positive about the policy relevance of the report, the criterion was deemed to be met, which would increase the epistemic quality of the HLEG's advice.

Lastly, the criterion of the external experts' perceptions was mainly explored with the semi-structured interviews. These experts were asked about aspects like the scientific and methodological rigor of the advice and whether they believed the report addressed the original problem of fake news. If the experts generally saw these aspects in a positive light, then this criterion was also met, which would likewise increase the epistemic quality of the report of the HLEG. The operationalization of all the criteria is summed up in Figure 1:

Indicator | Primary Method | Secondary Method
1. Plurality & Balance | Desk research of HLEG's composition | Interviews
2. Competence | Desk research of members' profiles, organizations' websites, and individual works | Interviews
3. Consensus | Qualitative content analysis of minutes, blog posts of members, and policy documents | Interviews
4. Impartiality | Desk research of members' profiles and organizations' websites | Interviews
5. Political relevance | Qualitative content analysis of policy documents | Interviews
6. External experts' perceptions | Interviews | Policy documents

Figure 1: Operationalization of the six criteria.
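
As flagged under criterion 5, the qualitative document comparison could in principle be complemented by a rough quantitative check. The sketch below computes the vocabulary overlap (Jaccard similarity) between two plain-text documents; it is not part of the method actually used in this thesis, and the file names are hypothetical.

```python
# Rough, illustrative complement to the qualitative comparison for
# criterion 5: Jaccard overlap between the vocabularies of two reports.
# File names are hypothetical; the thesis itself used qualitative analysis.
import re

def word_set(text):
    """Lower-cased set of alphabetic tokens in a text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(text_a, text_b):
    """Share of words the two documents have in common."""
    a, b = word_set(text_a), word_set(text_b)
    return len(a & b) / len(a | b) if a | b else 0.0

hleg_report = open("hleg_report.txt", encoding="utf-8").read()
followup = open("commission_communication.txt", encoding="utf-8").read()
print(f"Vocabulary overlap: {jaccard(hleg_report, followup):.2f}")
```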


Results

Context of the HLEG

Before proceeding with an analysis of the different criteria, it is important to highlight different contextual factors of the HLEG. First, the selection procedure maintained by the DG CNECT and the reasons for the set-up of the HLEG should be examined. As also noted by an interviewee, the HLEG process started with a call from the European Parliament requesting the Commission to come up with policy proposals to tackle fake news (European Commission, 2018b). In response, the DG CNECT set up an open call for applications for the HLEG on the 12th of November 2017 (European Commission, 2018b). Applicants had to fill in an application form and the Type A members had to declare potential conflicts of interest. As explained by the HLEG secretariat, the initial plan was to select '20 to 25 members', but due to the 'high number of good applicants […] [it] became 39 members'. Thus, the group was larger than initially planned.

In addition, it is important to discuss the nature of the issue, as fake news can be seen as a 'wicked problem' (Peters, 2017: p. 388). A wicked problem is stubborn and complex and therefore difficult to tame (Korsten, 2019: p. 3). Korsten (2019: p. 4) highlights that this complexity exists in three ways: cognitive, normative, and social. First, cognitive complexity means that knowledge of the issue is uncertain: it is unclear how the problem is caused and how it can be solved, especially if it is a relatively new issue like fake news. Second, normative complexity means that people hold different 'problem-definitions' (Rochefort & Cobb, 1995: p. 1) of the issue and 'frame' (Rein & Schon, 1994) the problem in different ways (Peters, 2017: p. 388; Korsten, 2019: p. 4). Therefore, people can have a variety of opinions on what causes the issue, the extent of its severity, and how to solve it (Rochefort & Cobb, 1995: p. 15 – p. 27). As shown by the Eurobarometer (European Commission, 2018a), people have very different opinions about who and what is causing fake news and how it should be tackled. Third, social complexity means that many different actors are involved in the problem (Peters, 2017: p. 388; Korsten, 2019: p. 4). With wicked problems, no single actor has all the tasks, resources, and authorizations to tackle the issue, often necessitating a multi-actor or co-production approach (Korsten, 2019: p. 4). This social complexity of fake news was also exemplified by the Eurobarometer (European Commission, 2018a), as many different actors were deemed to play a role in the problem. All in all, considering fake news as a wicked problem has implications for criteria such as the plurality of the group and the final consensus, as reaching a consensus on such an issue is arguably more difficult than with less complex problems.

Finally, the time-frame of the HLEG should be discussed. It was found that the HLEG had very limited time to create its policy advice, as the group only had four meetings in a time-span of two months (European Commission, 2019), which according to an interviewee made the 'time-frame extremely ambitious'. As explained by an interviewee, this was because the Commission wanted the policy advice to be finished well before the EU elections of 2019 and therefore it had to be completed before the summer. Furthermore, as the same interviewee highlighted, the new Commissioner responsible for the DG CNECT, Mariya Gabriel, also played a role in this short time frame. It was believed that because she was new, she wanted to 'show something quick', but this was described as 'legitimate for a politician'. The little time the HLEG had could have made it more difficult to bridge all the different interests and could have impacted the overall quality of the final report. Therefore, the lack of time must be considered when evaluating criteria like consensus and the perceptions of the external experts.

Plurality and Balance

Introduction

Having discussed the contextual factors of the HLEG, the different criteria can now be assessed. The first criterion, according to the Commission (2016) and Krick (2018), is that expert groups need to be plural in terms of expertise, interests, and nationalities, and be balanced. This criterion is important as its presence can increase both the democratic and epistemic quality of an expert group's decision. To determine the presence of these factors, this section first highlights which actors are relevant stakeholders in the issue of fake news. Next, it analyzes the criterion through desk research of the HLEG's composition and focuses on the different types of societal actors in the group, their interests, and nationalities. Thereafter, the data is compared with the perceptions of the HLEG members. Based on the list of potential stakeholders, the analysis of the composition, and the perceptions of the HLEG members, this section finds that the DG CNECT managed to gather a plural and balanced expert group. Despite this, however, the interests of citizens and the advertising industry could have been represented better. Finally, this section discusses the implications of these findings as well as possible solutions.

Relevant Stakeholders

Many different actors have a 'stake' in the issue of fake news, as exemplified by Eurobarometer research (European Commission, 2018a). Namely, the European public sees the following actors as responsible for the issue: journalists (45% of the European citizens), governments (39%), press and TV management (36%), citizens (32%), and social media platforms (26%) (European Commission, 2018a). First, journalists have a stake in the issue as they can both write and spread fake news. Second, governments also spread fake news (for example Russia), but given that the goal of the DG CNECT was to bring together societal actors, representatives of national administrations should not have been included. Third, citizens are affected by fake news and, as explained by an interviewee, especially those who are not sufficiently media literate. Finally, corporations like press and TV organizations and social media platforms can also spread fake news through their practices (Pyrzek, 2020; Travers, 2020).

There are also other actors involved in the issue. For example, advertisers have a stake in the problem as they allow people to make money from ads even if their content is 'fake' (Pyrzek, 2020); there are well-established links between fake news and advertising revenues (Pyrzek, 2020). Academics and think tanks are also among those concerned, as they perform research on the issue which gives them relevant expertise on the problem. Given the multifaceted nature of the issue, I believe it would be beneficial if these actors had a wide variety of expertise. Additionally, citizen groups and professional associations are relevant to the issue as they can make claims on behalf of, respectively, citizens who are affected by fake news and particular relevant professionals (like journalists). Citizen groups will for the purpose of this thesis be defined as nonprofit, citizen-initiated, and voluntary organizations that aim to advocate citizens' interests of general importance (Durrance, 1979: p. 221; Kohler-Koch, 2013: p. 5). Finally, fact-checking organizations, some of which also fall under this category but some of which are for-profit companies (Mantzarlis, 2016), can help solve the issue by verifying content and illuminating disinformation (Travers, 2020).

Desk Research of the Composition

To start, desk research showed that there were sixteen 'Type A' members in the HLEG (European Commission, 2019). Seven of these members were full-time academics (idem). The academics were diverse for three reasons. First, they all came from different universities (see annex). Second, they almost all had different nationalities, as there were only two members from the same country (Denmark, see annex). Third, the academics worked in a wide variety of disciplines. While all of the academics had some affinity with the issue of disinformation, they had expertise in communication [2][3], artificial intelligence [4], media and information literacy [5], political science [6], law [7], and engineering and computer science [8]. Thus, there was a high degree of plurality among the academics in terms of different universities, nationalities, and disciplines.

In addition to these members, there was a wide range of other 'Type A' members (European Commission, 2019). To start, there were four journalists in the group (idem). Two of the journalists worked for newspapers and were also visiting professors, and one journalist was also a media executive, actor, screenwriter, and film director [9] (European Commission, 2019). The final journalist worked for an online news platform which combats disinformation [10]. There were also three consultants or 'advisers' (European Commission, 2019): a communication adviser, a legal adviser, and an internet adviser (idem). Finally, there was a representative of an independent Irish media organization (INM) that publishes newspapers and a representative of an

2. See https://www.researchgate.net/profile/Alina_Bargaoanu
3. See https://www.rsu.lv/en/anda-rozukalne
4. See https://pure.au.dk/portal/en/persons/anja-bechmann(b093426c-3157-465e-bd29-fe73ee20d446).html
5. See https://www.media-and-learning.eu/2016/node/2877/lightbox2.html
6. See https://www.politics.ox.ac.uk/research-staff/rasmuskleis-nielsen.html
7. See https://sweetlegaltech.com/personnel/oreste-pollicino/
8. See https://www.martenscentre.eu/people/ziga-turk
9. See https://www.imdb.com/name/nm0704664/
10. See https://maldita.es/maldita-es-journalism-to-not-be-fooled/
