Trust in iGovernment: A comparative analysis of EU public sector data governance


Academic year: 2021

Share "Trust in iGovernment: A comparative analysis of EU public sector data governance"

Copied!
66
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst


Programme: Public Administration – International & European Governance
Course: Thesis
Supervisor: Dr. S.N. Giest
Second Reader: Prof.dr.ing. A.J. Klievink

Trust in iGovernment:
A comparative analysis of EU public sector data governance

Student: Alexander Galt
Student ID: 2411008


Contents

List of Tables
List of Figures
Preface
Introduction
Research Question
Literature Review
  Big-data and Data-driven Technologies
  Values and Ethics
  Governance, Acceptance and Trust
  Engagement, Accountability and Transparency
Theoretical Framework
Hypotheses
Methodology
  Approach
  Operationalisation of Variables
  Reliability, Validity and Robustness
Research Output
  Distribution
  Correlation
  Regression
Analysis
Conclusion
Annex
  1. Survey Methodology
  2. Data Table
  3. Descriptive Statistics
  4. Correlation Matrix
  5. Histograms
  6. Q-Q Plots
  7. Residual Diagnostics
Bibliography


List of Tables

1. Variables
2. Regression Output – Model 1
3. Regression Output – Model 2
4. Regression Output – Model 3
5. Regression Output – Model 4

List of Figures

1. Diagram of tripartite division of principles
2. Governance, Trust and Risk in 'iGovernment' Acceptance


Preface

As a former local government Public Health analyst, I had the pleasure of applying data and analytical techniques to a variety of health and social issues across the county. My team and I worked closely with the IT department, which was undertaking a data architecture project to bring together the organisation's data into a single 'data warehouse', upon which data analytics could be performed. One of the project members told me excitedly about the possibilities that data analysis had to offer (seemingly naïve to the fact that it was indeed my job) and suggested that he might be able to use all this amalgamated data with machine learning algorithms to determine which children to put into state care services.

The proposed approach reminded me of this cartoon1:

[Cartoon omitted. Licensed under a Creative Commons Attribution-Non-Commercial 2.5 Generic Licence.]

My shared enthusiasm for new technology and capability was tempered by a growing apprehension that projects like these were being pursued without proper oversight, theoretical rigour or ethical considerations, and by a fear that if we got it wrong (in the moral sense) there would be negative repercussions across the organisation (and potentially wider government organisations) in its ability to pursue data-driven projects.

1 https://xkcd.com/1838/

Months later we were visited by Xantura (a subsidiary of Ernst and Young), who presented us with their analytics platform2, looking to address the same issue in a perhaps more sophisticated but just as opaque fashion. A week or so later, a front-page story broke in The Guardian3 covering the use of this system by another local authority, citing racism and privacy fears in relation to the application of Xantura's service. Senior management decided to pursue other avenues… seemingly the 'Daily Mail effect' isn't the only media game in town.

This episode at the coalface focused my mind on the necessity of responsible government innovation with data-driven technologies in line with democratic expectations, predicated on trust, accountability and transparency. This in turn led me to my postgraduate studies and acts as the basis for this thesis.

2 https://www.ey.com/uk/en/industries/government---public-sector/ey-protecting-vulnerable-citizens

3


Introduction

We are living in a world increasingly defined by the digital information revolution. The capabilities of information and communication technologies have moved us beyond digitization (the computational recording of information) and into an era of ‘datafication’, that is, “the possibility of everything [emphasis added] being put into a quantifiable format” (Mayer-Schönberger & Cukier, 2013). Floridi describes this as the period of “hyperhistory”, where information and communication technologies utilise vast stores of data not just to mediate the relationship between users and technologies but also to control other technologies - within which we as people become increasingly defined and self-defined (2014).

In this world big-data and data-driven technologies such as artificial intelligence (AI) wield increasing influence over the operations of governments – and in doing so pose new and significant threats to the democratic values of fairness, justice, free will and dignity. Given these threats, there must be both means through which citizens reconcile and accept potential risks in the government use of these technologies and means through which governments maintain and demonstrate ethical uses of such risky technologies. This paper seeks to explain these dynamics in the European Union (EU) via the interplay of trust, risk, accountability and transparency.

Currently the governance literature on these phenomena is underdeveloped given the recent proliferation of such technologies within society, with only cursory acknowledgement given to data governance as an aspect of the new ecosystem of big-data and big-data-driven technologies (Klievink et al, 2014). Related bodies of literature exist across different research fields including philosophy, law, sociology, public policy, public administration and technology. Pioneered by Professor Luciano Floridi, data ethics has emerged as a comprehensive philosophical approach to the normative values that democratic societies should seek to protect from data-driven technologies (Floridi & Taddeo, 2016; Mittelstadt et al, 2016; Drew, 2016); however, this literature does not speak to the functional dynamics of how these ethics should be applied or to the effectiveness of policy measures governments take to maintain ethical uses of data.


The most comprehensive framework regarding data governance comes from Prins et al in iGovernment (2011), who explore the role of data in government through a tripartite model of balancing principles (including accountability and transparency). The empirical evidence in this work is limited to a case study approach in the Netherlands; it therefore has limited generalizability and cannot speak to the scale of the effects of each principle. More generally there is literature regarding the role of trust in government use of risky technologies (Pavone et al, 2015; Siegrist & Cvetkovich, 2000) and how public engagement, accountability and transparency may offer routes for governments to increase trust from citizens (Carolan, 2006; O'Neill, 2002; Blomqvist, 1997).

The scholarly contribution of this paper lies in developing a parsimonious model to understand the mechanisms through which citizens come to accept government use of data-driven technologies and which governance arrangements improve the efficacy of such mechanisms. This is achieved by undertaking multiple regression analysis via a trust, risk and acceptance framework, which utilises existing empirical data from Europe-wide surveys (Eurobarometer and European Social Survey). The results suggest that trust and risk do play a central role in the acceptance of government use of data-driven technologies. However, the governance arrangements that governments employ have promising but less robust effects, suggesting that more detailed work should be undertaken to understand the specific roles that particular governance arrangements have on trust and acceptance of government use of data-driven technologies.

This paper follows a conventional structure: the central research question is followed by a review of literature, which explores each of the relevant concepts thematically. The theoretical framework then formalises these concepts and presents a number of hypotheses to be tested. The methodology section operationalises key concepts into specific variables and details the type of analysis that is undertaken, alongside a discussion of case selection, validity and robustness. The research output section describes the statistical results of the hypothesis testing, followed by a theoretical discussion of the results in relation to the literature and a consideration of their generalizability. The paper concludes with a reflection on the research question and recommendations for further research.


Research Question

How effective are democratic states at developing citizen acceptance in the government use of data-driven technologies?

Literature Review

The field of study relating to governance of big-data and data-driven technologies is relatively disparate given how recently these phenomena have emerged in society. I have therefore undertaken a multi-disciplinary approach to the literature review and theoretical development, amalgamating multiple fields of study including philosophy, law, sociology, public administration, public policy and technology. This section discusses how big-data and data-driven technologies are manifested in modern society, the threats they pose to democratic values, the governance mechanisms relating to acceptance of government use of technologies and the role of trust in this relationship.

Big-data and Data-driven Technologies

Improvements in both the back end and front end of information and communication technologies (ICTs), from data capture and recording to data storage and management, have allowed for the increased quantification of ever more data. Big-data is the result of this process of 'datafication' (Mayer-Schönberger & Cukier, 2013) and may be recognised both by its inherent characteristics, the so-called three V's (high-Volume, high-Velocity and high-Variety) (Kitchin, 2014), and by its functional characteristics relating to data sources, structures, timeliness, handling and usage (Klievink et al, 2017). At the same time processing power has also increased, allowing 'data-driven technologies' to proliferate and new sources of data to be operationalised at wide scale across society. Data-driven technologies are often algorithmic, automated and predictive in nature, commonly referred to as machine-learning (ML) and artificial intelligence (AI), and promise to harness the "implicit, latent value" (Mayer-Schönberger & Cukier, 2013) of information. These technologies may be used to gain new insights from data and/or can be integrated into existing technologies and decision-making processes that would otherwise require human involvement (Floridi, 2014).

Governments have been subject to this paradigm shift in ICTs, moving from a focus on service provision (eGovernment) to a controlling position "typified by information flows and data networks" – so-called iGovernment (Prins et al, 2011). From a rationalist perspective the advent of big-data within the public sector necessarily leads to better decision-making due to the larger amount of actionable information that is made available (Van Der Voort et al, 2019). This perspective acts as the basis for the "driving principles" of iGovernment, in which the utilisation of information and data-driven technologies is 'data-driven' by the notion of gains in effectiveness and efficiency (Prins et al, 2011) – see Figure 1. These driving principles are balanced out by underpinning principles, which are discussed in the next section.

Figure 1 – Diagram of tripartite division of principles, from iGovernment (Prins et al, 2011)

Values and Ethics

Prediction isn't something new; the form in which data-driven technologies deploy it, however, is. Traditional science aims at identifying and quantifying the underlying effects of phenomena via hypothesis testing in search of causal effects upon which predictions can be made. Conversely, big-data and data science methods often leave causality and veracity at the door and instead settle for high fidelity correlation. In this sense "[b]ig-data is about what, not why" by allowing "data to speak for itself" (Mayer-Schönberger & Cukier, 2013). Boyd and Crawford identify this as a specific weakness of data-driven approaches as it "enables the practice of apophenia: seeing patterns where none actually exist, simply because enormous quantities of data can offer connections that radiate in all directions" (2012). This distinction represents a main fault line with respect to data-driven technologies, with big-data devotees endorsing the underlying power of large scale data collection, computation and analysis versus critics arguing against its destabilizing effects and lack of philosophical regulation (Berry, 2011).

Whilst there is on-going debate about the epistemological value of big-data and data-driven technologies, there is relative consensus regarding their potential harms to societal and ethical values (Mittelstadt et al, 2016). First and foremost, the practice of obtaining informed consent when using data is flawed due to the myriad secondary uses of data that cannot be known when data collection commences. This, combined with the increased capacity for re-identification/de-anonymisation of personal data, leads to potential negative impacts on the protection of individual privacy. Furthermore data-driven technologies allow for expanded mechanisms for exercising authority, including "control creep", whereby data is purposefully appropriated for a purpose other than its primary function (Kitchin, 2014), and "penalties on propensities" (Mayer-Schönberger & Cukier, 2013), in which predictions influence action or assign judgement based on likely events rather than actual events, as in the case of predictive policing. An often-overlooked risk of data-driven technologies is to those who are 'discounted' from regular data collection and who therefore have their preferences and behaviours excluded from analysis and decisions (Lerman, 2013). These potential harms have the collective power to negate the values of fairness, justice, free will and dignity in democratic societies.

In response to the potential for harm, ‘data ethics’ has emerged as a new field in philosophy to create a common language of values that democratic societies should seek to protect from big-data and data-driven technologies. Pioneered by Professor Luciano Floridi, data ethics is defined as:


“the branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values)”

with the stipulation that the guiding principle of data science must be social acceptability (Floridi & Taddeo, 2016). Other literature addressing the ethical perspective on big-data follows similar themes, with Prins et al labelling values such as privacy and freedom of choice as the "underpinning principles" of iGovernment (2011) – see Figure 1 – and Custers et al reviewing such issues from a virtue ethics perspective, highlighting privacy, autonomy, dignity and justice (2017). Clearly there is a rationale for governance of data-driven technologies, given both the potential risks to normative values in democratic society and the specified reliance on public acceptance in their proliferation and deployment.

Up until now I have made minimal mention of the specific role of government in relation to data-driven technologies. This is mainly due to the generalizability of the threats and the abstract nature of the values, which enable a discussion of them in terms of their impact on society as a whole rather than from any particular actor. Regarding governance, however, government has a key custodianship role in protecting values, both through legislation and regulation of private actors and in its own conduct through the execution of authority and the provision of public services. This is the main focus of the next section.

Governance, Acceptance and Trust

Governance plays a key role in mediating the roles and responsibilities of the state, citizens and third party actors, especially when it relates to public acceptance of societal developments. Despite this, the state of the literature relating to public acceptability and data governance in government is currently lacking. Klievink et al acknowledge the relevance of data governance in the schema of "big data readiness in government" but do not explain what this means or how it works (2014). Prins et al reflect on governance dynamics from the government perspective via the development of "process-based principles (accountability and transparency)" nested in a wider "tripartite model of principles" (2011) – see Figure 1. Whilst acknowledging that there are driving principles behind governance, the authors stop short of elaborating on the governance mechanisms and instead investigate the interplay between principles via specific case studies across policy sectors. Drew (2016) takes the most direct approach in her study by applying the data ethics principles within government and analysing issues through public consultation and engagement. She finds that public awareness of the role of data science in government is important alongside contextual considerations as to what is in the public interest, with both impacting on the level of trust the public places in government use of data-driven technology (Drew, 2016).

To develop this issue we need to look at literature further afield, toward the governance approaches that mediate the relationship between science, government use of risky technologies and public acceptance, in order to understand the underlying governance dynamics. Pavone et al (2015) take this approach when formulating a framework of acceptability of surveillance technologies, mapping the development of arguments from different perspectives. Firstly the deficit model "attributes opposition towards technology to a lack of scientific knowledge" (Pavone et al, 2015). In this case education and scientific literacy levels within society would be deterministic of acceptance of new technologies. Conflicting research points toward the greater importance of contextual approaches, in which a lack of trust towards scientists and governments and a divergence of practice from ethical values were the key issues behind non-acceptance (Siegrist & Cvetkovich, 2000). Acceptance of government use of technologies therefore appears to be largely predicated on either scientific literacy or on trust and responsiveness to values.

As trust is potentially a key component of public acceptability of government use of technology, it is useful to conceptualise how trust works. Trust is usually conceived of as a multilateral psychological quality that facilitates interpersonal social interactions (Giddens, 1984). From a functional perspective, however, trust is a means of reducing the complexity of the world so that we may act as if the future is certain (Luhmann, 1979). We do this in order to tolerate risk and uncertainty in the face of the complexity of an indeterminate future. From a political philosophy perspective this is known as the "risk society", where trust enables a reduction of risk – especially in situations where one cannot easily opt out, such as the relationship between citizen and state (O'Neill, 2002). In this risk society, institutions and technology increasingly mediate societal interactions and therefore "systems trust" (Luhmann, 1979) or "institutional trust" (Zucker, 1986) emerges as a unilateral form of trust that enables individual confidence in systematised interactions.

Giddens develops this concept of systems trust, stating that "trust in the reliability of systems is based upon the faith in reliability of human individuals" (Giddens, 1990). In this scenario users of a system are able to build trust through "access points", in which interactions, or "facework commitments", with representatives of a system are able to demonstrate trustworthiness and integrity (Giddens, 1990). Data-driven technologies, however, are diminishing the opportunities for interpersonal trust relationships to form between government representatives and citizens. Government is increasingly moving from 'street-level bureaucracy' toward 'system-level bureaucracy', where the "discretionary powers of street level professionals… has shifted to those responsible for programming the decision making process" (Zouridis, van Eck & Bovens, 2019). Prins et al trace this same shift via "computerised professionalism", stating that the shape of accountability is shifting as a consequence (2011). In this scenario the perceived trustworthiness of an authority increases in importance (O'Neill, 2002) and can be demonstrated through the upholding of values such as integrity, reliability and fairness (Mayer, Davis & Schoorman, 1995; Zaheer, McEvily & Perrone, 1998). Therefore governance measures are required to demonstrate such values in response to the changing modes of trust between citizens and state that data-driven technologies are bringing about. Returning to the literature on the acceptance of risky technologies offers some further insights on this.

Engagement, Accountability and Transparency

The democratisation of science through dialogue and public participation in decision-making offers credible restorative effects on public trust (Carolan, 2006). Rather than solving issues outright, the purpose of dialogue is to clarify "what is in conflict with public opinion" (De Marchi, 2003), enabling government decisions to align with the interests of the public. Furthermore 'anticipatory governance' has emerged as a new approach in the field of 'technoscience' to specifically prime and encourage public support of new technologies via 'co-constructive' public participation that addresses any normative issues that may develop in their usage (Guston, 2014). Public engagement therefore represents a useful tool for addressing responsiveness to the normative aspects of the use of data-driven technologies in relation to public trust.

O'Neill also highlights the importance of accountability measures and transparency of decision-making in relation to public trust (2002). Accountability acts as a key foundation of democratic governance systems by ensuring that governments remain responsive and responsible to their citizens, and can be framed both from an instrumental perspective and from a normative perspective as a virtue. Despite the importance of accountability on both of these levels, it remains "conceptually amorphous", with greater literary attention paid to 'accountability mechanisms' than to any comprehensive theory of accountability. Accountability mechanisms are tools that allow for "account giving" (Dubnick & Frederickson, 2011); that is, they enable the capacity of one party to offer an account of their actions to another (Bovens, 2007). Policy makers utilise accountability mechanisms4 in the belief that they will fulfil particular normative objectives of integrity, legitimacy, fairness and trustworthiness in decision-making, as citizens will be able to hold an authority responsible for its actions in line with their values.

Transparency plays a key role in the efficacy of accountability mechanisms as citizens need both information and a means to judge it in order to "discover not only which claims or undertakings we are invited to trust, but what we might reasonably think about them" (O'Neill, 2002). Simply put, transparency allows for an 'open account' to be given regardless of any attempts at obfuscation or evasion of accountability by the account giver. This form of transparency functions through citizen access to relevant information and has manifested in modern democratic states as a 'right to know' or 'right to information', in which citizens have the legal right to request any information from government outside of national security and economic stability concerns (O'Neill, 2002; Taylor & Kelsey, 2016). Through citizen access to information, governments are able to be held accountable (Florini, 1998) and are more responsive to the values of citizens. As such transparency enhances the ethical characteristics of government (Rawlins, 2009).

4 Common accountability mechanisms include independent audit functions, standards and codes of conduct.

Trust is also a direct function of transparency, as trust necessarily relies on an asymmetry of information between parties (Blomqvist, 1997) – indeed this is why trust is invoked in the first place – with the expectation that greater transparency will increase trust. The empirical relationship between transparency and trust is, however, not so clear-cut, with trust bearing no systematic association with greater transparency (Mabillard & Pasquier, 2016). Increased levels of transparency may create a deluge of information that adds to uncertainty rather than decreasing it and may therefore "dislocate our ordinary ways of judging one another's claims and deciding where to put our trust" (O'Neill, 2002). This is especially the case in the world of big and open data, where publication of large volumes of data through government data portals has been found to be counterproductive to the objectives of transparency (Lnenicka, 2015).

To summarise, data-driven technologies pose new threats to normative democratic values. Government has a specific role to play in the governance of these technologies generally, but especially when it comes to its own deployment of them. The success of governance is measured by the level of public acceptance of the use of technology, which is in turn determined by the level of trust. The use of data-driven technologies represents a shift towards system-level decision making; therefore systems trust is determinant in the acceptance of their use. Governance measures addressing accountability, transparency, public engagement and alignment of practice to social values may offer routes to improve levels of trust and social acceptability of government use of risky technologies such as data-driven technologies.

Theoretical Framework

The primary goal of this theoretical framework is to formalise the concept of trust in the relationship between citizens and governments regarding the government's use of data-driven technologies. The secondary goal is then to apply the principles from 'iGovernment' to see how government characteristics and actions can influence this relationship. Together this should produce a parsimonious model of governance, trust and risk in the acceptance of government use of data-driven technologies. The causal diagram is given below:

Figure 2 – Governance, Trust and Risk in ‘iGovernment’ Acceptance

The Citizen Perspective in Figure 2 applies a similar conceptual framework to Carter & Bélanger's Trust and E-government adoption model (2008), which explores the systems trust dynamics that determine citizen uptake of government-provided digital services5. In this model trust is defined through social learning theory as "an expectancy that the promise of an individual or group can be relied upon" (Rotter, 1971) and is based on the characteristics of the trustor (in aggregate terms, the disposition to trust within a society), the traits of the trustee (in this case government) and institutional/system factors (technology).

The expectancy in Figure 2 is that governments (the trustee) will uphold and demonstrate ethical uses of data-driven technologies in order to protect democratic values against the potential threats that these technologies pose to citizens (the trustors). Trust is necessary from citizens due to its ability to mediate the (perceived) risk arising from uncertain outcomes when governments use these technologies. If these expectations are upheld then citizens are predicted to accept government use of these risky technologies (Pavone et al, 2015). Citizen acceptance is therefore defined as the willingness of citizens to accept government use of data-driven technologies.

5 This framework is itself based on the theory of reasoned action (TRA) – a behavioural model used to predict human behaviour based on stated intentions (Ajzen & Fishbein, 1972).

In their E-government adoption model, trust is composed of trust in a 'specific entity' (government) and trust in the reliability of an 'enabling technology' (the Internet) (Carter & Bélanger, 2005) – both representing constructions of 'systems trust' or "institution-based trust" (Carter & Bélanger, 2008). Similarly, in Figure 2 trust is composed of trust in a specific entity (government) and trust in an enabling technology (data-driven technologies). Acceptance of government use of data-driven technologies is contingent not only upon trust in government's integrity and capability in using them as instruments, but also upon trust in data-driven technologies as systems in themselves, particularly regarding automated decision making: the quasi-independent6 nature of data-driven technologies requires us to trust them separately from whoever is using them.

Perceived risk relates to the subjective costs and benefits expected by an individual (Warkentin, 2002). In Figure 2 perceived risk comprises risks posed to individual citizens through the infringement of their rights by government use of data-driven technologies and risks posed to wider society through the undermining of values. As discussed previously, trust is necessary in order to navigate the inherent risks associated with uncertainty (Luhmann, 1979; Mayer et al, 1995), and has also been found to act as a demonstrable check against perceived risk (Pavlou, 2003). Both trust in government and trust in data-driven technologies are expected to reduce the level of perceived risk by citizens in the government use of data-driven technologies.

Societal trust is taken as the disposition to trust other people and is based on 'interpersonal trust' or "personality-based trust" (Carter & Bélanger, 2008). This form of trust is beyond the direct influence of government and instead represents the socialised propensity to believe in the positive intentions of others (Rotter, 1971). This factor reflects the perennial differences in cultural attitudes towards other people and institutions in a society. A higher level of societal trust should result in a higher level of trust in government and trust in data-driven technologies.

6 Data-driven technologies are still subject to an initial human-based (re-)design process and are therefore not yet deemed fully independent despite having the ability to make autonomous decisions.

The Government Perspective in Figure 2 applies the conceptual framework presented in iGovernment – see Figure 1 – in which the 'Driving principles' of effectiveness and efficiency of government are balanced with the 'Underpinning principles' of fundamental citizen rights and societal values such as privacy and freedom of choice, mediated through the 'Process-based principles' of accountability and transparency (Prins et al, 2011). These process-based principles demonstrate and communicate to the public that government has both competence and integrity in the provision of data-driven public services. Trust may be established through accountability both symbolically and instrumentally, by democratising decision making and by providing routes to recourse for (mis-)actions. Accountability mechanisms thereby reduce uncertainty and increase responsiveness, which in turn increases trust and reduces the level of perceived risk.

A degree of transparency is in itself necessary for any form of accountability (Grant and Keohane, 2005), as accountability measures need to be open and explainable to be verifiable and considered trustworthy. Broadly speaking these process-based principles represent the governance profile of government and offer positive mechanisms through which citizens may build and maintain trust in, and acceptance of, government and its use of data-driven technologies. Accountability measures are expected to increase the level of trust in government and trust in data-driven technologies. Due to the nuanced relationship between transparency and trust, the main hypothesis that transparency is expected to increase levels of trust is contested by the alternative hypothesis that transparency may actually decrease levels of trust in government and data-driven technologies.


Hypotheses

The main hypotheses are:

H1: Perceived risk will negatively influence acceptance of government use of data-driven technologies.
H2: Trust in government will reduce the perceived risk associated with government use of data-driven technologies.
H3: Trust in data-driven technologies will reduce the perceived risk associated with government use of data-driven technologies.

The sub-hypotheses are:

H4: Accountability mechanisms will positively influence trust in government.
H5: Accountability mechanisms will positively influence trust in data-driven technologies.
H6: Transparency will positively influence trust in government.
H7: Transparency will positively influence trust in data-driven technologies.
H8: Public engagement will positively influence trust in government.
H9: Public engagement will positively influence trust in data-driven technologies.

The alternative hypotheses are:

H10: Digital literacy will positively influence the acceptance of government use of data-driven technologies.
H11: Transparency will negatively influence trust in government.
H12: Transparency will negatively influence trust in data-driven technologies.

Methodology

The central research question of this paper concerns public acceptance of the government use of data-driven technologies and the effectiveness of actions that governments take, in the form of big-data governance, to encourage public acceptance. The key concepts that relate to this are risk, trust, accountability, transparency and engagement, as discussed in the previous section.


Approach

The methodological approach applied in this paper is deductive and comparative in nature, via large-N multivariate linear regression analysis. This method was chosen for its ability to undertake robust multivariable pathway analysis and hypothesis testing whilst controlling for covariates and confounding variables. It also allows systematic differences to be understood, which is important given the number of variables within the conceptual model and the potential for both interaction and marginal effects of variables.
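To make the specification concrete, the general form of the models estimated later can be sketched as follows; the notation here is illustrative only (the exact variable set per model is given in Tables 2–5), with c indexing countries:

```latex
\mathrm{Acceptance}_c = \beta_0
  + \beta_1\,\mathrm{Benefit}_c
  + \beta_2\,\mathrm{Trust}_c
  + \beta_3\,\mathrm{DigitalLiteracy}_c
  + \beta_4\,\mathrm{TrustSociety}_c
  + \varepsilon_c
```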

The cases selected for this analysis are EU member states, primarily because this allows the greatest number of variables within the model to be taken as exogenous, thereby contributing to the explanatory power of the independent variables. Furthermore EU member states are comparatively advanced in the government use of data-driven technologies, and there is therefore sufficient variance in the dependent variable across EU countries to allow the effects of the explanatory variables to be shown.

The 'Underpinning Principles' are taken to be exogenous as the General Data Protection Regulation (GDPR) instils privacy rights and freedom of choice within legislation across all EU countries. Certain aspects of differentiated implementation of the GDPR have been highlighted by Custers et al (2018); however, the European Data Protection Board is actively working towards harmonisation of implementation and interpretation of the legislation across member states, so this can be assumed constant for this model. Similarly the 'Driving Principles' are taken to be exogenous given the commonality of public sector culture and the shared objective among EU governments of achieving effectiveness and efficiency in the use of AI. This can most notably be seen in the work of the EU High-Level Expert Group on AI and its work on a pan-EU AI strategy (European Commission, 2019). This group is currently working on ethical guidelines regarding the use of AI; however, as these are to date still in the consultation phase, they are not taken as exogenous given the current disparity in individual country-level efforts.


Operationalisation of Variables

A number of variables within the theoretical framework have direct operationalisations, with survey questions specifically addressing the underlying concept (e.g. trust), and therefore act as strong indicators of the underlying variable. Other variables have less direct operationalisations, with non-specific survey questioning resulting in proxy variables or factor analysis being used instead. It is acknowledged that there is potential for subjectivity in survey data as concepts can take on different meanings within different contexts; however, these effects are likely to be minimal given the objective style of questioning within surveys and the fact that all surveys are undertaken within the EU, which has relative cultural alignment – see Annex for survey methodology. It is also acknowledged that some questioning is undertaken in relation to data protection regulation (GDPR) and respondents may therefore be primed for this line of thinking rather than the wider aspects of values and ethics.

The societal trust variable (Trust_Society) is operationalised through direct survey questioning regarding trust in other people, representing the baseline level of trust that exists within each country. This variable has missing data for 12 countries within the 2018 sample; further data is therefore taken from the 2016 sample in order to fill in data for 4 countries. The correlation coefficient between the matching 2016 and 2018 data is 0.99, and the combined dataset is therefore taken to produce the most robust and consistent variable for societal trust. A second equivalent variable (Trust_Society2), which has full data coverage, has also been prepared; however, this data is from 2013 and will therefore only be used if missingness in the first dataset causes significant issues with variances and degrees of freedom within the regression models. Societal trust acts as a control variable across models.
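As an illustration of how the combined Trust_Society indicator could be assembled in R, the sketch below fills 2018 gaps with 2016 values; the country codes, column names and values are hypothetical, not the actual ESS field names.

```r
library(dplyr)

# Hypothetical extracts from the two ESS waves (values illustrative only)
ess_2018 <- data.frame(country = c("AT", "BE", "BG"), trust = c(5.1, 5.3, NA))
ess_2016 <- data.frame(country = c("AT", "BE", "BG"), trust = c(5.0, 5.2, 4.1))

combined <- ess_2018 %>%
  full_join(ess_2016, by = "country", suffix = c("_2018", "_2016")) %>%
  mutate(Trust_Society = coalesce(trust_2018, trust_2016))  # prefer 2018 data

# Agreement between waves on overlapping countries (reported above as 0.99)
cor(combined$trust_2018, combined$trust_2016, use = "complete.obs")
```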

The trust in government variable (Trust_Government) is also directly operationalised from survey data and acts as an explanatory variable across models. The trust in data-driven technologies variable (Trust_DDT) is indirectly operationalised through survey data, as there is no survey question relating to trust specifically. Survey data relating to a 'positive view' towards data-driven technologies is taken as the closest available proxy for trust. Trust in data-driven technologies also acts as an explanatory variable across models. An interaction variable (Trust_GovDDT) is also constructed from both of these indicators and is tested as a separate explanatory variable across models.

The perceived risk/benefit variable is directly operationalised through survey data at both a societal level (Social_Benefit) and an individual level (Individual_Benefit) in order to test different perspectives on where the balance of risk and benefit lies. These are taken as the main explanatory variables. An interaction variable (Net_Benefit) is also constructed from these indicators in order to understand the impact of the combined benefit of data-driven technologies for individuals and society.
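A minimal sketch of how the two interaction variables could be constructed in R, assuming a simple multiplicative form (the thesis does not state the exact functional form); `df` is a hypothetical country-level data frame holding the Table 1 indicators.

```r
# Interaction of the two trust indicators (multiplicative form assumed)
df$Trust_GovDDT <- df$Trust_Gov * df$Trust_DDT

# Combined individual and societal benefit (multiplicative form assumed)
df$Net_Benefit <- df$Social_Benefit * df$Individual_Benefit
```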

Digital literacy is included as a potential confounding variable (Digital_Literacy) due to the predictions of the deficit model that knowledge of technologies has a positive relationship with their acceptability. Digital literacy is taken as the knowledge that people have regarding their rights under GDPR and acts as a control variable across models.

The accountability variable (Accountability) is constructed as a factor variable by taking the average of 3 binary variables measuring the existence of specific accountability mechanisms relating to government use of data-driven technologies. The existence of an AI strategy provides accountability via the publication of non-binding information about government intentions, which citizens can openly judge as being fulfilled or otherwise. Similarly the publication of AI standards allows for accountability by providing a checklist of how AI is supposed to be used and what AI is and isn't supposed to be doing in society. An AI advisory board provides an independent expert body both to inform government policy decisions regarding AI and to publish publicly accessible assessments of government actions regarding AI in relation to its stated strategy and standards. This variable acts as a secondary explanatory variable, as it is tested separately for its impact on trust as well as for its impact on acceptance.
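The construction of the Accountability score can be sketched in one line of R; the three binary column names here are illustrative.

```r
# Mean of three 0/1 indicators, giving scores of 0, 1/3, 2/3 or 1 per country
df$Accountability <- rowMeans(df[, c("ai_strategy", "ai_advisory_board",
                                     "ai_standards")])
```

This discreteness is what produces the stepped Q-Q plot noted in the Research Output section.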

The openness (Openness), transparency (Access) and engagement (Consultation) variables also act as secondary explanatory variables for this reason. The data source index constructs openness as an overarching indicator for transparency and engagement, made up of data regarding civic participation, e-participation, open data, and rights and access to information. The transparency (access to information) and engagement (civic participation) variables take individual indicators from this index domain in order to understand whether there are more specific dynamics than openness. Where these are used, openness is not used, due to the issue of multicollinearity.

Reliability, Validity and Robustness

Reliability of the specific models analysed in this paper is warranted through the use of open survey data from trusted statistical and academic authorities. Reliability of the underlying results is considered strong due to the large sample sizes of the data being used, which ensure representativeness of views within societies – see Annex for survey methodology.

Whilst the survey questions are potentially subjective and general, the scale of responses in each country is consistent and the nature of surveying should net out any non-systematic effects of subjective answers. This is a potential limitation of the methodology; however, given data availability, this is the most valid approach that could have been achieved with the available resources.

Robustness is ensured via the consistency of questioning across survey periods and the large sample sizes that ensure representativeness of the relevant population – see Annex for survey methodology. An attempt has been made to collect the most recent data available for all variables, given the contemporary nature of the research subject. The variables, definitions, indicators and sources of data are given in Table 1 below. The full data table and summary statistics can be found in the Annex.


Table 1 – Variables

Societal Trust (control variable)
  Definition: The disposition to trust other people.
  Indicators: "Most people can be trusted" – average score (0–7) [Trust_Society] – European Social Survey (2016/18); "Trust in others" (ilc_pw03) [Trust_Society2] – Eurostat (2013)

Trust in Government (explanatory variable)
  Definition: The disposition to trust the government.
  Indicator: "Tend to trust: The (National) Government" (%) [Trust_Government] – QA6a.8, Standard Eurobarometer 91 (2019)

Trust in Data-Driven Technology (explanatory variable)
  Definition: The disposition to trust data-driven technologies.
  Indicator: "Positive view of robots and artificial intelligence" (%) [Trust_DDT] – QD10, Special Eurobarometer 460 (2017)

Perceived Risk/Benefit (main explanatory variables)
  Definition: The subjective cost/benefit of the use of data-driven technologies for society and for the individual.
  Indicators: "Positive impact of recent digital technologies on: The society" (%) [Social_Benefit] – QD1.2, Special Eurobarometer 460 (2017); "Positive impact of recent digital technologies on: Your quality of life" (%) [Individual_Benefit] – QD1.2, Special Eurobarometer 460 (2017)

Digital Literacy (control variable)
  Definition: The level of knowledge in society regarding data-driven technologies.
  Indicator: "Heard of at least 1 (GDPR) right" (%) [Digital_Literacy] – QB18.5, Eurobarometer 91.2 (2019)

Accountability (secondary explanatory variable)
  Definition: Mechanisms that allow for account giving of policy decisions regarding data-driven technologies.
  Indicators (all binary) [Accountability]: AI Strategy – OII AI Readiness Index (2019); AI Advisory Board – NESTA AI Governance Index (2019); AI Standards – NESTA AI Governance Index (2019), European AI Landscape – Workshop Report (2018)

Openness (secondary explanatory variable)
  Definition: The level of transparency regarding policy decisions.
  Indicator: Openness domain [Openness] – International Civil Service Effectiveness (InCiSE) Index (2019)

Transparency (secondary explanatory variable)
  Definition: The level of access society has to the information used to make policy decisions.
  Indicator: "Access to information", an indicator within the Openness domain [Access] – International Civil Service Effectiveness (InCiSE) Index (2019)

Acceptance (dependent variable)
  Definition: The willingness of citizens to accept government data-driven technologies.
  Indicator: "Public authorities are the actors best placed to take effective actions to address the impacts of recent digital technologies" [Acceptance]


Research Output

This section details the statistical output from the formal hypothesis testing models that were analysed. Three variations of the model were analysed to conduct isolated tests of specific hypotheses, with a fourth model bringing all the variables together to construct a parsimonious model of trust, risk and acceptance in the government use of data-driven technologies. The ordinary least squares method of regression analysis was used via the statistical analysis program R.
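For illustration, one specification from Model 1 and its interaction-based counterpart could be fitted in R as below; `df` is a hypothetical data frame of the Table 1 variables, and the exact variable set per column is as shown in Tables 2–5.

```r
# Model 1, one specification: separate trust terms and separate benefit terms
m1 <- lm(Acceptance ~ Trust_Society + Trust_Gov + Trust_DDT +
           Social_Benefit + Individual_Benefit + Digital_Literacy,
         data = df)

# A leaner specification: interaction trust term and combined benefit term
m7 <- lm(Acceptance ~ Trust_Society + Trust_GovDDT + Net_Benefit +
           Digital_Literacy, data = df)

summary(m1)  # coefficients, R-squared and F-statistic as reported in Table 2
```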

Distribution

All variables appear to have normal distributions despite some slight peaking at the tails (see histograms and Q-Q plots in the Annex); data transformations are therefore not required. This is expected given the sample sizes and representative nature of the survey data detailed in the Methodology section. The Accountability variable is constructed from 3 binary variables and therefore behaves more like a factor variable, which explains the stepped nature of its Q-Q plot.
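The normality checks described above can be reproduced with base R graphics; a minimal sketch for the dependent variable (the same pattern applies to each indicator):

```r
par(mfrow = c(1, 2))                       # histogram and Q-Q plot side by side
hist(df$Acceptance, main = "Acceptance", xlab = "Acceptance")
qqnorm(df$Acceptance); qqline(df$Acceptance)
```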

Correlation

As expected, there is a strong and significant positive pairwise correlation between the societal trust indicators, showing that the secondary indicator is a good proxy should the first be deemed unsuitable in regression analysis due to missing data causing insignificance across models.

The Acceptance variable shows promising significant relationships in the expected direction with all main explanatory variables other than Individual Benefit, which does not have a significant correlation. Digital Literacy also has strong and significant pairwise correlations with many other variables, potentially pointing to the relevance of the alternative hypothesis that knowledge is an explanatory factor in acceptance of data-driven technologies. Neither Openness nor Accountability has significant pairwise correlations with Trust in Government or Trust in Data-driven Technologies, which may point to the weakness of the sub-hypotheses above.
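A sketch of how the pairwise correlation matrix could be produced in R (variable list abbreviated; the significance of an individual pair can be checked with cor.test):

```r
vars <- c("Acceptance", "Trust_Society", "Trust_Gov", "Trust_DDT",
          "Social_Benefit", "Individual_Benefit", "Digital_Literacy",
          "Accountability", "Openness")
round(cor(df[, vars], use = "pairwise.complete.obs"), 2)

# e.g. test one pairwise correlation for significance
cor.test(df$Acceptance, df$Social_Benefit)
```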


Regression

Ordinary Least Squares regression analysis has been undertaken on different stages of the theoretical framework in order to test the hypotheses given above. See Annex for diagnostic plots of the residuals for each model.

Table 2 – Regression Output - Model 1

Dependent variable: Acceptance

                      (1)       (2)       (3)       (4)       (5)       (6)       (7)       (8)
Constant              0.13      0.12      0.26      0.18      0.09      0.25      0.20      0.27
                     (0.17)    (0.23)    (0.21)    (0.27)    (0.18)    (0.16)    (0.21)    (0.20)
Trust_Society        -0.02               -0.02               -0.01               -0.01
                     (0.02)              (0.02)              (0.03)              (0.03)
Trust_Society2                 -0.02               -0.03               -0.02               -0.03
                               (0.03)              (0.02)              (0.03)              (0.02)
Trust_Gov             0.04     -0.01                          0.11     -0.04
                     (0.18)    (0.14)                        (0.23)    (0.14)
Trust_DDT             0.34     -0.26                          0.20     -0.23
                     (0.24)    (0.28)                        (0.32)    (0.28)
Trust_GovDDT                              0.29      0.004                          0.26     -0.02
                                         (0.24)    (0.22)                        (0.31)    (0.22)
Social_Benefit        1.09***   0.72*     1.08***   0.61
                     (0.33)    (0.40)    (0.32)    (0.39)
Individual_Benefit   -1.41***  -0.25     -1.30***  -0.27
                     (0.45)    (0.51)    (0.43)    (0.50)
Net_Benefit                                                   0.10      0.45*     0.12      0.36
                                                             (0.24)    (0.22)    (0.21)    (0.21)
Digital_Literacy      0.63**    0.47*     0.56*     0.37      0.33      0.42*     0.30      0.33
                     (0.28)    (0.25)    (0.27)    (0.25)    (0.35)    (0.25)    (0.34)    (0.25)

Observations          20        28        20        28        20        28        20        28
R2                    0.68      0.26      0.67      0.23      0.40      0.23      0.41      0.20
Adjusted R2           0.54      0.05      0.55      0.06      0.19      0.05      0.25      0.06
Residual Std. Error   0.06      0.08      0.05      0.08      0.07      0.08      0.07      0.08
  (residual df)       13        21        14        22        14        22        15        23
F Statistic           4.67***   1.26      5.65***   1.35      1.90      1.29      2.56*     1.44
  (df)                6;13      6;21      5;14      5;22      5;14      5;22      4;15      4;23

Note: *p<0.1; **p<0.05; ***p<0.01


Hypothesis 1 appears to be partially satisfied, as Social Benefit (the opposite of risk) significantly increases the level of citizen acceptance of government use/management of data-driven technologies. However, Individual Benefit has a significantly negative relationship with Acceptance, suggesting that individuals potentially expect and accept personal risk in order to obtain societal benefit from the government use of data-driven technologies. The relative magnitude of the coefficients for these variables is also interesting, with a greater personal risk taken than the social benefit needed in order to induce Acceptance.

Although the coefficients and individual significance levels of Trust_Society and Trust_Society2 do not differ greatly, the joint significance and overall explanatory power of models that include Trust_Society2 are greatly diminished compared to models using Trust_Society, as seen in the R2 values and F-statistics respectively.

Models 1.1 and 1.3 perform best out of all of the models in terms of joint significance (see F-statistic) and explanatory power, with the highest R2 and Adjusted R2 values and stable coefficients and individual significance across models. No aspect of Trust appears to have a directly significant influence on Acceptance; however, the relationship between trust and perceived risk/benefit is explored in Model 2 below.


Table 3 – Regression Output - Model 2

Dependent variable:   Social_Benefit       Individual_Benefit   Net_Benefit
                      (1)       (2)        (3)       (4)        (5)       (6)
Constant              0.50***   0.85***    0.41***   0.70***    0.17      0.61***
                     (0.17)    (0.18)     (0.12)    (0.14)     (0.19)    (0.20)
Trust_Society         0.01      0.01      -0.001     0.01       0.003     0.01
                     (0.03)    (0.03)     (0.02)    (0.02)     (0.03)    (0.04)
Trust_Gov             0.22                 0.11                 0.23
                     (0.22)               (0.16)               (0.25)
Trust_DDT             0.65**               0.54***              0.81***
                     (0.25)               (0.18)               (0.27)
Trust_GovDDT                    0.66**               0.49**               0.80**
                               (0.28)               (0.21)               (0.31)
Digital_Literacy     -0.48     -0.57      -0.13     -0.23      -0.42     -0.55
                     (0.33)    (0.34)     (0.24)    (0.25)     (0.36)    (0.38)

Observations          20        20         20        20         20        20
R2                    0.43      0.32       0.52      0.39       0.50      0.37
Adjusted R2           0.28      0.20       0.39      0.27       0.36      0.25
Residual Std. Error   0.07      0.08       0.05      0.06       0.08      0.09
  (residual df)       15        16         15        16         15        16
F Statistic           2.88*     2.55*      4.02**    3.37**     3.73**    3.14*
  (df)                4;15      3;16       4;15      3;16       4;15      3;16

Note: *p<0.1; **p<0.05; ***p<0.01

Hypothesis 2 is not satisfied, as there is no significant relationship between trust in government and societal, individual or net benefit of government use of data-driven technologies. Hypothesis 3, however, is satisfied, with a strong and significant positive relationship between trust in data-driven technologies and benefit across all models. Given the high individual significance of Trust_DDT and the relative insignificance of Trust_Gov, models with these variables separated outperform those using the interaction variable Trust_GovDDT, even when comparing adjusted R2 scores. These models explain up to 50% of the variation in benefit with persistent joint significance, owed primarily to the strength of significance of trust in data-driven technologies. Digital literacy appears to operate outside of the perceived risk/benefit relationship that the theoretical framework details and therefore has its own mechanisms of influence on Acceptance.


Model 3 addresses the mechanisms through which governments may increase trust in themselves and in data-driven technologies. Given that trust in data-driven technologies has a significant relationship with perceived risk/benefit, it would be especially interesting to find any significant influence through levels of transparency or accountability mechanisms.

Table 4 – Regression Output - Model 3

Dependent variable:   Trust_Gov                                 Trust_DDT
                      (1)       (2)       (3)       (4)        (5)       (6)       (7)       (8)
Constant             -0.21     -0.45***  -0.13     -0.45*      0.34**    0.25      0.43***   0.33
                     (0.13)    (0.13)    (0.17)    (0.21)     (0.12)    (0.16)    (0.14)    (0.19)
Trust_Society         0.15***   0.08**    0.12**    0.06       0.06*     0.03      0.03      0.01
                     (0.03)    (0.03)    (0.04)    (0.05)     (0.03)    (0.04)    (0.04)    (0.04)
Digital_Literacy                0.81***             0.86**               0.31                0.26
                               (0.27)              (0.38)               (0.32)              (0.35)
Accountability       -0.03     -0.03     -0.08     -0.07      -0.02     -0.02     -0.03     -0.03
                     (0.07)    (0.06)    (0.09)    (0.08)     (0.07)    (0.07)    (0.07)    (0.07)
Openness             -0.28**   -0.30***                       -0.001    -0.01
                     (0.12)    (0.10)                         (0.12)    (0.12)
Consultation                              0.01     -0.16                           0.16      0.11
                                         (0.20)    (0.19)                         (0.16)    (0.18)
Access                                   -0.10     -0.01                           0.002     0.03
                                         (0.14)    (0.13)                         (0.11)    (0.12)

Observations          18        18        18        18         18        18        18        18
R2                    0.64      0.79      0.53      0.67       0.31      0.36      0.38      0.41
Adjusted R2           0.56      0.72      0.38      0.53       0.16      0.16      0.19      0.16
Residual Std. Error   0.08      0.06      0.10      0.08       0.08      0.08      0.08      0.08
  (residual df)       14        13        13        12         14        13        13        12
F Statistic           8.31***  12.19***   3.60**    4.78**     2.09      1.79      1.99      1.65
  (df)                3;14      4;13      4;13      5;12       3;14      4;13      4;13      5;12

Note: *p<0.1; **p<0.05; ***p<0.01

Hypotheses 4 and 5 are not satisfied due to the insignificant (and negative) relationship between accountability and trust across all models. Hypothesis 6 is rejected in favour of partial support for the alternative hypothesis 11 that transparency has a negative influence on trust in government. Additional support for hypothesis 11 comes as openness, as an overarching indicator, also has a significant negative relationship with trust in government. Hypothesis 7 is not upheld due to the insignificant relationship between transparency and trust in data-driven technologies, although the coefficients are positive and therefore lend it more support than the alternative hypothesis 12 that transparency will decrease trust in data-driven technologies. Hypotheses 8 and 9 are also not upheld due to the insignificant relationship between engagement and trust across all models.

Whilst the relationship between digital literacy and perceived risk/benefit is insignificant in Model 2, there is a significantly positive relationship between digital literacy and trust in government; however, this may be an issue of reverse causality, which remains beyond the scope of this paper. As a result, despite models 3.2 and 3.4 having the highest joint significance and explanatory power, they are not taken as robust models with explainable results. The best performing model is therefore 3.1, which aggregates indicators into openness. The direction of the relationship was given as an alternative hypothesis, in that research is starting to show that too much openness/transparency can have a negative impact on trust. The high level of significance means that there is potential for this relationship to be explored in further research.

Model 4 brings together all of the variables investigated to build a single parsimonious model.


Table 5 – Regression Output - Model 4

Dependent variable: Acceptance

                      (1)       (2)       (3)       (4)       (5)       (6)       (7)       (8)
Constant              0.23      0.45**    0.16      0.34      0.19      0.40*     0.21      0.36
                     (0.16)    (0.20)    (0.21)    (0.23)    (0.19)    (0.22)    (0.23)    (0.22)
Trust_Society        -0.02     -0.03     -0.01     -0.01     -0.01     -0.02     -0.01     -0.02
                     (0.03)    (0.03)    (0.04)    (0.04)    (0.03)    (0.03)    (0.04)    (0.04)
Trust_Gov             0.27                0.31                0.15                0.26
                     (0.20)              (0.30)              (0.19)              (0.24)
Trust_DDT             0.43*               0.36                0.41                0.25
                     (0.22)              (0.32)              (0.27)              (0.35)
Trust_GovDDT                    0.58**              0.52                0.43*               0.45
                               (0.22)              (0.33)              (0.23)              (0.31)
Social_Benefit        0.94***   1.00***                       0.84**    0.84**
                     (0.28)    (0.26)                        (0.32)    (0.30)
Individual_Benefit   -1.26***  -1.24***                      -1.14**   -1.03**
                     (0.37)    (0.35)                        (0.44)    (0.39)
Net_Benefit                               0.03      0.08                          0.06      0.07
                                         (0.24)    (0.20)                        (0.25)    (0.20)
Digital_Literacy      0.21      0.19     -0.09     -0.08      0.26      0.20     -0.06     -0.06
                     (0.26)    (0.24)    (0.37)    (0.34)    (0.30)    (0.27)    (0.36)    (0.34)
Accountability        0.03      0.03      0.04      0.04      0.04      0.04      0.04      0.04
                     (0.04)    (0.04)    (0.07)    (0.06)    (0.05)    (0.05)    (0.07)    (0.06)
Openness              0.13      0.15*     0.10      0.10
                     (0.09)    (0.07)    (0.13)    (0.10)
Consultation                                                  0.07      0.11      0.12      0.11
                                                             (0.13)    (0.11)    (0.17)    (0.14)
Access                                                        0.01      0.03      0.04      0.05
                                                             (0.08)    (0.08)    (0.10)    (0.10)

Observations          18        18        18        18        18        18        18        18
R2                    0.84      0.84      0.61      0.60      0.82      0.82      0.65      0.65
Adjusted R2           0.70      0.72      0.34      0.38      0.62      0.65      0.34      0.41
Residual Std. Error   0.04      0.04      0.07      0.06      0.05      0.05      0.07      0.06
  (residual df)       9         10        10        11        8         9         9         10
F Statistic           5.96***   7.30***   2.26      2.77*     4.06**    4.96**    2.08      2.67*
  (df)                8;9       7;10      7;10      6;11      9;8       8;9       8;9       7;10

Note: *p<0.1; **p<0.05; ***p<0.01


Individual and societal benefits appear to have distinct relationships with acceptance and are therefore kept separate in the final model. Both remain highly significant with stable coefficients, as in Model 1. The interaction variable for trust gained greater significance than the separate aspects of trust in model 4.2, compared to model 4.1 where only trust in data-driven technologies is significant. This suggests that acceptance depends on the interaction between trust in government and trust in data-driven technologies rather than on trust in each separately. In model 4.2 openness becomes a significant variable with a positive relationship with acceptance. This is interesting given the significant yet negative relationship openness had with trust in government in models 3.1 and 3.2, and it therefore deserves further investigation.
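
As a sketch of how this interaction specification can be expressed, continuing the hypothetical setup above, and assuming (as is standard) that the interaction term is constructed as the product of the two trust measures:

    # Model 4.2 as a formula: acceptance regressed on the trust interaction,
    # with social and individual benefit kept as separate regressors.
    # Continues the hypothetical 'df' from the earlier sketch.
    import statsmodels.formula.api as smf

    # The interaction variable is taken here as the product of the two
    # trust measures (an assumption about its construction).
    df["Trust_GovDDT"] = df["Trust_Gov"] * df["Trust_DDT"]

    model_42 = smf.ols(
        "Acceptance ~ Trust_Society + Trust_GovDDT + Social_Benefit"
        " + Individual_Benefit + Digital_Literacy + Accountability + Openness",
        data=df,
    ).fit()
    print(model_42.params)

On this specification, a significant product term alongside insignificant separate trust measures matches the reading above: trust in government matters only jointly with trust in the technology.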

Accountability, engagement and transparency all remain individually insignificant across all models; however, each produced the expected direction of coefficient. The inclusion of these variables also boosts the explanatory power of the models compared to Model 1, where the adjusted R2 values were around 0.1 lower than comparable models in Model 4. Digital literacy drops out of significance, whereas Model 1 pointed towards supporting the alternative hypothesis that it was an explanatory variable. The changing significance of this variable across models means the mechanisms linking digital literacy, trust and acceptance deserve further investigation.

The final model is 4.2, given its highest explanatory power, highest joint significance and the most individually significant variables that support the hypotheses – including the main explanatory variable, that perceived benefit for society will increase acceptance of government use of data-driven technologies. Interestingly, this comes at an apparent cost to individual benefit, in which individuals accept perceived risk to themselves – potentially in the form of personal privacy rights – in order for governments to pursue positive social outcomes using data-driven technologies.


Analysis

This paper seeks to answer through what means citizens accept government use of data-driven technologies, in view of their risks to values within democratic society, and how governments may in turn increase acceptance of them. The models tested above pursue a trust- and risk-based notion of acceptance, with government able to influence this relationship through engagement with citizens, transparency measures and the presence of specific accountability mechanisms relating to the use of AI.

In line with the hypotheses, the results indicate that trust in the ‘enabling technology’ of data-driven technologies had a meaningful and individually significant role in the acceptance of government use of data-driven technologies, but trust in the ‘specific entity’ of government did not. This suggests that the trust relationship citizens have with government is irrelevant in contrast to the trust relationship citizens have with data-driven technologies – potentially highlighting the depth to which modern society has ventured into the paradigm of ‘hyperhistory’, whereby the relationships we have with technologies are becoming of paramount importance (Floridi, 2014). The expectation was that trust in government would have an individually influential role in the perception of risk in government use of data-driven technologies; however, these results should be understood in the wider context of decreasing levels of trust in democratic institutions as a whole (O’Neill, 2002), which may obfuscate some of the specific dynamics within the models. A further explanation may arise from the dislocation between stated sentiment and action, i.e. people may respond as not having trust in government yet act as if they do (O’Neill, 2002). The psychological foundations of this phenomenon lie beyond the scope of this paper. Furthermore, the issue is somewhat inescapable with survey data, and therefore the assumption is made that people say what they mean, in order to retain the empirical validity of the results.

The final model did, however, find that the strongest relationship between trust and acceptance was in the interaction between trust in government and trust in data-driven technologies, signifying that trust in government is important but dependent on its interaction with trust in data-driven technologies – that is to say, trust in government alone does not lead to acceptance of government use of data-driven technologies, but relies on trust in data-driven technologies at the same time. Combined with the result that trust in data-driven technologies significantly increased acceptance on its own, this would suggest that trust in data-driven technologies is the most important aspect of the trust dynamics relating to acceptance of government use of data-driven technologies.

Interestingly, the models presented a nuanced result in the relationship between perceived risk and acceptance, with individual risk being accepted in the name of social benefit. The expectation was that trust would reduce both the individual and societal perception of risk, which in turn should have a negative relationship with acceptance (Siegrist & Cvetkovich, 2000). These results instead point towards a form of tolerance of individual risk in order for government to pursue socially beneficial outcomes in the use of data-driven technologies – potentially in the form of individual privacy rights. It could also be an expression of support for wider ethical values and the perception that personal risk is necessary in their pursuit. The scale effects between individual risk and social benefit of data-driven technologies are also curious – with a greater proportion of personal risk tolerated than social benefit gained (coefficients of -1.26 against 0.94 in model 4.1) when accepting government use of data-driven technologies. This was an unexpected but highly significant result that warrants further research.

The methods through which government may influence levels of trust and acceptance on the part of citizens receive less robust support from these results, with no trust-enhancing measures achieving reliable significance across models. Despite the theoretical assumption that ‘account giving’ mechanisms should increase trust and therefore acceptance (Dubnick & Frederickson, 2011), the results showed no significant relationship to this effect across any iteration of the models. This may be because citizens are not fully aware of the presence of these accountability mechanisms, so their effects are not pronounced in the models. Further research could evaluate the dynamics, awareness and impact of individual accountability mechanisms rather than attempting to understand their systemic impacts.


Openness was analysed as a composite indicator that included aspects of participation, engagement and transparency. Individual indicators of engagement and transparency were also analysed to distil any specific effects. The democratisation of decision-making through participation and citizen engagement is expected to bring actions into closer line with public values and therefore make them more trusted and accepted (Carolan, 2006), whilst transparency was predicted to have a potentially dual role, in which it can both enhance trust by revealing relevant information about authorities (O’Neill, 2002) and diminish trust by creating a deluge of information that complicates and obfuscates what is relevant (Mabillard & Pasquier, 2016). As an individual variable within the models, engagement was found to be insignificant, and therefore alone it does not have an impact on trust and acceptance. Transparency was also found to be insignificant across models, whilst in one model it was found to have a negative association with trust in government – highlighting the potential trust-diminishing effects of transparency. In the final model, openness was found to have a positive and significant relationship with acceptance of government use of data-driven technologies, highlighting that engagement and transparency may have influential effects as part of a wider programme of government openness. Further research may be able to address the specific reasons for this.
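
A minimal sketch of how such a composite might be assembled is given below, continuing the hypothetical data frame from the earlier sketches. The sub-indicator column names and the equal-weight z-score aggregation are assumptions for illustration, not the construction actually used here.

    # Hypothetical construction of a composite openness indicator from
    # standardised sub-indicators covering participation, engagement and
    # transparency (column names illustrative).
    def zscore(series):
        return (series - series.mean()) / series.std()

    components = ["Participation", "Engagement", "Transparency"]
    df["Openness"] = sum(zscore(df[c]) for c in components) / len(components)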

Digital literacy was included as a potential confounding variable, used to analyse the alternative hypothesis that accountability and transparency were not deterministic of trust and acceptance, and that instead the deficit model – in which a lack of scientific knowledge impacts on the acceptance of risky technologies (Pavone et al., 2015) – was a main explanatory factor. In models that did not include the engagement, transparency or accountability variables, digital literacy was significant; however, in the final model this variable was insignificant and the alternative hypothesis was therefore rejected – potentially due to the presence of other variables netting out the impact of digital literacy.

Whilst the analysis was conducted using EU countries, the generalizability of the results extends to democratic countries with comparable data protection legislation that actively rely on public acceptance of their use of risky technologies. Examples would be Switzerland, Australia, Canada and Israel. This does not necessarily negate the generalizability of the underlying mechanisms for other cases; however, the reliance on public acceptance within the above models is taken as a concept that is only found reliably within democratic states.

The methodological constraints of this research relate to the availability of data and potentially to the general nature of some of the survey data. Further research would benefit from more specific survey questioning or interviewing, with particular evaluative attention paid to the efficacy of trust-building measures and the role of citizen trust in government. This could potentially be done within a single country that is advanced in its application of data-driven technologies in government and is actively working towards building trust in their use with citizens. An example would be the UK, which has a record of ministerial statements about the importance of retaining trust in the ethical use of AI in government (Hancock, 2018; Wright, 2018), a comprehensive digital technology strategy (DCMS, 2019), an ethical framework for the use of AI (DCMS, 2018) and an independent advisory board – the Centre for Data Ethics and Innovation – which actively evaluates government use of AI in line with social values (CDEI, 2019). This method would allow for greater isolation of the dynamics explored in this paper and therefore support or challenge the validity of these results, although the generalizability of results from such a study would be reduced.

Conclusion

This research aimed to answer the main research question of ‘how effective are democratic governments at building acceptance in the use of data-driven technologies?’. Based on quantitative analysis of survey data relating to acceptance, trust, risk and different governance measures, it can be concluded that trust in data-driven technologies has the most individually significant positive effect on the acceptance of government use of these technologies; trust in government, however, does not affect this relationship on its own, but instead operates through its interaction with trust in data-driven technologies. The results also indicate that individual risk is accepted as the price of social benefit when accepting government use of data-driven technologies.

The methods through which governments may influence levels of trust and acceptance on the part of citizens produced less clear results. Engagement with citizens was expected to bring government actions related to data-driven technologies into closer line with public values; however, engagement proved insignificant across all models.
