Academic year: 2021

Artificial intelligence in the recruitment

process, do we trust it?

University of Groningen Faculty of Economics and Business

MSc Business Administration: Change Management

Written by: Hylke van Gorkum

Date: 06-08-2020

Master’s Thesis BA Change Management

ABSTRACT

Existing literature on AI recruitment mainly focuses on benefits and drawbacks from the business perspective, while neglecting the user perspective. This study takes the user perspective in examining which perceived technological affordances and constraints exist when using AI in the recruitment process. Furthermore, the role of user trust in these affordances and constraints is examined.

For this purpose, an interview-based study was conducted among the customers and linked companies (applicants and employers) of Karreer.com – a provider of a mediating AI tool in the recruitment process – using fourteen semi-structured interviews, in order to identify the different perceived affordances and constraints that existed with the use of AI in the recruitment process, the users' goals, and the influence of user trust.

Perceived affordances were: more objectivity, a qualitative and sustainable match, increased efficiency, a lower barrier to applying, more variation in target groups, internal system usage, and deploying employer branding activities. A perceived constraint was the inability of the AI system to fully interpret social context and therefore to make a complete profile.

For these reasons, totally replacing a human recruiter is not yet possible, according to both user groups. This leads to a 'data paradox': employers do not yet want to invest much in AI because it cannot yet take over the entire process, while AI needs data to learn from. Furthermore, it was found that the user attitude trust played a positive role in perceived affordances: user trust was mainly high, which enabled the perception of affordances. This research extends the literature on AI recruitment, technological (perceived) affordances and constraints, and user trust.


1. INTRODUCTION

The use of artificial intelligence (AI) in all sorts of daily practices and business processes is becoming increasingly common. AI is "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaption" (Kaplan & Haenlein, 2018: p. 1). AI already takes over all sorts of tasks, and in the near future it will replace even more tasks that used to be performed by people. For instance, researchers predict that AI will be able to translate languages in 2024, write essays in 2026, drive a truck in 2027 and work as a surgeon in 2053 (Grace et al., 2017). Grace et al. (2017) state in their research that there is a fifty per cent chance that AI will outperform humans in all tasks within 45 years from now. Moreover, Barrat (2013) predicts AI to be our final invention, one that will eventually end the human era. As we can imagine, this has had and will have a huge impact on business processes and our daily lives.

An interesting business process that is and will be affected by the development of AI is the recruitment process: the HR industry has only been adopting AI in the past few years and is relatively late in doing so (Mathis, 2018). Therefore, the literature on the recruitment process and AI is still immature. The purpose of this process is gaining sustained competitive advantage, which is achieved through recruiting and retaining qualified employees and combining their talents better than competitors do. Trombin et al. (2018) state that the responsibility of the HR professional to source, screen, hire, onboard and prepare (the 'traditional recruitment and selection process') will change over time due to the implementation of artificial intelligence. Effectively combining HRM employees with IT infrastructures could result in a synergistic analytical tool for ordering big data – traditional, structured, demographical, and transactional data, as well as unstructured, sociographical, behavioural data – and gaining a competitive advantage out of it (Boxall, 1996; Erevelles et al., 2016; Verhoef et al., 2016). Nowadays, AI services are mostly provided by external, specialized companies. The aforementioned changes in the recruitment process will result in an increase in the efficiency of the process and a decrease in the amount of paper that is used, and imply that the human factor (and therefore human interaction between the recruiter and the recruited) is cancelled out more and more in this process (Bondarouk & Brewster, 2016; O'Donovan, 2019).


The implementation of AI in the recruitment process creates new and uncertain situations for the users. Within this uncertain situation, the user attitude 'trust in a system' is found to be crucial in changing perceptions and behaviours of an individual, and eventually in a system's success (Schurr & Ozanne, 1985; Chellappa & Pavlou, 2002; Bélanger & Carter, 2008; Gefen et al., 2003; Pavlou & Gefen, 2004). Hence, potential candidates need to trust the system to make the correct match and, moreover, entrust it with their personal data. At the same time, HRM employees and managers need to have confidence in the system's ability to make matches of the same (or better) quality than humans are capable of.

Research until now has mainly focused on improving the internal process of recruitment, for example through using technology to increase efficiency and to handle these amounts of data (Ruël et al., 2004). While this helps build arguments for businesses to choose AI tools in the recruitment process, it neglects potential barriers. Therefore, Van Esch et al. (2019) suggest future research should analyse what the use of AI in recruitment does to attitudes towards the hiring firms and to the likelihood of job application. As said, trust from both the potential applicant's and the business's perspective is required in order for the system to succeed. Because trusting a system is important in the adoption of new technology, more research is needed to explain whether trust in AI in the recruitment process exists and where it comes from, before AI can be utilized to its full potential in this process.

Following the gaps of user trust and affordances in AI recruitment, the lens of technological affordances is applied in this research in order to investigate potential actions and constraints. Markus and Silver (2008: p. 625) define functional affordances as: "the possibilities for goal-oriented action afforded to specified user groups by technical objects". As this research focuses on two specific user groups and a technical object (AI) that affords certain actions, this definition suits the research best. This lens provides a focus on the perceived possibilities with AI in recruitment, as well as the incorporated constraints. Moreover, it will explain how user trust positively or negatively affects affordances. As said, user trust is needed before an IT system can be actively used to its potential. For example, when applicants do not entrust a system with their personal data, their goal of being matched will not be achieved: the potential action (affordance) of being matched to a company requires personal and company data. Therefore, the research question that will be answered is: 'How does user trust influence the perceived affordances and constraints of using artificial intelligence in the recruitment process?'


This research will show whether the user trust of both applicants and the employees responsible for decision-making in the recruitment process enables or disables the affordances of an AI system in that process. For companies that provide or will use AI in this process, it will enrich their knowledge about considerations regarding the input, system aspects (not) to use, barriers to overcome (related to user trust), the output and its place in the process. Applicants are able to identify capabilities and (positive) outcomes of actively engaging with a system in a recruitment process, compared to traditional methods. It will help them decide whether or not to apply for a job using an AI system and which potential barriers exist before doing so.

Furthermore, it will extend the literature on AI recruitment, user attitudes, and (perceived) affordances and constraints. As said, little is written about the impact that user attitudes – of which user trust is one – have on the perceived affordances of AI systems. Hence, this research will contribute to the further understanding of user attitudes in the IT affordances literature.

The following section (2) contains the literature review, which explains the key concepts: the recruitment process with and without AI, user attitudes focused on trust, the affordances lens that is applied, and a conceptual framework. In the methods section (3), I explain how I conducted my research: what my setting was, which methods I used and why. Section (4) contains the results of the conducted research. Finally, section (5) is the discussion, in which I critically review my process, draw conclusions, compare my findings to prior literature and make suggestions for future research.

2. LITERATURE REVIEW

The recruitment process


The recruitment process consists of several steps aimed at bringing talent to the organization. These steps are brought together and visualized by Holm (2012) and, according to her, consist of: identifying applicants (preparing job descriptions and identifying a pool of competent applicants), attracting applicants (selecting recruitment sources, and preparing and placing the job announcement), processing incoming applications (receiving, sorting and registering incoming applications and pre-screening applicants), and communicating with applicants (informing them and arranging interviews). This process is visualized in Appendix A. It can be very time-consuming, which has led to several inventions in the past decade.

Electronic recruitment changed the traditional recruitment process. In the remainder of this literature review, electronic recruitment, E-recruitment and e-HRM are used interchangeably. E-recruitment is online, internet, or social recruiting, which involves more two-way communication between employees of companies and customers/candidates than was common in traditional recruitment strategies (Salmen, 2012). For instance, Parry and Tyson (2008) investigated the use of online recruitment methods in the United Kingdom. In a longitudinal survey and twenty interviews with both users and providers of these methods, they found that these online methods mainly led to an increase in efficiency, effectiveness, ease of use and a larger pool of candidates. Although the research was done in the early stage of the adoption of electronic recruitment, they showed that any organization could be successful with online recruitment when choosing and adopting an appropriate strategy. The use of online recruitment tools empowered individuals to generate and share data online. Through the utilization of E-recruitment, costs were saved, more candidates could be found, faster recruitment was possible, and hiring success rates increased (Holm, 2012). These benefits mostly occurred due to utilizing options to communicate with applicants and process incoming applications, together with attracting applicants (e.g. on a company's career website), as individuals were able to upload personal data and information.


Examples of purposes for which data are used are: specific job assignments (Lescreve, 1999), assignments of tasks (Isaias et al., 2010), assignment to teams (Rodrigues et al., 2005) or employees' assignment to processes (Ly et al., 2006). Shehu and Saeed (2016) developed an adaptive personnel selection model, based on the results of a case study at a federal university that selected academic applicants based on seven criteria: application posted (Y/N), previous organizational relationship (Y/N), academic qualifications obtained (Y/N), teaching and research experience, publications, honours and rewards, and credential screening. Another example of the utilization of AI in the recruitment process is provided by HireVue (2017), which compares interviewed applicants with talented employees of companies, based on body language, voice and expressions.
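To make the idea of such criteria-based selection concrete, a minimal sketch is given below. The criterion names follow the list above, but the hard-filter/weighted-score split and all weights are hypothetical illustrations, not Shehu and Saeed's actual model:

```python
# Hypothetical sketch of criteria-based applicant screening.
# The three Y/N criteria act as hard filters; the remaining criteria
# are combined into a weighted score. All weights are invented for
# illustration only.

BINARY_CRITERIA = ["application_posted", "previous_relationship",
                   "qualifications_obtained"]
SCORED_CRITERIA = {            # criterion -> hypothetical weight
    "experience": 0.3,
    "publications": 0.3,
    "honours_and_rewards": 0.2,
    "credential_screening": 0.2,
}

def screen(applicant):
    """Return a weighted score in [0, 1], or None if a Y/N filter fails."""
    if not all(applicant.get(c, False) for c in BINARY_CRITERIA):
        return None  # rejected by a hard Y/N criterion
    return sum(weight * applicant.get(criterion, 0.0)
               for criterion, weight in SCORED_CRITERIA.items())
```

Ranking the applicants that pass the hard filters by this score would then yield a shortlist; an adaptive model like Shehu and Saeed's would additionally learn or adjust the weights over time.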

Research has been done into the possibilities and benefits of using AI in recruitment. For instance, Leong (2018) points at the possibilities that arise in ordering the best candidates for recruiters, without having to search for, scan and read through incoming resumes. Furthermore, Fernández and Fernández (2019) state that bringing AI into the recruitment process will lead to more fairness, accuracy and inclusiveness in the process, because AI is intelligently programmed to avoid unconscious bias and to neglect primary sources of information like gender, age, race, and names (Upadhyay & Khandelwal, 2018). Upadhyay and Khandelwal (2018) see changes coming up in the recruitment process: the repetitive tasks previously carried out by humans can now be done by AI systems. Therefore, recruiters can focus on more long-term and strategic issues, rather than searching through resumes, et cetera. Albert (2019) reviewed the – at that time – current use of AI in the recruitment process in order to generate an overview of the range of applications and outcomes. The research comprised a literature review on potential AI-application areas and interviews with HR managers, academics and consultants. In total, eleven areas of application were discovered, with different outcomes and implications. A complete overview of these can be found in Appendix B. As can be seen there, the area in which the AI tool is specialized highly influences the outcomes. Many outcomes refer to a faster time to hire, a decrease in costs, a more complete candidate profile, less biased matches and more time for the recruiter to focus on more essential tasks (Albert, 2019). Apart from positive or helpful outcomes, problems were discovered as well. The next section will focus on challenges and problems.


Upadhyay and Khandelwal (2018) conclude that some activities of the recruitment process cannot be automated and therefore must be executed by HR professionals. These activities include negotiating, building rapport, and assessing cultural fit (Upadhyay & Khandelwal, 2018). When it comes to the adoption and consequences of implementing electronic HRM – to which electronic recruitment belongs – in organizations, people and their mindsets/attitudes are of great importance. Bondarouk et al. (2016) reviewed 69 articles (quantitative, qualitative and mixed-method papers) that presented empirical findings on the adoption and consequences of e-HRM over the past four decades. They found that the successful adoption of e-HRM features depends on technology-, organization- and people-related factors. What stood out is that the key to successful adoption of e-HRM resides within the category of 'people factors', while 'technology' and 'organizational' factors were found to be mostly necessary prerequisites. Bondarouk et al. (2016, p. 7) state that "the mindset within certain organizational cultures" made the difference. The next section explains the concept of user attitudes, which clarifies how mindset, attitudes and user trust play a role in using (new) technology.

User attitudes: trust

Users of new technology, such as AI in recruitment, search for – or gain – information about the technology, which results in certain attitudes towards it (Rogers, 1995). These attitudes, or perceptions, can eventually influence the behaviour of a user. The perception of affordances of websites and information systems (IS) used by organizations influences the intention of actions (Li et al., 2003; Van Vugt et al., 2006). Hence, how people perceive the website or IS brings about certain actions. User attitudes have been a common subject of research in the past decades, related to the acceptance of IS. In the psychology literature, an attitude is defined as "a psychological tendency that is expressed by evaluating a particular entity with some degree of favour or disfavour" (Eagly & Chaiken, 1993, p. 1). In practice, this means that whenever a person evaluates a situation, person, system, et cetera, that person will express a degree of favour or disfavour. The concept of attitudes was taken into the IS field. Kroenung and Eckhardt (2017) wrote a meta-analysis on what determines user attitudes towards information systems and identified two broad perspectives: intra-attitudinal and inter-attitudinal. The former focuses on the measurement and operationalization of attitudes, the latter on relationships between attitudes and other constructs.


Trust contributes to a positive user attitude towards an IT system and vice versa. Many other studies showed the relevance of trust for the acceptance of information systems (e.g. Van der Heijden et al., 2003; Gefen et al., 2003; Wang & Benbasat, 2005; Datta & Chatterjee, 2008). Researchers state that the reason for the importance of trust in the acceptance of new information systems can be found in the ability of trust to reduce social and technical complexity (Luhmann, 1979; Gefen, 2000; Lee & See, 2004).

Lankton et al. (2015) divide trust in a system into human-like trust and system-like trust. Human-like trust, according to them, consists of integrity, ability/competence and benevolence. Integrity means that the trustee adheres to principles that the trustor finds acceptable; ability, that the trustee has skills, competencies and characteristics; and benevolence is the trust that the trustee will want to do good to the trustor. Human-like interactions are often projected onto systems. System-like trust consists of reliability (consistency), functionality (capability, functions or features to implement a task), and helpfulness (adequate and responsive help for users) (Lankton et al., 2015). Wünderlich et al. (2012) found that beliefs such as having control, and the provider of a technological service being trustworthy and collaborative, are more important in gaining user acceptance than features of the technology itself. In terms of Lankton et al.'s (2015) research, this would mean that human-like trust is of greater importance.

Technological affordances


Gibson (1986) introduced the concept of affordances, referring to the action possibilities that an environment or object offers an actor (for instance, that a door handle opens and closes that door). Later studies applied the concept of affordances to other artefacts and fields of study, including technology, which is relevant for this study.

Norman (1988) was the first to introduce affordances into applied sciences and technology, when projecting it onto human-computer interaction. He wanted to realize visual affordances in the design of interfaces, meaning that people would easily understand which function an IT interface had, and eventually what they could do with it. Norman's (1988) study inspired many other authors to take affordances into applied sciences and technology. Hutchby (2001), who was also among the first to apply the concept of affordances to the field of technology, pointed at the interface between the aims of a human and the affordances an artefact (e.g. a system) offers. Through his critique of two lenses (social constructivist consensus and objectivism), he argued that affordances of an artefact are not things which impose themselves upon humans' actions with, around or via that artefact, but they do set limits on what it is possible to do with, around or via the artefact (Hutchby, 2001). In other words, affordances enable and restrict actions, and do not occur without actions. Pozzi et al. (2014) built on the ingredients of Gibson (1986) and Norman (1988) and referred to a technological affordance as "a relationship between an actor and an artefact, it is relative to the action capabilities of the actor, and reflects possible actions on the artefact itself" (p. 2).

Capabilities of the actor are necessary for an action to occur, and actions in themselves can lead to a possibly desired outcome. Zammuto et al. (2007) describe affordances as goal-oriented actions. They mention that technological affordances on the one hand, and the systems firms possess on the other, are ingredients for 'functional affordances', described as "the mutuality of actor intentions and technology capabilities that provide the potential for a particular action" (Majchrzak et al., 2013: p. 39). Functional affordances comprise possibilities that arise with material properties in information systems, meaning that a functional affordance brings about output (Markus & Silver, 2008). In other words, the features that IT systems possess make actions possible, which can eventually lead to a (desired) outcome.


Leonardi (2013), for example, showed that when engineers coordinated their actions in sharing their outputs, it allowed them to talk about the analysis with each other, leading to organizational change.

Drawing on the outcomes of Leonardi (2013), among others, Balci et al. (2014) identified affordances of different IT systems in the airline check-in process. Using the affordance lens, they investigated characteristics of affordances, in order to sort out the roles of affordances in users' understanding of, and intention to use, systems. Through in-depth interviews they found that when participants understand and perceive a function of a system, it has a positive effect on the symbolic expressions of the object (the relations between technical objects and users). Moreover, they found that the interaction with a system depends on an individual's background, age and experience with IT systems.

In line with the previous empirical studies, in this study I apply the affordances lens in order to identify the affordances and constraints of using an AI system in recruitment. In this case, both the employer and the applicant are active users of the system, although they may follow different steps to reach the predetermined goal the AI system can realize: arranging the best candidate-job fit (see the next section).

To realize this goal, AI is dependent on an actor that creates a set of parameters before the actual candidate interacts with the system (Jennings et al., 1998). This means that systems are created by actors, who might play a role in the perceived affordances of a system. One of the AI affordances used in recruitment nowadays is information extraction: finding structured information automatically in (semi-)structured text (Sarawagi, 2008). This way, for instance, résumés can be scanned for certain specific parameters, which AI can extract. It will be interesting to see which affordances of AI are used by the target company of this research. The methodology section focuses on the recruitment process, the potential role AI plays in it, and explains the linkage with perceived technological affordances.
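As a minimal illustration of this information-extraction affordance (the field names and regular expressions below are hypothetical examples, not Karreer.com's actual parameters or a full extraction system in the sense of Sarawagi, 2008):

```python
import re

# Hypothetical sketch: extract a few structured fields from
# semi-structured résumé text using regular expressions.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\+?\d[\d \-]{7,}\d",
    "years_experience": r"\d+\+?\s+years",
}

def extract(resume_text):
    """Return the first match per field, or None if the field is absent."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, resume_text, flags=re.IGNORECASE)
        found[field] = match.group(0) if match else None
    return found
```

Real AI-based extractors use statistical or learned models rather than fixed patterns; the sketch only illustrates how structured parameters can be pulled from semi-structured text.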

Conceptual framework


Based on this conceptual framework, the research question 'How does user trust influence the perceived affordances and constraints of using artificial intelligence in the recruitment process?' is answered. I conducted interviews on user trust, the use of AI in recruitment and its affordances; the process is explained in the next methodology section.

3. METHODOLOGY

Research approach: interview-based study

A qualitative interview-based study was deployed in order to provide answers to my research question. Yin (2012, p. 5) suggests a qualitative, interview-based study when the research question is "descriptive" or "explanatory" – in other words, when the question is about what, why or how things happen. In the case of my research, it is especially useful to ask people about their experiences with, thoughts on and motives for using the AI system in recruitment; in other words, to get a deeper understanding of this case. Edmondson and McManus (2007) suggest a methodological fit to optimize the way answers to the research question are collected. This research is conducted in a relatively new field of interest, with new phenomena, based on prior – already existing – theories. Edmondson and McManus (2007) and Van Aken et al. (2012) recommend the use of qualitative research methods when exploring a new field of theory or a new business phenomenon. Moreover, choosing a single-case design is appropriate according to Yin's (2009, pp. 47-48) rationales for single-case designs: a single case can test the theory because it is well specified, and the research has the goal to extend the theory. The case of Karreer.com is a representative case, as will be explained in the research setting section.


Research setting

The research was conducted at Karreer.com, which has created a mediating AI tool with the purpose of reinventing the recruitment process. The firm and its platform try to 'bring back the fun' in the recruitment process and 'meet the needs of the candidate' (Karreer.com, 2020). By gathering data from both customers (potential candidates for positions at connected firms) and connected firms, Karreer's algorithm will create a match. These data consist of hobbies, the outcomes of tests on logical thinking, funny dilemmas, skills, et cetera (Karreer.com, 2020). This way, the algorithm connects clouds of data in order to create a match. Whenever a customer signs up on the platform, incentives are given for completing certain steps of the process: completing your profile, ending up on a company's shortlist, or actually having a job interview will grant you a fee, starting from a virtual negative deposit, without a maximum number of tries. The company is located in Heerenveen, The Netherlands, and has a database of 15,500+ people and sixty interested companies. However, due to long implementation phases on the side of the companies, the AI algorithm is not yet able to match the two user groups. Therefore, this research focuses on the perception of affordances and possibilities that will arise, instead of realized affordances.
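Karreer.com's actual algorithm is proprietary, but purely as an illustration of how such 'clouds of data' might be connected, one could represent both a candidate and a vacancy as scores on shared attributes and rank vacancies by similarity. The attribute names, scores and the use of cosine similarity below are invented for illustration, not Karreer.com's method:

```python
from math import sqrt

# Illustrative only: match a candidate profile to the most similar
# company profile using cosine similarity over shared attributes.
# Attribute names and scores are invented, not Karreer.com's data.

def cosine(a, b, keys):
    """Cosine similarity between two profiles over the given attributes."""
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = sqrt(sum(a[k] ** 2 for k in keys))
    norm_b = sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(candidate, vacancies):
    """Return the name of the vacancy profile most similar to the candidate."""
    keys = list(candidate)
    return max(vacancies, key=lambda name: cosine(candidate, vacancies[name], keys))
```

A production system would learn from interaction data rather than compare static scores, which is exactly why the implementation phases and the lack of matching data mentioned above matter.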

Data collection

Answering my research question required two populations, evenly divided over the fourteen interviews that took place: seven applicants and seven employers. Because the anonymity of applicants is important, they were asked to sign up to the platform before the interview took place. After the interview they were assigned a code (Ap1-7) in order to guarantee their anonymity. The group of employers consists of decision makers in the process of recruitment and/or AI at the companies PWC, KLM, Noordhoff, Gasunie, Gemeente Noordoostpolder, Transcom and DUO. Again, anonymity is of paramount importance, which is why they were assigned a code (Em1-7) as well.


Applicants were selected from the database of Karreer.com and from people who had recent experiences with the system. Employers were selected on the basis of being familiar with the system, the size of the company (the higher the number of employees, the higher the potential need for AI in the recruitment process), or recent experiences with, or presentations about, the system. The aforementioned choices could lead to respondents feeling inhibited from talking freely and could make it more difficult for them to refuse participation (Given, 2008).

To combat biases and enhance reliability as much as possible (Yin, 2009), I made my steps operational and transparent. Because it is a single-case, interview-based study in which anonymity was of great importance, interorganizational partnerships (Yin, 2009) between Karreer.com and several organizations existed, as did relationships between applicants and the interviewer. In order to overcome potential biases, I introduced myself as an independent researcher and respected the privacy of the user groups. I encouraged them to speak openly, also by providing anonymity via the use of codes and assuring them that the data would only be used for the purpose of this research. Moreover, collecting data from different populations and companies, which complicated coding because of slightly different interviews and questions, decreased my bias as a researcher (Turner, 2010).

Furthermore, a standardized, similar set of open-ended questions was used for both user groups (Yin, 2012). Probing questions allowed me to ask about the goals of using the platform, trust in the platform, and the affordances and constraints the user groups perceived. Example questions were: what will be the goals of using the system; (how) do you believe the platform will help meet your goals; which impact will the implementation of an AI tool have on the traditional recruitment process; and which role did trust (in the tool or in Karreer and its employees) play in the process of signing up and answering the questions? The semi-structured interviews were conducted digitally due to the corona crisis (e.g. via Skype or Google Meet) and resulted in recorded files, which I fully transcribed. The interview transcripts can be found in Appendices G and H.

Data analysis


In the first round of analysis, I searched for frequently mentioned themes, which were bundled into subgroups (for example, being recruited based on your personality) (Strauss & Corbin, 1990). Hereafter, I marked them with different colours and gave them a title in the codebook. The codes that originated from this step are 'goals', 'trust in the platform', 'perceived affordances' and 'perceived constraints'. After this analysis, I provided the subgroups with an explanation and an example quotation, to clarify the meaning and objectify the process.

In the second round of analysis, I again searched for overlapping themes, without considering the quotations that were already marked. During this round, both the positively and negatively perceived ease of use of the system were bundled into a category, as appropriate and less appropriate design components were mentioned frequently. This was a result of the question of how the user groups perceived the experience of using the platform. Again, I provided these subgroups with explanations and example quotations.

I did the data collection and data analysis iteratively (Turner, 2010), meaning that the interviews were transcribed and analysed right after they were conducted. This allowed me to slightly adjust my questions and enhance reliability, depending on the answers I received. For example, I decided to go through all of the steps of the process, from signing up until receiving results, with the interviewee and asked them what this did to their trust in the platform. After coding the first set of interviews, I realized that slightly adjusting the questions in this way resulted in a richer data set.

After the coding process, I compared the findings of the two user groups. This was done through a within-group analysis and a cross-group analysis (Yin, 2012). The within-group analysis compared results within each user group separately (e.g. applicant with applicant), while the cross-group analysis compared the different user groups to each other (applicant and employer). This process provides the reader with a clear overview of topics and relevant quotations for answering my research question, 'How does user trust influence the perceived affordances and constraints of using artificial intelligence in the recruitment process?', in the upcoming findings and discussion sections.

4. FINDINGS


Goals - applicants

The seven applicants had very similar goals in using the platform of Karreer.com. The most mentioned goal was becoming recruited, especially based on the personality of the applicant. This user group mentions that they like being matched to a company based on personality, rather than purely on skills and experience. For instance, Ap3 reacted positively to the opportunity of being matched based on personality: "Yes, indeed the personality part of it really attracted me. [...] Your personality and values are being matched [...], instead of working experience. I do not have that much working experience, so how can I be matched? I think it is more important to look at your personality, values and your soft skills, they say much more about you." This user group varies in goals, but ultimately all of them are willing to receive potential matches.

The second most mentioned goal relates to comparing the acquired test results to previous results, in order to see whether there are any overlaps or, even more interestingly, any new insights. Many of the applicants had heard of the platform and knew about its content through word of mouth or reading about it, which is why they were aware that the tests result in a profile. The profile is based on several personality tests, whose results this user group wants to acquire. Ap5 mentioned: “One goal I certainly had was to acquire the personality test results, in order to compare them to others, to see whether it was alike or not.”

The aforementioned goals were shared among almost every participant. Remarkably, just one applicant had a goal related to earning money by making a profile. Ap7 mentioned: “It was not so much a driver, I did not think I was going to make a big amount of money. On the other hand, I was curious about what my profile’s value was.” The other applicants did not mention money at all, other than to say they did not understand this feature of the system: “I did not understand [...] there is something with money that I did not understand” (Ap2). Applicants found the feature confusing and said it would rather distract them from using the system properly than serve as a goal in itself.

Goals - employers

The employers’ foremost goal was finding suitable candidates for their companies in both a more effective and a more efficient way, as Em6 mentioned: “I think it has two sides [...] creating better matches. [...] and efficiency as well. Because this leads to automation.”


The user group mentioned employer branding goals as well, as Em7 said: “Through being able to present yourself as employer, the candidate knows who you are. [...] In a sense, it will help you generate a certain leading position in favour of your competitors.”

Other goals for using the system were more internally oriented. For instance, Em2 would use the tool to identify “which types are present within my company”, and Em5 has the goal of creating a “better vision on who the candidate is, as well as making a team profile, in order to see what you need when you have an open position.” These inside-out goals complement the goal of recruiting the most suitable applicant and, in addition, provide data for development: “The use of the tool and its results are a very great way of getting into conversation with each other. [...] We could utilize this tool in ‘the good conversation {meaning a formal conversation with the supervisor, e.g. around development, performance, etc.}’” (Em5). Both external and internal goals are mentioned by this user group.

Goals - comparison

With regard to the number of goals, employers showed more variation, although their goals all ultimately addressed effectiveness and efficiency. Both user groups share the goal of recruiting or being recruited: they want to use the platform to create a match based on personality and company culture.

Trust in the platform - applicants

Most of the applicants trust the platform in meeting their pre-set goals. As described, most of the goals concerned being recruited and acquiring the test results. Looking at the goal of acquiring the test results, applicants generally trust the platform to generate a profile of them: “What was in the results was no surprise for me. [...] It proves that this platform is truthful” (Ap4). They recognize themselves in the created profile and see which questions led to the outcomes. However, two main suggestions were brought up to improve the validity of the test. Firstly, a so-called ‘go back’ button would enable applicants to correct faulty answers. Secondly, using fewer double negatives in framing the questions would enhance the understandability of the test. When talking about trust in the system, applicants immediately switched to the terms privacy and anonymity: “My trust was high. It was anonymous so that makes a difference” (Ap6). This also lowered the barrier to entrust the system with personal data: “I filled in my personal data truthfully, yes. That was because of what I read; it was anonymous and how my privacy was important to them” (Ap4).


What stood out in this section about trust was that no one refused to hand over their personal data to the platform. Reasons for this related to the type of data that needed to be filled in (e.g. no bank account numbers or ID numbers), texts on the website, how professional it looked, and trust in the persons and company behind the system.

In answer to the question ‘when talking about trust, did your trust focus on Karreer and its people or on the system?’, the majority of applicants opted for both. “Yes, both. I think they are inseparable, which is why I think it should be both proper/correct” (Ap1). However, Ap3 acknowledges that ultimately the people are responsible for how the system works: “On the other hand, Karreer makes the algorithm. So yes. [...] No, ultimately Karreer, because they made the platform, that is where it all starts.”

Trust in the platform - employers

Employers’ answers with regard to trust in the platform varied widely, especially when it comes to trusting the system to accomplish their pre-set main goals of finding candidates (more) effectively and efficiently. The answers ranged from yes to no, with answers in between. Employers that trust the platform of Karreer.com to accomplish their goals mainly point to the validity of the measurements and the story behind the data: “I have the trust in that it works. [...] We tested it and spoke about the results in two teams. People recognize themselves and others in the results [...] It is clearly explained by the owner, so I have the trust that it is done properly” (Em5).

Employers that doubt whether the platform can meet their goals are interested in the near future and see the potential, but are not yet willing to fully commit to using the platform. “It has the potential, yes and no. [...] if everything went well by nature, AI would not be in the news frequently. And there would not be so much discussion about trust in AI systems. [...] I see a lot of systems with many mistakes [...] but it has the potential” (Em6). The employers that did not trust the system to accomplish their goals missed crucial data entries, such as education and experience, or found it too superficial.

When it comes to the aim of trust, almost every employer agrees with the applicants that Karreer and its people are inseparable from the system. Ultimately you will have contact with account managers of Karreer who will determine whether you trust the business, but when the system does not work, trust is nowhere to be found. “The product has


Trust in the platform - comparison

Regarding trust in the platform, the two main drivers for applicants are privacy and anonymity. They did not hesitate to fill in their personal data, because they believed that Karreer.com would handle their data with care. For employers, anonymity and privacy are logical steps in certain processes and were therefore barely mentioned in the interviews. For this user group, trust in the platform mainly comes from the people and the story behind the platform.

Perceived affordances - applicants

The applicant user group perceives more objectivity due to the use of AI in the recruitment process. For instance, Ap6 says: “A computer does not make mistakes, people make mistakes in for example not asking the right questions in order to profile yourself the way you wanted.” According to the applicants, besides avoiding mistakes, objectivity concerns how the AI system treats every applicant in the same manner. In other words, the setting does not trigger different behaviours, which could lead to more objectivity and trustworthiness: “Because, now you are just filling in question after question, without the influence of a recruiter. I think that might have an influence on your answers” (Ap4). Also, Ap6 states that a recruiter might have predetermined goals and biases that can play a role in whether or not you will be hired, which the AI system does not have: “It could be that the recruiter is someone you know who does not allow you the function. This way {with the use of AI}, someone else, a computer does it for you.”

Closely related to the perceived affordance of more objectivity is the perception of a more sustainable, qualitative match. Reasons for this are mainly found in the question set the system provides, compared to a traditional recruitment process. “[...] I think it is way more important that your personality and values, things you find important, your soft skills... They say a lot more about someone. Those are more important than a resumé” (Ap3). By creating a match based on other, more relevant variables, Ap3 believes that a match will be of higher quality and more sustainable. Ap7 agrees and points to the ‘match factor’ which could theoretically be housed in the AI system: “You could theoretically say that... When you capture that ‘match factor’ in certain profiles and you know whether you should send someone to a company or not, you can totally hand that over to the system.”


Applicants also perceive an efficiency affordance. The steps of a traditional application are known to almost every applicant: identify a vacancy, write an application letter and include the resumé, perform a test and/or have a job interview. Applicants agree that Karreer is capable of taking over the first step of finding a potential match: “It will change something in ‘pool recruitment’, that people call you for something, or you have to search yourself. So the process of finding a match, this system is particularly suitable for that. [...] It replaces that you are searching for vacancies all the time, because you are being linked to them. [...] Therefore it will replace the first few steps” (Ap5). This will lead to more efficiency in the process, according to the majority of applicants. Ap1 perceives the affordance of being matched to companies that he/she did not think of before: “Yes when you are matched based on your profile, you might be matched to companies which you did not think of before, but apparently suit you. [...] Function not per se, as that is something that you cannot easily deviate from.”

Finally, one applicant perceives that the system affords a lower barrier to applying. This affordance originates from the fact that the applicant does not have to pass physical barriers in order to have a conversation with a recruiter, but can simply fill in a question set: “You don’t have to pass physical barriers, go anywhere, have to dress up, have to call” (Ap2).

Perceived affordances - employers

Employees responsible for decision-making in the different recruitment processes within the companies also perceived objectivity as an affordance of AI in recruitment. The decision to hire someone is perceived as not being influenced by external factors or biases: “Less reliant on the impression of a recruiter and a hiring manager. [...] When you base it on conversations only, it is reliant on if someone presents himself well, yes or no. That does not always display your true value, I believe” (Em4). For this company, the affordance applies to external vacancies as well as internal vacancies, where more objectivity is desired. The filtering out of socially desirable answers added to the perceived objectivity of the AI system in the process: “[...] a judgement by someone is always subjective, because you are yourself when doing so. [...] I think it can give you a broader, more objective view, a first analysis, where socially desirable answers are filtered out, I believe. That is because recruiters would adjust their questions to the answer an applicant gives.”


Current assessments measure aspects such as IQ and are mostly outsourced, and a few employers make use of an employment agency in the preselection part. Compared to those current practices, employers perceive the affordance of a more sustainable, qualitative match when making use of the AI system. “In general I believe that this system and the technology... It can predict whether somebody is going to be a good or bad employee. In theory I even believe that the recruiter [...] can be replaced by machine learning” (Em2). Criteria for a qualitative and sustainable match differ from company to company, but all employers build on the system’s perceived affordance of creating a match based on data, in order to match the profile of a candidate to the culture of the company: “I absolutely believe that you can build an algorithm that is able to profile your customers, your candidates, and to make matches. Absolutely” (Em6). Em7 states that the system affords its users a better view of both the candidate and the company (from both perspectives), which will eventually lead to the desired outcomes: “Eventually it might be that the company does not fit your ambitions. The candidate seeks for a more competitive environment, for example. [...] It is not only for the company [...] also for the candidates.” Additionally, the majority of the employer user group sees that AI can replace everything in the recruitment process except for the final steps (see the section on constraints).

Building on the last argument, the employers had the goal of, and perceived the affordance of, presenting themselves through the use of the AI system. For example, Em4 says: “Yes, as a company, you can profile yourself in a positive way. [...] We are profiling ourselves externally with the importance of vitality, movement on the external and internal market. [...] It should be in your whole process, which the system can contribute to” (Em4). According to Em7, it could even help companies gain a competitive advantage and contribute to the employer branding of a company: “In that case, you can get ahead of certain other companies. This is not a one day fly. [...] AI will be applied more and more in the future.”

Another affordance perceived by the employers is that the system enables them to attract specific target groups of employees. This argument was accompanied by suggestions on how the system should be implemented in the company. One solution could be to focus on a particular group of employees, to see if and how it works: “It is applicable on the IT-world. [...] People working in the IT might handle the system better than HR people, for instance. [...] Another option is to take several employee groups, do many pilots at the same time, and compare how they will react” (Em7). Also, employers foresee that younger people might be


not boring, has a good lay-out. Excellent” (Em5). And “what I said earlier, a certain target group, I think. Mainly the younger people who would use a certain tool [...]” (Em3).

The recruitment process can be shortened by the AI system because of its capabilities. The system is capable of making a preselection, so recruiters can spend their time on a smaller, already preselected group of suitable applicants. Employers point to efficiency both on the applicant side (they do not have to call, write letters, et cetera) and on the side of the company: “Efficiency is achieved throughout the whole process. Partly at the applicant, the time it takes to deliver data, and the way Karreer acquires data” (Em6). Time could be wasted on a match that is not suitable, which the system is able to prevent: “When all the interviews I have will not result in the desired outcome, my and the candidates’ time is wasted” (Em6). Em7 agrees: “It saves the applicants time, and the company. For example, an intake procedure with conversations before applying, checking resumés... Before you are in an actual conversation, you can profit a lot here.”

In line with these affordances, employers see that an AI system affords a lower barrier for applicants to apply and enhances the ease of applying. It will improve the experience of applying for a job: “[...] maybe usability, because applicants are generally not very eager to make resumés. At least, it is not something that people do for fun. So by doing this effectively and playfully by asking a set of questions, which is quick, I can imagine that the user experience does play a role in this as well” (Em6).

Lastly, employers perceive the affordance of using the system more internally. This means that they could use the AI system on their current teams and members, in order to see how teams function and whether they match the company’s values and culture: “When you look at the development of a department, you can do a representative sample to see what is needed for their development. [...] You can determine what you have in your organization [...] and the results are excellent material for discussions around performance and development” (Em5).

Perceived affordances - comparison


Both user groups perceive that AI can take over large parts of recruitment processes, without replacing the final step. This step consists of a conversation with an actual employee of a firm (social contact, working conditions). Lastly, both user groups agreed on a perceived lower barrier in the process of applying. Applicants mostly point to the fact that you do not have to pass physical barriers, while employers see improvements in the usability of AI in the recruitment process.

Applicants perceived an affordance in being matched with different kinds of companies by using AI; matching based on the applicant’s profile and the company’s culture enables this. Employers, on the other hand, see an affordance in targeting different kinds of applicant groups, because this match will be created without the applicant or recruiter actively searching for a suitable match. Furthermore, the employers, compared to the applicants, saw more affordances in the business process. They perceived an affordance in deploying employer branding activities with AI, due to information that can be retrieved in the system about the company, or when a match exists. They see a changing role for employer branding in that sense. Lastly, with AI, employers perceive the affordance of using it internally. The test results provide a good starting point for conversations between managers and teams, departments, and/or individuals around performance and development.

Perceived constraints - applicants

As already partially described in the previous section, applicants perceived a constraint of AI regarding social contact with an actual employee of a company. The majority of them still described the final step of the recruitment process as a conversation with an (HR) employee of a firm: “Look, this is not just it, right? You have to get into conversation with each other at some point. [...] No, it will not replace the entire recruitment process” (Ap1).

Applicants doubt whether AI is able to make interpretations based on the input that fully replace human interpretation: “My first reasoning would be that figures will be matched to figures of somebody that engages with the system, with figures of a company. I doubt what the margin would be for a match this way. [...] It misses how a person expresses himself in a social manner” (Ap4). Reading the context surrounding an applicant, and interpreting the results and the context in which certain answers are given, are found to be crucial in order to create a match with a company’s culture: “The last part, I think that you miss a certain context. [...] You can


In line with the previous constraint, but slightly different, the system is perceived as not being able to create a complete profile. As described, applicants trust the system in achieving their goal of becoming matched to a suitable company, but they see parts missing that are crucial for a fully valid matching system: “It was fun to do, but I am not sure whether the results that came out of the test are super valid and describe me exactly as who I am. I think you should have to incorporate more than just a game. Skills, that kind of measurements” (Ap5). Ap6 agrees: “I think that the information is too minor to profile someone perfectly.”

Perceived constraints - employers

Employers agree on the perceived constraint that AI in the process misses a certain social context. Not only do applicants have a need for “social contact”, according to Em2; Em3 also points to the system’s inability to measure criteria found to be critical, such as “ambiance”, and to perfectly determine a “match on a personal level”. Em4 agrees and adds: “I would want to look someone in his eyes and ask him a few questions myself. Just to see how he will react. [...] I would preferably test that myself.” According to Em5, the system stops at delivering results, but misses out on interpreting and converting the results into an actual match: “I think that the system can make an estimate of results, but you cannot dig deeper into certain aspects. You are ‘yellow’. Point. But why, and how so, and where does it originate from?” Therefore, employers do not believe that recruiters will be totally replaced in the near future. Em6 believes it will take time before the system succeeds in interpreting results and takes the next step of replacing a human: “I do not think that we – human beings – will be replaced. [...] Some aspects are not covered by the system yet. Maybe we might think differently in twenty years, but for now they can just advise you, take away certain tasks in order to help you focus on other things.” Em7 concludes: “On the short- and midterm range, I do not foresee that the system will replace recruiters.”

Again, in line with this previous constraint, employers perceive a constraint in the system’s ability to completely profile an applicant and match him or her to a company and function. Em1 says: “I do not believe that the system is able to screen me, or which competencies I have, at which level I am and if it is suitable on a specific function. [...] At the moment, the system does not contain enough data about our colleagues, the teams and the content of a function.” This argument is supported and supplemented by Em4: “For these big companies it is hard, because


in turn could endanger the outcome of realizing a perfect match: “AI, a process in which you

retrieve an insight from data. So many aspects have to go well. You have to select the right data, the data has to be reliable, [...] relevant. [...] I do not believe in unbiased data. Data means bias, because it exists for a certain reason, through a certain lens."

Perceived constraints - comparison

Both user groups perceived similar constraints with the use of AI in the recruitment process and do not believe that certain human capabilities will be replaced by such a system in the near future. The system misses out on social context and cannot measure crucial aspects such as looking someone in the eye, digging deeper, interpreting the results and asking follow-up questions. Entrusting the complete matching to the system on a personal level is therefore a bridge too far for now, both user groups believe.

5. DISCUSSION

Theoretical implication of findings

The representation of a traditional recruitment process (Holm, 2012) (also see appendix A) proved consistent within this research, with a few additions. Both applicants and employers describe the traditional (or current) recruitment process they are familiar with consistently with Holm’s (2012) representation. Some employers incorporated assessments or tests (also recognized by applicants), which enriched the process with extra tasks and subtasks. Expectations of the impact AI will have on the recruitment process vary considerably among the participants and, according to them, depend on different variables. As Mathis (2018) stated, AI has growing support among HR professionals. The employer user group underlined this statement, as they saw using AI in recruitment as a future direction that many companies will turn to (and already are turning to). Also, both user groups perceived great ease of use, which confirms Mathis’ (2018) statement that AI enables a user-friendly experience for both employers and applicants. The next section of the discussion focuses on the perceived affordances, followed by the constraints, compared to the possibilities and benefits/challenges that research indicated in past years (theoretical implications).


With regard to the test results, the user trust of both user groups was high. It increased the trust in the validity, objectivity and effectiveness of the system. This, in turn, had a positive influence on the perceived affordance of creating an objective match. Moreover, by cancelling out the subjectivity of the recruiter and removing physical barriers, (the way to) a match is perceived as more efficient, objective, sustainable and qualitative. The aforementioned affordances relate to Fernández and Fernández’s (2019) statement that AI in the recruitment process will lead to fairness, inclusiveness and accuracy, and to AI’s capability of selecting and ordering the best possible matches for recruiters (Leong, 2018) in less time, generating the possibility for recruiters to focus on other tasks (Albert, 2019). In general, both user groups perceived the affordance of ‘matching with AI’.

Another perceived affordance is the ability to reach a larger and more diverse candidate pool. Both applicants and employers perceived the affordance of creating different matches compared to the traditional recruitment process. The main reason for this perceived affordance was the ability to match the candidate to the company’s culture based on the candidate’s personality. Different target groups can be recruited, and different companies will be matched to applicants. Looking at previous literature, this affordance is very similar to the outcomes of Albert (2019) with regard to the utilization of AI in multi-database sourcing. This AI area offers an improved matching quality and an increased quantity of talent pools.

Lastly, employers perceived the affordance of deploying the AI system inside-out. While this mainly concerns the capability of this particular system to generate personality test results, it is still an affordance worth mentioning. It generates the possibility of using the test results for conversations around growth and teambuilding. Furthermore, using AI in the recruitment process will have a positive effect on a company’s (employer) brand. Here too, this study is consistent with existing literature: Albert (2019) mentions outcomes related to an improved employer brand and candidate experience.

Perceived constraints mostly concerned the human-interaction part of the AI system in the recruitment process. Neither user group sees AI totally replacing a human, because it misses out on social context and interpretation. This is in line with Upadhyay and Khandelwal’s (2018) conclusion that some activities of the recruitment process cannot be automated, because AI systems are not yet capable of interpreting social context and therefore cannot negotiate or assess a cultural fit.


Trust in the validity of the system positively affected the perceived affordances. Also, both user groups trusted that the goals they had could be achieved by using the platform of Karreer.com. This research shows that the user groups have system-like trust in the use of AI in the recruitment process, because the system is helpful, reliable and functional in achieving their main goals of increasing effectiveness and efficiency (Lankton et al., 2015). Human-like trust (Lankton et al., 2015) is present in the applicant user group as well, due to entrusting the system with personal data and trusting that the system handles it with integrity, ability and benevolence. Furthermore, the employers trust the system to increase their effectiveness and efficiency. This research implies that a positive user attitude will have a positive effect on the acceptance of the use of AI in the recruitment process, which underlines the literature on technology adoption (Bondarouk et al., 2016).

User trust had little effect on the perceived constraints, as both user groups entrusted the system with personal data and with achieving their goals. On the other hand, employers were cautious about implementing AI in their recruitment process, because there was no proof of concept yet, leading to a ‘data-paradox’: AI needs to learn from data in order to become better and generate matches, which requires prior investment of time, effort and money by both employers and applicants (especially time). Applicants did not hesitate to use the tool, while employers did not yet invest. As a result, the system had been unable to generate matches up to the date of writing.

There were differences with existing literature, and therefore this research extends several literature streams. As mentioned in the introduction, the existing literature focuses mostly on the perspective of organizations (Ruël et al., 2004; Albert, 2019; Leong, 2018). The affordance lens enabled me to extend this stream of research with the perceived affordances of different user groups, and thus with an individual as well as an organizational perspective. The perceived affordance lens took the subject of AI recruitment to a ‘user group’ level, which had not been done before. The answer to the research question, ‘How does user trust influence the perceived affordances and constraints of using artificial intelligence in the recruitment process?’, is that user trust positively influenced the perceived affordances, while it had little effect on the perceived constraints.


Recommendation for managers

This research shows that the user attitude trust is present within the two most important user groups of AI in recruitment: applicants and employers. While this is only one determinant of the success of AI in recruitment, it is an important one (Bondarouk et al., 2016). I would encourage managers to take a few steps towards adopting AI in the recruitment process. AI systems are designed to become better at what they do, but they do require input to learn (Kaplan & Haenlein, 2018). Cooperating with companies that already have the knowledge to implement AI applications, and providing them with data, will lead to a better product that benefits applicants and employers and to a more user-friendly experience for both (Mathis, 2018). Another recommendation, given the finding that user trust especially plays a role in the adoption phase of AI recruitment, would be to implement an AI tool for a specific target group, or in a specific division of the company. The investment and risk will be smaller, and as the recruitment process consists of many steps, involving several actors and combining multiple sorts of data, the system will advance in the particular task of recruiting people for that division, or with a specific background. Moreover, it will help develop trust and acceptance within the organization, as proof of concept will gradually emerge.

Limitations and future research

This research particularly focused on the user attitude trust. While this provides useful in-depth insights into one of the user attitudes, it also excludes others that may play a significant role. This offers a potential future research direction: including other user attitudes (e.g. Kroenung & Eckhardt’s (2017) other attitude determinants) in a similar set-up (e.g. qualitative, interview-based), or broadening the scope to multiple user attitudes that might affect perceived affordances and constraints.

In addition, this research relied solely on interview data. A future research setting could include other sources of data, for instance qualitative observations.

With the use of purposive sampling, the population was selected without probability sampling: participants were chosen based on several criteria (they knew the platform and had already had the opportunity to form an opinion about the subject and the platform). On the one hand, this provided me with more in-depth and specific knowledge; on the other hand, more objectivity could be achieved by using probability sampling and a larger sample size. As stated in the methodology section, purposive sampling could lead to respondents feeling inhibited from talking freely, or refusing participation. A future research setup could use a larger sample with a selection method that gives every member of the population an equal chance of selection, which could increase reliability.

Lastly, the research focused on perceived affordances and constraints and therefore excluded actualized affordances. The research setting did not allow me to investigate whether actualized affordances existed, mainly because the platform is not yet able to generate matches. Again, this offers an in-depth understanding of the users' perceptions, while leaving actualized affordances aside. In the future, when AI in recruitment becomes even more common and widely applied, actualized affordances will be an interesting subject of research.

Conclusion


REFERENCE LIST

Albert, E.T. (2019). AI in talent acquisition: a review of AI-applications used in recruitment and selection. Strategic HR Review, 18(5), 215-221.

Balci, B., Rosenkranz, C., & Schuhen, S. (2014). Identification of different affordances of information technology systems: an empirical study. In Proceedings of the European Conference on Information Systems. Tel Aviv, Israel: AIS.

Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. Thomas Dunne Books.

Bélanger, F., & Carter, L. (2008). Trust and risk in e-government adoption. The Journal of Strategic Information Systems, 17(2), 165-176.

Bondarouk, T., & Brewster, C. (2016). Conceptualizing the future of HRM and technology research. The International Journal of Human Resource Management, 27(21), 2652-2671.

Bondarouk, T., Parry, E., & Furtmueller, E. (2016). Electronic HRM: four decades of research on adoption and consequences. The International Journal of Human Resource Management, 28(1), 98-131.

Boxall, P. (1996). The strategic HRM debate and the resource-based view of the firm. Human Resource Management Journal, 6(3), 59-75.

Boxall, P., & Purcell, J. (2003). Strategy and Human Resource Management. London: Macmillan.

Chatterjee, S., & Datta, P. (2008). Examining inefficiencies and consumer uncertainty in e-commerce. Communications of the Association for Information Systems, 22.

Chellappa, R. K., & Pavlou, P. A. (2002). Perceived information security, financial liability and consumer trust in electronic commerce transactions. Logistics Information Management, 15(5/6), 358-368.

Chemero, A. (2003). An outline of a theory of affordances. Ecological Psychology, 15(2), 181-195.

Eagly, A., & Chaiken, S. (1993). The psychology of attitudes. Belmont, PA: Wadsworth.

Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. The Academy of Management Review, 32(4), 1155-1179.

Fernández, C., & Fernández, A. (2019). Ethical and legal implications of AI recruiting software. In ERCIM News, Special theme: Transparency in Algorithmic Decision Making, 22-23.

Gefen, D. (2000). E-commerce: the role of familiarity and trust. Omega, 28(6), 725-737.

Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51-90.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Lawrence Erlbaum Associates.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum Associates.

Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2017). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754.

HireVue. (2017). HireVue: Case study. Retrieved from https://cdn2.hubspot.net/hubfs/464889/Hilton%20Aug%202017/2017_12_SuccessStory_Hilton_CustomerMarketing3.pdf?__hstc=&__hssc=&hsCtaTracking=b76cefe9-ece-4631-bef5-53084aa900e5%7Ca26c5bca-5fbe-46e3-b0e4-c1f79894f72c (accessed 19-07-2020).

Holm, A. B. (2012). E-recruitment: Towards an ubiquitous recruitment process and candidate relationship management. German Journal of Human Resource Management, 26(3), 241-259.

Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35(2), 441-456.

Isaias, P., Casaca, C., & Pifano, S. (2010). Recommender systems for human resources task assignment. In 24th IEEE International Conference on Advanced Information Networking and Applications, 214-221.

Jennings, N. R., Sycara, K., & Wooldridge, M. (1998). A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems, 1(1), 7-38. doi:10.1023/a:1010090405266.

Kaplan, A., & Haenlein, M. (2018). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62, 15-25.
