
“Fearing the unknown or striving for innovation?”

Communication practitioners’ definition, perception of and dealing with the

implementation of Artificial Intelligence in their workplace in the context of

organizational change - an exploratory study

By

Lara Maria Weber 12808709

Master’s Thesis

Graduate School of Communication, Master's programme Communication Science

Dr. Pernill van der Rijt

26th of June 2020


Abstract

This study aims to provide insight into how communication practitioners define, perceive, and deal with the emergence of Artificial Intelligence (AI) in the context of organizational change. An extensive body of literature theorizes organizational change in general, but the present study is the first to examine the emergence of AI in the context of organizational change from a communications perspective. As communication is a critical tool for an organization's functioning, the present study claims that communication practitioners possess a unique duality regarding their role and the implementation of AI: On the one hand, it is the task of communication practitioners to deal with the new challenges AI poses to their daily work as AI-users. At the same time, as AI-communicators, they are managing the strategic communication, internally and externally, of what the implementation of AI means for various stakeholders of the organization. Therefore, a qualitative study was conducted that included 12 interviews with a diverse sample of communication practitioners between the ages of 28 and 53, from either Tech or Non-Tech companies. The findings of this study uncovered that communication practitioners can be classified into four types that define, perceive, and deal with AI in different ways: the skeptical AI-newcomer, the open-minded AI-newcomer, the rational AI-newcomer, and the passionate AI-expert. Moreover, these four types show that communication practitioners differ in their knowledge of, experience with, and attitude towards AI. The study demonstrates the existence of the duality and how essential it is to be aware of employees' diverse personalities, backgrounds, and needs when implementing organizational change.
Especially in times of digitalization and the emergence of Artificial Intelligence, sensitive and transparent communication is crucial for successfully implementing change, since some employees fear the unknown while others, motivated and open-minded, strive for innovation and accept change.

Keywords: artificial intelligence, technological organizational change, uncertainty management, fear of the unknown, qualitative research


“The only thing that is constant in life is change.” Heraclitus, 500 BC

Introduction

As digital and physical worlds are merging, we are now entering the fourth industrial revolution, which causes a fundamental change in the way we live, work, and interact with each other (Schwab, 2016; Zerfass et al., 2019). This new era in the evolution of humanity is made possible by extraordinary technological developments similar to those of the first, second, and third industrial revolutions (Schwab, 2016). During the first industrial revolution in the 18th century, when agricultural life became industrialized, the invention of the spinning machine and the steam engine changed the world permanently (Mathias & Davis, 1990). Today, the steam engine has long stopped being a game-changing innovation, since our world is fundamentally impacted by 'the internet of things' and new advanced technologies like Artificial Intelligence, Machine Learning, or Virtual Reality (Wisskirchen et al., 2017). Those innovations contribute to creating a human-centered future and make data the new currency of our society (Eggers, Hamill, & Ali, 2013). One of the key drivers in this revolution is the emergence of Artificial Intelligence (Schwab, 2016).

Due to the implementation of AI along with chatbots, language assistants, and content automatization, not only private life but also daily routines and established work patterns in organizations are transforming (Kolbjørnsrud, Amico, & Thomas, 2017). Not only organizations themselves but also their communications departments with their key functions are redefined by technological innovations like AI (Sterne, 2017). The need for research in this area is emphasized by the results of the European Communication Monitor 2019, which explored the knowledge and perceptions of AI among communication practitioners: The majority of communication practitioners believe that AI will influence the communications profession as a whole (macro-level). Every third practitioner expects a change in the way of working within their organization […] affected (Zerfass et al., 2019). The study also identified a phenomenon called the "AI divide" between generations: communication practitioners in their twenties seem to assess the future of AI less positively than older practitioners (Zerfass et al., 2019).

These fruitful findings emphasize the academic relevance of further investigating communication practitioners and the implementation of AI combined. Therefore, the first academic aim of this study lies in providing insight into the underlying emotions and perspectives of communication practitioners towards AI, since previous research lacked such in-depth explanations. A possible explanation is that individuals tend to fear the unknown (Carleton, 2016), and dealing with change can result in rejection because of uncertainty (Grupe & Nitschke, 2013) or acceptance because of the motivation to reduce this uncertainty (Brashers, 2006). Accordingly, the second academic aim is to determine whether AI is new or familiar to communication practitioners, whether they perceive the emergence of AI as an organizational change and, thus, reject or accept AI. With this study, the line of research is extended by bridging the gap between underlying emotions and ways of dealing with uncertainty in the context of AI and organizational change, which has not yet been examined in detail in this constellation before. It thereby contributes to communication science literature and serves as a first starting point through in-depth qualitative research.

Since nowadays almost all companies have to keep up with innovations to stay competitive (Jetter, Satzger, & Neus, 2009), the practical relevance of this study lies in giving recommendations for organizations' communications. As communication is a critical tool for an organization's functioning (Hargie, 2016), one can assume that communication practitioners possess a unique duality regarding their role and the implementation of AI: On the one hand, it is the task of communication practitioners to deal with the new challenges AI poses to their daily work as AI-users. At the same time, they are managing the strategic communication, internally and externally, of what the implementation of AI means for various stakeholders of the organization as AI-communicators. Therefore, the practical goal is to determine in which way this duality exists and to identify how to overcome communication practitioners' fear of the unknown to make them contribute successfully to the implementation of AI-change.

The guiding research question, therefore, unfolds as follows:

How do communication practitioners define Artificial Intelligence (AI), and how do they perceive and deal with the implementation of AI in their workplace in the context of organizational change?

Theoretical Background

The theoretical framework of this thesis starts with a discussion of the sensitizing concept 'Artificial Intelligence' and how AI might change the future of work. The following section will provide insight into the concept of organizational change and will deal, in particular, with technological organizational change in the workplace. It concludes with an assumption concerning the emergence of AI as a form of organizational change. Finally, both concepts will be related to the communications perspective, and predictions will be made on how AI, in the context of organizational change, might be perceived and dealt with by communication practitioners.

Artificial Intelligence - Definition and terminology

A powerful force and a key driver of the fourth industrial revolution is currently shaping our society and the future of work (Schwartz, Stockton, & Monahan, 2017): Artificial Intelligence (AI), a phenomenon everyone talks about, but not everyone is aware of what it is, what it contains, and what it can do (Warwick, 2012). Since the term "Artificial Intelligence" itself is accompanied by unclarity, confusion, and fuzziness, it causes ongoing discussions in research and literature about a proper and unifying definition (Wang, 2008; Aristodemou & Tietze, 2018; Kaplan & Haenlein, 2019; Wang, 2019). Despite these ongoing discussions about the limitations of AI, the literature agrees on "how human beings can be understood in their mental or cognitive ability, which is commonly called 'intelligence', and AI is the attempt to reproduce this ability in computer systems" (Wang, 2008, p. 3). Since the aim of this study is to look at AI from a communication science perspective, the European Communication Monitor 2019's definition of AI is adopted as the working definition for this thesis:

“AI comprises flexible decision-making processes and actions of software-driven agents. They adapt to changing goals and unpredictable situations, learn from experience, and are based on technologies like natural language processing, data retrieval, knowledge representation, semantic reasoning, and machine learning” (Zerfass et al., 2019, p. 70).

Artificial Intelligence - risk or opportunity for the future of work?

Artificial Intelligence has become a new trend in various areas like academics, economics, healthcare, automotive, and education over the last few years (Jarek & Mazurek, 2019). But the overall optimistic and enthusiastic image of AI as the solution to all of our problems is increasingly replaced by skepticism and uncertainty, especially among employees (West, 2018). AI seems promising to them as long as it is distant and not applied in their own company (Ernst, Merola, & Samaan, 2019), and employees start to question whether they can feel safe and comfortable in a future workspace marked by AI (Köse, 2018). As the number of factory workers is decreasing and some fields of work become fully automated, the primary fear of employees is the loss of their jobs (Wisskirchen et al., 2017; West, 2018; Ernst et al., 2019). Other studies, in turn, expect AI to not only eliminate jobs but also create new forms of employment (Smith & Anderson, 2014; Arntz, Gregory, & Zierahn, 2017; Fleming, 2019). With new jobs emerging from the adoption of AI, new skills are expected and have to be deployed by employees (Wisskirchen et al., 2017), causing many to fear that they might not be able to meet these new demands (West, 2018). Not only new skills but also considerable investments into new technologies are necessary for companies in order to keep up with the challenges of the fourth industrial revolution and the advances of competitors (Corporate Finance, 2015).

However, the emergence of AI can be seen not only as a risk but also as an opportunity. Employees have to do less manual or exhausting work, and even recurring and rather dreary tasks can be handled by intelligent systems (Wisskirchen et al., 2017). As a result, it could be rather beneficial for employees, as they have more free time that they can use for creative work or other tasks to improve their competencies (Makridakis, 2017). Intelligent machines do not only provide support functions, they also have life-saving capacities; robots, for instance, are able to work in danger zones where human life would be at risk (Dirican, 2015; Wisskirchen et al., 2017). Moreover, advances in AI have opened up various favorable possibilities in the public sector (Wirtz, Weyerer, & Geyer, 2019), but these are also highly influencing the private sector regarding process optimization (Ransbotham, Gerbert, Reeves, Kiron, & Spira, 2019), organizational decision making (Jarrahi, 2018), cost reduction, and increased efficiency (Wisskirchen et al., 2017; Ernst et al., 2019).

The emergence of Artificial Intelligence as a form of organizational change

An important topic in communication science research is 'organizational change' and the vital and essential role of communication during such change (Weick & Quinn, 1999; Elving, 2005; Lüscher & Lewis, 2008; Husain, 2013). 'Organizational change' can be understood as alterations of established work routines, strategies, and objectives that concern the whole organization (Herold, Fedor, Caldwell, & Liu, 2008).

In most cases, new technologies are important and powerful drivers of organizational change (Cascio & Montealegre, 2016; Gerwing, 2016), and those innovations are transforming industrial sectors and dynamics on the market and within organizations (Utterback, 1994). The organization's role is to accept those innovations by finding a balance between implementing new and established technologies (Utterback, 1994; Damanpour & Schneider, 2006). When talking about technological organizational change, one has to bear in mind the duality of technology: technology causes change and, at the same time, can make change in organizations possible, whereby Utterback defines it, in his words, as "[…] the creator and destroyer of industries and corporations" (Utterback, 1994, p. 6). Technological organizational change is mainly determined by innovations, with newness as a characteristic of every innovation (Damanpour & Schneider, 2006) […]

As the emergence of AI can be seen as a form of innovation comprising plenty of technologies and new challenges (Schwartz et al., 2017; West, 2018), the question arises whether the adoption and implementation of AI within organizations can be considered a form of (technological) organizational change. Ernst et al. (2019) see the rise of technological transformation based on advancements in AI. Mazzucato (2013) argues that AI will determine the direction of technological organizational change; however, at this stage, it is not possible to foresee the far-reaching consequences that the introduction of AI could have. Taking all of those findings into consideration, the current flood of transformations based on AI is expected to be the most significant and far-reaching technological change observed in recent decades, bringing significant challenges and opportunities to the future of work (Ernst et al., 2019).

The effects of technological organizational change on employees

Individuals tend to fear what they are unable to control, while always striving to reduce uncertainty (Carleton, 2016). This phenomenon can be explained as the fear of the unknown, defined as an "individual's propensity to experience fear caused by the perceived absence of information at any level of consciousness […]" (Carleton, 2016, p. 124). Change-related uncertainty can be perceived in various forms, as employees experience emotional uncertainty and cognitive uncertainty (Allen, Jimmieson, Bordia, & Irmer, 2007). In light of technological organizational change, research identified technostress as a consequence of an organization's adoption of information and communications technologies (ICT), resulting in negative individual cognitions and emotions (Ragu-Nathan, Tarafdar, Ragu-Nathan, & Tu, 2008; Tarafdar, Tu, & Ragu-Nathan, 2010; Shu, Tu, & Wang, 2011; Chandra, Shirish, & Srivastava, 2019). Technostress is stress induced by the incapability to cope with the demands of organizational computer usage and technological newness (Chandra et al., 2019). Accompanied by the implementation of AI in the workplace, it is similarly important to recognize the phenomenon of robostress. Robostress is a subsidiary of technostress that is perceived when using and being overburdened by intelligent machines or robots (Vänni, Salin, Cabibihan, & Kanda, 2019). All these forms of stress are caused by employee uncertainty, since employees do not know what to expect in certain situations and how to react appropriately to those triggers (Allen et al., 2007). Therefore, it is interesting to conduct research on how individuals deal with stress induced by AI-change and which coping mechanisms they apply (Ashford, 1988).

According to the literature, there are two different ways of dealing with the implementation of AI in the workplace. In light of the fear of the unknown, the 'Uncertainty and Anticipation Model of Anxiety' (UAMA) posits that uncertainty about a possible future risk affects the ability to avoid it or mitigate its negative effects (Grupe & Nitschke, 2013). According to the UAMA, individuals assess the change as negative and tend to show avoidance behavior. Concerning organizational change, research found that employee cynicism and also skepticism towards innovations and change are a form of rejecting-behavior (Stanley, Meyer, & Topolnytsky, 2005). Rejecting-behavior can be explained by the psychological reactance theory, stating that in case of a reduction of behavioral freedom, individuals are motivated to restore their personal freedom (Torrance & Brehm, 1968). Psychological reactance also appears in the context of organizational change, which might in some cases undermine the employees' freedom and activate negative emotions (Nesterkin, 2013). This could result in resistance to organizational change, as employees perceive uncertainty as threatening their everyday work life or even their identity (van Dijk & van Dick, 2009; Nesterkin, 2013).

Apart from rejection and negative effects, according to the 'Theory of Uncertainty Management' (UMT), uncertainty can also lead to a motivation to overcome it and to consider change as a rather positive opportunity (Brashers, 2006). Research found that commitment to change is influenced by employee-manager relationships, job motivation, role autonomy, and the perceived fit of change with an organization's vision (Parish, Cadwallader, & Busch, 2008).

Implications for communications on micro-, meso-, and macro-level

Innovations can affect an individual, an organization, and a profession as a whole (West & Farr, 1990; Gopalakrishnan & Damanpour, 1997; Damanpour & Schneider, 2006). Thus, we can assume that the implications of organizational change for communications can range from the micro- to the meso- to the macro-level.

Implications for the communication practitioner (micro-level)

Communication practitioners are engaged in a wide spectrum of professional responsibilities (Hall, 2015), ranging from, for example, "[…] editorial work, internal counselling, handling of inquiries, gathering information, looking at data from research, talking to press contacts, drafting communications plans, delivering presentations, producing communications materials […], and administrative tasks within the department" (Cornelissen, 2011, p. 158). Since they often work at a high pace and under stress on many tasks simultaneously (Cornelissen, 2011), surprising or new developments can be significant components that fundamentally affect their daily work (Elving, 2005). Due to the implementation of AI, routines and established work patterns in organizations are transforming (Kolbjørnsrud et al., 2017) and communications departments with their key functions are redefined (Sterne, 2017). As communication is a critical tool for an organization's functioning (Hargie, 2016), communication practitioners seem to possess a unique duality regarding their role and the implementation of AI: On the one hand, it is the task of communication practitioners to cope with the new challenges AI poses to their daily work. At the same time, they are managing the strategic communication, internally and externally, of what the implementation of AI means for various stakeholders of the organization. Accordingly, one can assume that communication practitioners are AI-users and AI-communicators at the same time. As an AI-user, the communication practitioner uses tools and programs based on AI within the daily working routine. Being an AI-communicator means that the communication practitioner shares information about AI-related content with internal or external stakeholders, depending on his or her position within the organization. Whether those roles exist has never been investigated before and will be explored in the present research.


As already discussed in the previous sections, the effects of technological organizational change through AI on individuals can vary. As every human being seems to deal with uncertainty in a different way (Jarrahi, 2018), one can assume that communication practitioners also show different emotions and coping behaviors towards AI and organizational change. According to the UMT, there are communication practitioners who assess the adoption of AI as rather positive, motivating, and challenging (Brashers, 2006). On the other hand, literature shows that, according to the UAMA, individuals tend to fear uncertainty, which might likely lead to rejecting-behavior, cynicism, skepticism, and psychological reactance among communication practitioners towards AI-change (Stanley et al., 2005; Grupe & Nitschke, 2013; Nesterkin, 2013).

The results of the European Communication Monitor 2019, exploring the knowledge and perceptions of AI among communication practitioners, partly shed light on the subject of research. The findings point out that only a minority of communication practitioners can be considered experts in the field of AI, indicating only a small amount of personal knowledge about AI amongst them (Zerfass et al., 2019). Around half of them think the adoption of AI will change the way they work, but only 20% fear job loss, a threatened professional identity, or shrinking salaries. The study also identified an "AI divide" between generations, in which communication practitioners in their twenties seem to regard the future of AI less positively than older practitioners (Zerfass et al., 2019). Referring back to uncertainty management, this would imply that younger communication practitioners tend to reject AI-change according to the UAMA, whereas older ones are motivated to overcome the unknown according to the UMT. One angle the study did not investigate is whether communication practitioners consider the emergence of AI a fundamental organizational change. However, this assessment is the key point of the present study. Discovering whether communication practitioners are afraid of the uncertain or already familiar with AI will generate insight into different emotions and coping behaviors.


Implications for the organization as a whole (meso-level)

In addition, it is important to examine the implications the adoption of AI causes within organizations. As stated above, organizational change can have widespread consequences, and new technologies most likely affect an organization's functioning (Gerwing, 2016). With the emergence of AI and technological organizational change, new structures in companies are created (Wisskirchen et al., 2017). PwC's latest global CEO survey "Navigating the rising tide of uncertainty" revealed that both the risks and opportunities surrounding AI are a number one priority for organizations and their top executives (PwC, 2020). 85% of CEOs in this study agree that AI will significantly change the way their organizations operate over the next five years (PwC, 2020). There will be a considerable amount of new structures within a company, and the in-house organization will transform with the growing importance of AI (Wisskirchen et al., 2017). Especially in those matters, communication practitioners need to communicate sensitively and yet effectively about AI-change and its consequences. However, the results of the ECM 2019 show that only every third communication practitioner even expects a change in the way of working within their organization through AI (Zerfass et al., 2019).

In comparison to the results of PwC's CEO survey, this indicates that organizations' top management has higher expectations regarding the relevance of AI in the future, whereas communication practitioners rather seem to underestimate the importance of AI within their organizations.

Implications for the communications profession (macro-level)

However, it is equally important to gain insight into how the communications profession as a whole might change. It is very likely that the adoption of AI will create a need for new jobs and change established ones (Wisskirchen et al., 2017; West, 2018). Those new occupations might involve many skills a communication practitioner already has, but they are also likely to require a lot of new competencies (Wisskirchen et al., 2017). Thus, it can be assumed that the communications profession will have to adapt. This could potentially lead to a whole new understanding of communications that would require the entire communications profession to be reinvented in order to meet the challenges induced by AI. This assumption is additionally supported by the findings of the ECM 2019, with the majority of communication practitioners believing that AI will influence the communications profession as a whole (Zerfass et al., 2019). One in five practitioners further considers that the communications profession will lose its identity or core competencies (Zerfass et al., 2019). However, those findings lack precise and more detailed insights into which core competencies are involved or to what extent such a loss of identity could occur.

Taking all of those findings into consideration, it became evident that there is still no unifying research that examines the emergence of AI in the context of organizational change from a communication science perspective.

Methods

Since the aforementioned has not yet been extensively researched, qualitative research is well suited to provide insight into how communication practitioners define AI and how they perceive and deal with AI-change. Of particular research interest are the interviewees' emotions and experiences towards AI. Due to the possibility of openly formulated questions, qualitative, individual, in-depth guided conversations provide deeper insights than a quantitative survey (Carey, 2012). Moreover, since the focus of this study is on individual experiences and not on group dynamics or interpersonal processes, face-to-face interviews tend to be a more fruitful research method than focus groups (Brennen, 2018). Semi-structured interviews, in turn, assure great flexibility to focus intensely on every interviewee and to understand the meaning of information and independent opinions (Brennen, 2018). Grounded theory was used as the overarching conceptual research approach to develop an in-depth understanding of the aforementioned sensitizing concepts (Glaser & Strauss, 1967; Corbin & Strauss, 1990; Bowen, 2006).


Sampling strategy and sample characteristics

Because convenience sampling can mitigate reliability, purposeful sampling, one of the most effective approaches in qualitative research, was used to ensure variability in the sample (Thyer, 2011). While traditional companies might struggle to adapt to affordances caused by AI, Tech companies have grown with such innovations, which makes a distinction between them a promising sampling criterion (Pannu & Student, 2008). Therefore, the first sampling criterion was the differentiation of communication practitioners working in either a Tech company or a Non-Tech company. Companies were considered Tech companies if they are "involved in the research, developments and/or distribution of technologically based goods and services" (Frankenfield, 2019, para. 1). Second, only communication practitioners in companies already implementing AI were included in the sample. They were best suited to meet the purpose of the study and were capable of giving information-rich answers regarding AI-change. The information that these companies were already dealing with AI was researched via online sources and then confirmed again by the interviewees. Third, the communication practitioners should be involved in communication work within the company's communication department to provide statements on how AI affects 1) them and their work personally, 2) the organization, and 3) the communications profession as a whole. This information was collected via LinkedIn and confirmed by the interviewees during the recruitment process. Eligible respondents were recruited via LinkedIn, as it provides the opportunity to obtain background information in advance to confirm the suitability for the study. The recruitment process was enriched by the method of snowballing to gain more interviewees and important insights into facts one was not aware of before (Kothari, 2004). The interview recruitment text in German and English can be found in the appendices (see Appendix A).

110 interview requests were sent, but only a few communication practitioners replied and were able and willing to set up an interview. An interesting observation during the recruitment process was that potential participants initially seemed interested in taking part in the interview. However, after explaining to them that the interview would revolve around AI, some participants withdrew their confirmation immediately. It was quite noticeable that they withdrew from participating when the AI term was mentioned. In most cases, the cancellation was explained by a lack of knowledge or feeling uncomfortable. Finally, a total of eight women and four men aged between 28 and 53 years were recruited and interviewed for the present study. They differed with regard to the variations mentioned above, whereas all exhibited a common characteristic: their work as a communication practitioner for a company already implementing AI. Beyond the sampling criteria, there is some additional information concerning the sample characteristics, which can be found in table one (see Appendix B). The participants varied in their hierarchical positions, duration of employment for the company, and country of residence. The companies they work for differ according to the type of industry they operate in, ranging, for example, from automotive and fast-food retail to telecommunications. In total, five companies could be identified as Tech and seven as Non-Tech companies. The 12 conducted interviews led to a variety of insights and perspectives, and it can be concluded that saturation was reached, as the last three interviews partially overlapped with information from previous interviews.

Procedure of data gathering

Data gathering took place between 17.04.2020 and 08.05.2020. Conducting interviews with participants face-to-face is important in qualitative research, since the researcher can receive spontaneous answers and a much better idea of what participants are trying to express (Knapik, 2006; Braun & Clarke, 2013). However, due to the circumstances caused by the corona crisis, all of the semi-structured interviews were conducted via telephone or video-conferencing. The predetermined time frame for the in-depth interviews was 30 to 60 minutes. In the end, the actual duration varied between 31 and 56 minutes, with an average of 43.3 minutes. Two of the interviews were conducted in English, ten in German. The atmosphere in all interviews was very open and relaxed, and there was a great deal of interest in and enthusiasm for the topic.

Some measures were taken to make the interview atmosphere as favorable as possible in order to obtain insights of high quality. The interviews were conducted while the communication practitioners were at home. Those circumstances are likely to enhance the interviewees’ well-being, as people feel more comfortable during an interview in a familiar environment (Bolderston, 2012). This results in statements that reflect their perspectives well and makes the data as credible and authentic as possible (Braun & Clarke, 2013). Another way of comforting the interviewees and of gaining consistent and repeatable findings not biased by social desirability was an informal and natural atmosphere (van de Mortel, 2008). This was achieved by showing them that they were talking to a person with prolonged engagement and a real interest in the topic (Cohen & Crabtree, 2006). Consequently, the interviewees talked openly about their emotions, experiences, and opinions towards AI. This is especially significant since talking about dealing with uncertainty and change can be a very personal and intimate topic.

To answer the research question, the sensitizing concepts were transferred into a semi-structured interview guide (see Appendix C). Beginning with the introduction, the interviewees were encouraged to answer without hesitation when giving information about personal experiences and insights. Telling them that there are no right or wrong answers was meant to ensure that the interviewees dared to express spontaneous thoughts. For reasons of transparency, the participants were also informed about the recording of the interviews, which increases the accuracy of their statements and also allows verbatim statements to be incorporated into the results section (Anderson, 2010; Loubere, 2017). Furthermore, the interviewees were briefed on the anonymization of their personal data, which gave them a comfortable feeling to speak freely (Qu & Dumay, 2011). In line with the RQ, the interview guide was divided into three topics: The first topic covers the first part of the RQ and the sensitizing concept ‘Artificial Intelligence’. The interviewees were asked about their definition of AI and their personal experience. The goal was to determine whether they have an understanding of AI that corresponds to the definition and terminology. Under the second topic, it was examined in which way the emergence of AI is viewed as a fundamental organizational change and how communication practitioners perceive AI-change. The purpose was to evaluate emotions accompanying organizational change (e.g. change-related uncertainty or fear of the unknown). The third topic concerns the implications for communication practitioners and their dealing with AI-change. The aim was to determine how communication practitioners assess the impact of AI on the micro-, meso-, and macro-level and how they cope with it. The interview guide closes with the possibility to give further information on the topic if requested.

Before the interviews were conducted, a pretest was carried out with a communications expert who was not included in the sample because of her familial background, which could have led to biased results. The pretest served to ensure the comprehensibility of the questions, the quality of the formulation, and the feasibility of the interview, which, in turn, enhances the reliability and validity of the study (Magnusson & Marecek, 2015). After conducting the pretest, some adjustments were made in terms of phrasing and comprehensibility of the questions, which led to the final interview guide (see Appendix C). The final interview guide was used in all of the interviews.

Analysis of the data

All interviews were recorded and transcribed with the transcription software F5. The transcripts were kept in the original language, while the quotations for the results section were translated into English. During the transcription, filler words and interruptions by third parties were not included. Other features, such as pauses in speech or laughter, were noted so as not to distort the interviews’ character. The analysis was guided by open coding to identify, categorize, and describe constructs or phenomena found in the transcripts (Corbin & Strauss, 1990).


In the beginning, open coding was conducted in light of the research question. During this phase, the interview data was broken down into segments, and researcher-derived codes (e.g. fear of the unknown) and data-derived codes (e.g. fear of machines and AI taking over) were assigned. Significant for the coding were the respondents' statements about the emergence of AI in their workplace and the experiences and emotions they expressed in this regard. At this point, it was crucial for the quality of the study that the statements were not grouped into ready-made categories, to ensure that their emotions and experiences could freely speak for themselves. In light of the research question, open coding was carried out until the central concepts guiding the research were saturated and no new insights could be obtained.

After the process of open coding and a closer in-depth examination of the codes, it became apparent that there was a need for focused coding, moving beyond the sensitizing concepts and looking for structure and variation among the codes (Braun & Clarke, 2013). According to the data findings, three overall topics emerged that also refer back to the sensitizing concepts and the research question. Hence, three families could be created in Atlas.Ti: “Definition of and associations with AI”, “perception of AI as an organizational change”, and “dealing with AI”. When taking a closer look at the families, it became visible that different types of communication practitioners could be derived from the assigned codes. Accordingly, a concept indicator model (CIM) was created to visualize the findings and to answer the research question while aiming at producing theory (Strauss, 1987; Corbin & Strauss, 1990). In light of “Types of communication practitioners defining, perceiving, and dealing with AI-change” as the predominant concept, four different types of communication practitioners were noted: The skeptical AI-newcomer, the open-minded AI-newcomer, the rational AI-newcomer, and the passionate AI-expert (see Figure 1). To enrich these dimensions with more detailed information, categories were created which served as indicators for the respective dimension. For this purpose, all codes within a dimension were divided into three logical subdivisions and adequately labeled, highlighting the interviewees’ definition, perception of, and dealing with AI.


Transferability, reliability, and validity

To enhance the transferability, reliability, and validity of the study, it is important to ensure the researcher’s truthfulness, objectiveness, and deep knowledge about the researched subject (Golafshani, 2003). To foster the study’s transferability, it was of great importance to present a detailed and traceable method section. This means that the sampling, the process of open coding, and the development of the CIM were presented in detail so that other researchers are able to understand and follow the procedure and can assess whether this study is applicable to other contexts. Of particular importance was showing in a transparent way how the four different types of communication practitioners were identified and developed in the CIM, which also enhances the reliability and validity of the study. Another important aspect was to provide thick descriptions of the communication practitioners and the phenomenon of AI, and extensive engagement with the aforementioned, so that the reader can assess its transferability (Ely, Anzul, Friedman, Garner, & Steinmetz, 2003).

The first measure enhancing the reliability was the use of the computer-assisted qualitative data analysis software Atlas.Ti, which helped to structure the codes in order to make this structure transparent and traceable for others. The second measure to enhance the study’s reliability and, at the same time, strengthen its internal validity, was memo writing. Memo writing assists in reflecting on and understanding certain choices made during the research process in a transparent way (Stuckey, 2015). Notes were made which, for example, helped to keep a natural flow during the interviews and made it possible to come back to certain questions at a later stage. Furthermore, taking notes during the coding and the process of analysis helped to assign meaning to certain codes and to remember those meanings.

To enhance the study’s validity, member checking in terms of respondent validation was applied by showing the interviewees the transcripts and asking them for feedback (Carlson, 2010). With this technique, it was possible to explore the quality of the findings and certain interpretations (Birt, Scott, Cavers, Campbell, & Walter, 2016). Furthermore, asking the respondents to back up their statements with examples increased the study’s validity (Ely et al., 2003).

Figure 1. Concept indicator model of communication practitioners defining, perceiving, and dealing with AI

Overarching concept: Types of communication practitioners defining, perceiving, and dealing with AI-change
- The skeptical AI-newcomer: uncertainty and vagueness in defining AI & dystopic associations; organizational change through AI as possible threat; rejection and repression of AI in communications
- The open-minded AI-newcomer: uncertainty and intangibility in defining AI & positive associations; organizational change through AI as opportunity; curious expectations and acceptance of AI in communications
- The rational AI-newcomer: functionality and buzzwords in defining AI & economical associations; organizational change through AI as competition; efficiency and motivation through AI in communications
- The passionate AI-expert: self-confidence and preciseness in defining AI & optimistic associations; organizational change through AI as enrichment; enthusiastic examination and clarification of AI in communications

Results

Communication practitioners were found to define, perceive, and deal with organizational change through the implementation of Artificial Intelligence in various ways. These observations can be classified into four types of communication practitioners, representing the four main dimensions of the CIM presented above (see Figure 1): The skeptical AI-newcomer, the open-minded AI-newcomer, the rational AI-newcomer, and the passionate AI-expert. Each of these types, describing how communication practitioners cope with AI, serves as a basis for matching indicators, which will be discussed in more detail in the following results section.


The skeptical AI-newcomer

Interviewees of this type showed a low level of experience with and knowledge about AI and a rather skeptical, even suspicious attitude. The skeptical AI-newcomer is predominantly employed in a Non-Tech company in which AI still plays a subordinate role. Some have not yet come into contact with AI in their workplace, while others already unconsciously use tools that are based on AI (e.g. DeepL, Grammarly). Besides that, they tended to strictly reject the use of intelligent assistants or devices in their private environment due to various concerns (e.g. data misuse, privacy issues).

Uncertainty and vagueness in defining AI & dystopic associations

When defining AI, it soon became apparent that communication practitioners were rather insecure in this regard. They mostly emphasized that the concept of AI is complex and difficult for them to grasp, while all of them even expressed their fear of giving the wrong definition of AI. An example was a man who openly showed his concerns about defining AI inaccurately. When defining the term, he then expressed himself very vaguely and rather emphasized that technological possibilities come into play when human beings need them. He also indicated that things will then change fundamentally and drastically with AI-change:

„So I would define and describe AI now with the possibility of an ehm (...) oh god one moment, not that I say something wrong now. AI for me means that where a person has certain physical and psychological limits, that through technical possibilities, now independent of the format or what that is, these limits can practically disappear completely and thus create completely new possibilities to advance into completely new dimensions in the most diverse areas“ (#8, man, 53).

Besides uncertainty and vagueness in defining AI, it was interesting to observe that the skeptical AI-newcomer associated the AI term with dystopic future scenarios in which robots take over and rule the world while AI develops its own consciousness and becomes too intelligent for humans. A female interviewee stated that, when hearing the term Artificial Intelligence, she thinks about

„[r]obots, that is a cliché, I know, so robots and computers, just very much I guess these sorts of images from Hollywood or Science Fiction movies, I guess where everything is observed and monitored and very technologically advanced looking and also a bit scary” (#1, woman, 33).


Organizational change through AI as possible threat

In terms of considering AI as a form of organizational change, the skeptical AI-newcomers felt that AI is only one small factor among many that can cause change, and, if so, it should be interpreted more as a phenomenon of peer pressure and less as a necessity for them and their organization. However, when a perceived fundamental change is caused by AI, then AI itself seemed to be considered rather threatening and was viewed skeptically and critically. For instance, a woman employed at a long-established, traditional Non-Tech company expressed her thoughts and emotions about a dilemmatic future with “human-like” AI:

“ […] if you create it to be exactly 100% like a human, then you can’t cut out the aspects of humans that also injure other people, whether it is emotional or physical and if you cut that out, it would never be totally human and you would maybe never get the full potential of AI” (#1, woman, 33).

In this context, a man also expressed his concern about many people being too reckless in their use of AI and urged that not only the advantages of AI should be acknowledged, but also the possible threat that occurs if AI is not treated responsibly and ethically. Accordingly, he outlined the following scenario:

„If this gets into the wrong hands fast, of course you have to approach it carefully and not only see that it saves time and money and is efficient and I can use my skills differently, you have to take that into account. Relying only on one technique is great, as long as everything is good and everything works, no problem. If it is no longer good, you can't get it back. Because once Pandora's box is open, you can't close it” (#8, man, 53).

In general, it became evident that the respondents were aware that AI can also offer advantages, but, in their view, these do not outweigh the possible disadvantages and risks AI entails. Therefore, it could be observed that skeptical AI-newcomers did not want to see their individual job (micro-level) or their organization (meso-level) affected, and they also did not suspect effects on the macro-level.

Rejection and repression of AI in communications

Besides assessing and perceiving AI as a possible threat, the interviews demonstrate that communication practitioners of this type saw the negative consequences of AI in general, but tended to comfort themselves with the idea that AI has nothing to do with communications and is not replacing them or costing them their job. One interviewee (#11, woman, 33) pointed out:

„I am also in a very lucky position that I have a job that is never the same and never repetitive and I just think that there are not so many points that it would be great if a certain tool based on AI came along. I wouldn't know what to hope for, what AI can do for me and my professional life.“

Another way to comfort themselves was the thought that AI is not capable of performing human-like tasks:

“There is the fear of replacing human labour by machines and I am convinced that of course a lot of jobs will be lost, at the same time there are activities that machines can never take over. Everything that has to do with emotional intelligence and is based on interpersonal relationships, a machine will never be able to do that” (#7, woman, 42).

This illustrates two different ways of dealing with AI: On the one hand, AI was passively rejected, thus rather repressed, since the skeptical AI-newcomers did not see a way to combine AI and communications with each other. On the other hand, there was the active rejection of AI, with the call to resist the imposition of AI. For instance, an interviewee stated in this respect:

„[...] we can and should be critical, and I hope that we will not be discouraged from doing so, just because the deathblow argument comes up that we have to move with the times and that this is the future in which we will work and live. I always say that this is not right. It's only the case if we all follow those roles, but there is no one who predetermines such a future for us“ (#8, man, 53).

This shows that skeptical AI-newcomers had a general fear of AI becoming uncontrollable, while always striving to comfort themselves with the idea of AI having nothing to do with their personal role (micro-level) or the communications profession (macro-level). Furthermore, they did not consider themselves in the role of AI-users, although they unconsciously made use of AI. They also did not regard themselves as AI-communicators, unless they had the opportunity to make others aware of the risks posed by AI.

The open-minded AI-newcomer

In comparison to the skeptical AI-newcomer, the findings show that communication practitioners of this type had a more positive sentiment towards AI-change. Even though they are not AI-experts and showed rather limited experience and in-depth knowledge, their attitude was open-minded when talking about AI. There were open-minded AI-newcomers in both Tech and Non-Tech companies, suggesting that it was not their professional background but personal characteristics that evoked their positive attitude. During their work, they were already using tools based on AI (e.g. DeepL and Grammarly), although partly unconsciously, and one interviewee (#9, woman, 39) had already gained some experience with a chatbot for external customer communications. Furthermore, they seemed to be more open towards intelligent assistants or devices; another interviewee (#10, woman, 33) used Google Home out of curiosity and to explore its capabilities.

Uncertainty and intangibility in defining AI & positive associations

When defining AI, the open-minded AI-newcomer was rather uncertain and avoided giving a detailed definition. This type tended to jump from one technical term to the next specific concept and considered AI a beneficial process for both organizations and individuals. Accordingly, an interviewee expressed:

„Ehm for me, when I think of AI, somehow immediately a connection with Machine Learning opens up, even if I now put the next buzzword in the room, but ehm something that is a process, which can be improved by machines and technology. Ehm for me that means something that we, as humans, can't yet grasp but might be useful for organizations and for me as well“ (#10, woman, 33).

This shows that AI is still a phenomenon that is difficult for them to grasp, although this does not diminish their positive attitude towards AI. This positivity is also reflected in their associations with AI and their future prospects. The woman who had already gained some experience with a chatbot, for example, mentioned the following:

[…] for me personally, AI is a great chance to make work easier for people. But I think that we are not yet at the point where you can say that you can do something and it will work by itself. For me, it is more like a support that can help people practically in their everyday work and everyday life in general in the future” (#9, woman, 39).

This illustrates that this type of communication practitioner was open to AI, although they did not yet have a clear idea of what the future with AI might look like.

Organizational change through AI as opportunity

While the skeptical AI-newcomer perceived organizational change through the emergence of AI as rather threatening, the open-minded AI-newcomer assessed it as a chance and an opportunity. At this point, the interviews reveal that this type was much more focused on what AI can do and enable, instead of fearing what could happen if AI was misused or not questioned critically enough. This is well illustrated by the following quotation:

„I think if you see the chances that AI can bring to you or the organization, that is, if you really have something tangible that you can test or can implement, and then also say ‘helps us or doesn't help us’, then we really have a positive and successful organizational change through AI“ (#10, woman, 33).

This is furthermore reflected in the perception that AI does not jeopardize existing jobs, but causes new jobs to be created for which, in turn, new skills must be acquired. More important than the possibility of new jobs emerging due to AI, however, was the prospect of personal growth and a positive mindset in dealing with new technologies. An interviewee stated:

“I think with AI, the question is how to deal with it. So you sit down and try to solve the problem, or you let uncertainty guide you. The important thing is that you always see new things as an opportunity to improve yourself and to learn” (#9, woman, 39).

However, it is relevant to point out that the open-minded AI-newcomer needed to have an understanding of AI and its capabilities to feel secure and confident in dealing with it. In addition, it is an interesting observation that this type mainly suspected opportunities through AI at the micro-level and for the whole organization (meso-level). However, they could not yet imagine how AI will transform communications in general (macro-level).

Curious expectations and acceptance of AI in communications

The findings show that the open-minded AI-newcomer was curious about what AI might offer him in the future. This applies to all levels (micro-, meso-, & macro-level) of possible implications. They were especially curious about micro-level changes and already started to ask themselves during the interview which tools were going to be available in the future to simplify daily working practices. At the same time, however, the communication practitioners of this type also wished for “[…] an interplay, that AI supports the human being and does not take over emotions, humanity, or imperfections because the human-error-factor makes our lives interesting” (#9, woman, 39). Although open-minded AI-newcomers faced a rather uncertain future with AI, they can be positioned in the role of AI-testers, which can be defined as AI-users that are actively engaging with AI and curiously exploring and partly unknowingly using it. One interviewee (#10, woman, 33) indicated that after the interview, she would like to initiate an internal discourse on the AI topic:

“After our conversation, I think I'd probably google and find out a bit more about what it all means. And then to communicate about it, for example internally, to ask: ‘Hey, who uses that?’, ‘How do you perceive that?’, ‘Do you see that critically?’. So to start a discourse and a discussion there.”

This demonstrates that they were motivated to obtain more knowledge about AI, probably to integrate it more consciously into their everyday work. It was important for them, however, that human abilities are not neglected and that they have enough time and capacity for creative and fulfilling tasks. Given these circumstances, they tended to consider themselves to be AI-users and AI-communicators in the future.

The rational AI-newcomer

Just like the open-minded AI-newcomer, the rational AI-newcomer works in both Tech and Non-Tech companies. But the rational AI-newcomer, compared to the skeptical and the open-minded AI-newcomer, was less emotionally involved in the field of AI. One can rather assume that the rationally considered economic advantages were his main focus of attention. Moreover, communication practitioners of this type occupied higher positions than the other types and were responsible for making impactful decisions. They showed a slightly more advanced understanding of AI than the other AI-newcomers, although they had very little practical experience but more theoretical knowledge in this field. They were partly using intelligent assistants (e.g. Google Home, Siri) in their personal environment, but only to quickly and efficiently fulfill their needs (e.g. playing music, speech-to-text messaging).

Functionality and buzzwords in defining AI & economical associations

The definition of Artificial Intelligence was mainly based on technical, functional, and rather economic components of AI. The rational AI-newcomer defined AI as “[…] self-learning systems, simplification, and reduction of complexity, which of course is very similar. Also higher efficiency and higher speed” (#12, man, 43). It became apparent that he is not an expert, as he relied even more on buzzwords when defining AI than the skeptical and the open-minded AI-newcomer. However, his definition gravitated towards the functional dimension of AI and what AI can enable from an economical perspective. This also applied to the associations with AI. As soon as the AI term was mentioned, the rational AI-newcomer immediately started thinking about how AI might optimize processes efficiently and might create a potential competitive advantage for his organization. Furthermore, it was interesting to observe that he believed in an inflationary use of the AI term, stating that “[…] the term AI is run through the roost in marketing and communications” (#6, man, 35). This indicates that he was aware of the relevance of AI and also of the importance of the purposeful use of the AI term (e.g. for marketing reasons).

Organizational change through AI as competition

The rational AI-newcomer considered the emergence of AI a natural development, and AI “[…] is not a decision anymore, as the question is not ‘whether?’ but ‘how?’ AI changes our lives” (#6, man, 35). He viewed AI-change as a competition that is necessary to ensure relevance and competitiveness on the market in the future, stating: „With AI, now you can go as far as Darwin, in the sense of ‘survival of the fittest’, when the fittest is not the most athletic, but the most adaptable” (#6, man, 35). This demonstrates that the rational AI-newcomer perceived AI, on the one hand, as an opportunity and, on the other hand, as a risk, whereby AI only constitutes a promising change for those who are most adaptable and actively deal with the affordances of the AI-change. This is illustrated by the following statement:

“So AI will cause higher complexity, higher speed, greater volatility in the business, entire business models that just change. Of course, this also means that all companies, and therefore ultimately other agencies, work differently and must achieve different results. And that means that they will also change internally in the way they work or collaborate, how they work together. And that is, of course, the change that, to be honest, is permanent” (#12, man, 43).

Accordingly, the rational AI-newcomer mostly assumed changes through the emergence of AI that are relevant for the whole organizational structure (meso-level) and the whole communications industry (macro-level), since AI-changes also affect society. Those changes are permanent and considered a natural development if one wants to stay innovative and competitive in the market.


Efficiency and motivation through AI in communications

Whereas the skeptical AI-newcomer strongly rejected AI, the rational AI-newcomer was the total opposite. Communication practitioners of this type were not afraid of the emergence of AI and possible risks. Rather, they were encouraged and motivated to overcome the gaps in their knowledge and to actively tackle what they had not yet learned about AI. This motivation for overcoming the unknown was much stronger among rational AI-newcomers than among open-minded AI-newcomers. For instance, an interviewee stated that AI and newness

“[…] motivate me to work very hard to stay up to date. I know that I have to do this, otherwise I will become obsolete on the job market, even from an HR perspective. I have to do that, and I try to educate myself consistently so that I don't get into the situation where I have the feeling that I'm hung up or that I tend to surf behind the wave” (#6, man, 35).

Here, it could be observed that the rational AI-newcomer actively faced innovations and was aware of the fact that one can be replaceable. However, he was not afraid that AI might replace him in his job one day, since he is in a high position “[…] doing no operational tasks like writing press releases, but more the strategic things like consulting clients” (#12, man, 43). The interviews demonstrate that the rational AI-newcomer saw the potential for improved efficiency through AI, enabling added value and resource allocation. Besides, he regarded himself as being on the winning end of the AI-change since, on the one hand, he actively accepts the change and tackles challenges. On the other hand, he is in a high position where he does not anticipate being replaced by AI. AI was an effective means for him to monitor the quality of his employees' work (e.g. DeepL for controlling translations) (#12, man, 43) or to have a personal, intelligent assistant scheduling meetings that unburdens his human assistant (#6, man, 35). Accordingly, the rational AI-newcomer is also an AI-user and AI-communicator. He utilized AI to pursue economic goals and regarded himself as an AI-communicator who has to frame AI in such a way that his employees do not fear job loss.

The passionate AI-expert

While the findings show a variation among AI-newcomers, there was only one type of AI-expert in the data: The passionate AI-expert. Passionate AI-experts predominantly work in Tech companies and were involved with AI on a daily basis. Additionally, in their private life, they were quite open-minded towards AI and integrated intelligent assistants and devices into their everyday life. While the three types of AI-newcomers seemed to be more like creatures of habit when it comes to AI usage, the AI-expert is a person who explored new territory and was involved in developing, testing, and distributing innovations with AI. Furthermore, it is interesting that only the skeptical AI-newcomer and the passionate AI-expert mentioned the topic of responsibility in working with AI. While the skeptical AI-newcomer seemed to be concerned about ethics in AI and suspected employees of large Tech companies of disregarding morals and ethics, the passionate AI-expert aimed to anchor a responsible approach to AI in society.

Self-confidence and preciseness in defining AI & optimistic associations

When defining AI, the passionate AI-expert seemed to be certain and self-confident about providing an all-encompassing and detailed picture. In comparison to the three types of AI-newcomers, the findings show that passionate AI-experts were not dropping buzzwords whose exact meaning they possibly did not know, but instead illustrated AI on an application-related level by referring to its dynamic functionalities. Illustrating this, an interviewee defined AI as „[...] the technical simulation of the personal cognitive performance of humans, which increasingly with the ability, as well as a machine, can learn in an intelligent way to also evolve over time” (#5, woman, 43). Moreover, the findings show that passionate AI-experts were positive and, at the same time, realistic in their attitude towards AI. This is also evident in their associations with AI. They immediately associated AI with the “[…] support of individuals and the expansion of human capabilities” (#5, woman, 43). They put these associations into perspective by saying that the only way to succeed with AI was to use it responsibly. For instance, an interviewee stated:

“I see it as a way to really be more efficient in the future because data is the new gold. This is a slogan that everyone uses but it is true. So there is more and more data around and we need to make sense of it and AI really allows to do it in a more efficient way, I would say. But we have to be responsible in dealing with AI because data is also the most sensitive good we have” (#2, man, 29).


Organizational change through AI as enrichment

An important insight is that passionate AI-experts, who operate in an environment where AI and technological innovations seem to be common, still considered AI to be a key driver for organizational change and did not take it for granted. This shows that they perceived gradual organizational change, triggered by AI, as important and enriching for organizations. In contrast to the skeptical AI-newcomer, who believed that AI has no raison d'être in communications, the passionate AI-expert argued that AI-change urgently needs communications. An interviewee explained AI-change in the following way:

“I would not necessarily say that AI is an organizational change in itself, but it is a triggering factor for technological change. Ehm because AI in itself does not change anything, but the availability and also simply in general such a massive amount of data and data volume, which is increasing more and more, makes it a necessity that organizational change becomes necessary for communications. It is one of the key drivers I would say” (#5, woman, 43).

For the passionate AI-expert, AI was not threatening or restricting; it was an opportunity which, if accepted and actively worked on, can become an enrichment. This enrichment applies to all levels, as they considered AI as personal enrichment (micro-level), an opportunity for the organization (meso-level), and an important touchpoint for communications in general (macro-level). They perceived the supporting role of AI, with every instance benefitting from the emergence of AI, and argued that more AI enables more connectedness within organizational cultures. To illustrate this, an interviewee explained:

“Well, you can actually say that time-consuming, uncreative tasks are enriched by a certain amount of input, that you have more time for creative and exciting tasks and also for, we didn't mention that at all, but it is super important for the exchange and cooperation with people. So, I think this is a great opportunity that one has if such time-consuming activities are shortened because a preparatory work is done by the machine so that we have more time for interaction with each other which is beneficial for everyone” (#4, woman, 38).

Enthusiastic examination and clarification of AI in communications

As previously mentioned, the passionate AI-expert assessed AI as supporting humans rather than taking over their responsibilities. Communication practitioners of this type were very enthusiastic in dealing with AI and tried to examine the possibilities that AI-based tools have to offer, for example the “tool ‘Trello’ with the ‘Trello Butler’ for workflow automation” (#3, woman, 28). Accordingly, the findings show that the passionate AI-expert is at the same time AI-user and AI-communicator. On the one hand, the passionate AI-expert is an AI-user since he used both hardware and software entailing and driven by AI. An interesting insight is that Tech companies and also their communication practitioners seemed to use their own developed AI-tools or products, as an interviewee stated: “in our company, it is like, as they say, ‘eating your own dog food’, so AI will of course also continuously shape our working environment internally. Either with chatbots or analysis tools” (#5, woman, 43). On the other hand, it is crucial to mention that passionate AI-experts had a natural understanding of themselves being in the role of the AI-communicator. An interviewee explained that “it is my job as a communication expert to keep triggering and introducing new topics and innovations so that everyone internally and externally can keep up with progress and change” (#3, woman, 28). But one can go even one step further, since the findings show that passionate AI-experts understood themselves in the role of an AI-encourager. Therefore, they considered it their function and responsibility to take away humans' potential fear and concerns towards AI by demonstrating in a transparent and responsible way what AI can do and what the limits of AI are. An interviewee stressed the importance of communications during AI-change:

“It doesn't work when two or three people think: ‘Okay, this is how we want the company to run in the future and we're putting that over everyone else and over the organization and then it'll work’. But it doesn't work either if you are introducing a new technology now and then, everything will be great, then we are a digital company. It just doesn't work if you don't talk to the employees at the same time if you create a transparent environment where employees can ask questions and learn. New technologies are not self-explanatory either, but you have to look at how to deal with them. This is what I am doing and what is my task as a communication expert” (#4, woman, 38).

Another important insight is that passionate AI-experts seemed to be in a bind. On the one hand, they wanted to use the AI term to demonstrate transparently which products contain AI and how certain tools and processes are working. On the other hand, they were aware of the fact that the AI term can cause fear and skepticism, for instance, among skeptical AI-newcomers. Here, they seemed to be on thin ice and had to consider carefully how to communicate about technologies based on or containing AI and whether the term "Artificial Intelligence" should be mentioned. An interviewee explained:


“So you either use the keyword AI to inform about it and to reach people or you just talk about algorithms and machine learning and so on, but then you run the risk of not reaching some people you actually wanted to address. Otherwise, you run the risk of frightening people” (#4, woman, 38).

Conclusion & Discussion

This study aimed to provide insight into how communication practitioners define, perceive, and deal with the emergence of Artificial Intelligence in the context of organizational change. The findings of this study uncovered that communication practitioners can be classified into four types defining, perceiving, and dealing with AI in various ways: The skeptical AI-newcomer, the open-minded AI-newcomer, the rational AI-newcomer, and the passionate AI-expert. Moreover, these four types show that communication practitioners differ in their knowledge of and experience with AI, as well as in their attitude towards it.

Definition of AI

First, communication practitioners differed in their definition and understanding of AI. All three types of AI-newcomers quoted buzzwords when defining AI. Using buzzwords might have been a way for them to show that they have an understanding of AI but may not know exactly what it stands for. While the skeptical AI-newcomer and the open-minded AI-newcomer were both rather uncertain when defining AI, they noticeably differed in their attitude towards AI. As the skeptical AI-newcomer associated AI with negative outcomes and dystopic future scenarios, the open-minded AI-newcomer was positively inclined towards AI. Even the rational AI-newcomer was not negatively influenced by uncertainty and associated the emergence of AI with the possibility of gaining competitive advantage. This shows that the AI term was indeed accompanied by ambiguity, confusion, and fuzziness, especially among AI-newcomers, which is in line with the research of Wang (2008). But this fuzziness only seems to be problematic among skeptical AI-newcomers. In comparison to the AI-newcomers in general, the passionate AI-expert was not dropping buzzwords but instead illustrated AI on an application-related level providing an all-encompassing picture of it. Moreover, his associations with AI were not idealistic, but positive and realistic towards a future with AI. The passionate
