
AI for All? AI Platforms and the Democratization of

Artificial Intelligence

Luka Kiehne, S4113357

University of Groningen
29th of October, 2020


Author: Luka Kiehne
Student Number: S4113357
Degree program: M.A. Media Studies: Datafication & Digital Literacy
First Reader: Dr. Clemens Apprich
Second Reader: Dr. Robert Prey

Declaration of Academic Integrity

I, Luka Kiehne, hereby confirm that this thesis on AI platforms and the democratization of artificial intelligence is solely my own work and that I have used no sources or aids other than the ones stated. All passages in my thesis for which other sources, including electronic media, have been used, be it direct quotes or content references, have been acknowledged as such and the sources cited. It has not been submitted nor accepted anywhere else for the award of any other degree or diploma.


Table of Contents

1 Introduction ... 6

2 Theoretical Framework ...11

2.1 AI Leaving the Niche Market - A Short History of AI ...11

2.2 Democratizing Developments in Technology ... 14

2.2.1 Democratizing AI ... 14

2.2.2 Endogenous and Exogenous AI ... 16

2.3 Hegemony or Self-governance - The Relation Between AI Service Providers and Their Users ... 19

2.4 AI Without a Clue? Organizational Digital Divide in AI Implementation ... 25

2.5 WYSIWYG? AI as a Marketing Buzzword ... 28

3 Method ... 31

3.1 Hypermodal Critical Discourse Analysis (Method) ... 32

3.1.1 Choice of the Methodological Approach... 32

3.1.2 Selection of Semiotic Resources and Platforms ... 34

3.1.3 Execution of Analysis... 37

3.2 The Walkthrough (Method)... 38

3.2.1 Choice of the Methodological Approach ... 39

3.2.2 Selection of Platforms ... 40

3.2.3 Execution of Analysis ... 41

4 Results ... 43

4.1 Hypermodal Critical Discourse Analysis (Results) ... 43

4.1.1 Narrative ... 44


4.1.3 References ... 47

4.1.4 Promises ... 49

4.1.5 Portfolio vs. Single Product ... 52

4.2 The Walkthrough (Results) ... 54

4.2.1 The Interfaces ... 54

4.2.2 The Engines ... 56

4.2.3 The Walls ... 58

5 Discussion ... 59

5.1 AI Platform Characteristics ... 65

5.2 Black Box ... 66

5.3 AI Platforms’ Hegemony? ... 67

6 Conclusion ... 69

Bibliography ... 72

Appendices ... 82

Appendix I: Coding Scheme ... 82


List of Figures

Figure 1: Hegemony Triangle by Johnsen, Lacoste and Meehan (2020, p.69) ... 22

Figure 2: A General Model of Trading Zones ... 24

Figure 3: Hype Cycle by Gartner, 2020 release... 30

Figure 4: Metaphorical AI by Oracle ... 46

Figure 5: Overview of Google AI Hub ... 53

List of Tables

Table 1: Democratization of AI Machine Learning Platforms ... 15

Table 2: Comparison of Endogenous and Exogenous AI ... 19


1 Introduction

In the 1980s, database management was a hobby-like activity; over the last decades its nature has changed significantly. When Russ Walter wrote one of the earlier editions of his book Secret Guide to Computers, of which more than thirty editions are available today, he described database management systems as a standard office filing cabinet, useful to individuals and small businesses alike. By 1995, in a later edition of the book, database providers had shifted their focus towards institutions. Walter recommended that his readers get older database software, because the newer packages lacked accessibility; non-technicians were largely excluded from using database software. By incorporating more database features, Excel and other spreadsheet software disrupted the field of database software, providing basic databases to everyone with a fair amount of digital literacy (Driscoll, 2012, p.21). While this was long before the age of big data, it shows clearly how media can become accessible to broader publics (Mayer-Schönberger, 2013). More importantly, it shows how specific tools can be responsible for opening technology up to more individuals and organizations - essentially, for democratizing technology.

While Russ Walter's guides stem from a time when the internet was envisioned as a democratic parallel world "where information is fully accessible to all citizens as an essential service" (Mosco, 2016, p.516), this ideal portrayal of the web has clearly changed. Even if we leave issues like the digital divide aside, it is obvious that the current architecture of the web is not as democratizing as once envisioned. Network effects, as an affordance of the web, structurally favour larger companies over small companies or private websites by an exponential factor (Bonomi, Milito, Natarajan & Zhu, 2014, p. 169). The internet is commercialized, and its structure is dominated by private organizations (Weis, 2010). Within academic and political realms, too, the discourse on power relations on the internet - the confrontation between small and medium-sized enterprises (henceforth SMEs) and quasi-monopolistic or oligopolistic technology giants - is more apparent than ever. There is a strong focus on companies that are grouped together in acronyms such as GAFAM or FAANG (GAFAM: Google, Apple, Facebook, Amazon, Microsoft; FAANG: Facebook, Apple, Amazon, Netflix, Google) and on their influence in a "network society" (Galloway, 2017; Driscoll, 2012; Mosco, 2016, p.516; Castells, 2010). Discussing the digital dominance of a handful of companies has become a popular practice among scholars (de Bustos, 2016; Casado & Miguel, 2016).

Nowadays, similar developments are happening with big data technology. In 2012, Harvard Business Review dubbed data scientist "The Sexiest Job of the 21st Century" (Davenport & Patil, 2012). Data scientists and engineers knowledgeable in artificial intelligence are accordingly difficult and expensive to hire. To counter the scarcity of people who know how to handle these huge amounts of data, democratization and industrialization are seemingly taking place in big data as well. US-based technology giants are developing SaaS (software-as-a-service) tools and platforms that promise to enable more people to work with AI. These claims promise the democratization of AI along the lines of "AI for Everyone" and "Get started with AI" (Casalaina, 2019; IBM, n.d.). Generally, the players active in the field offer subscription-based services to be implemented by other organizations with a different core business (Miller, 2018). The technology companies Google, Apple and Microsoft have also adopted the notion of democratic AI (Burkhardt, 2020, p. 210). They advertise an easy entry into applying AI without having to go deep into programming. Sudmann observes that the much-criticized Silicon Valley companies are increasingly promoting democratic values such as accessibility, participation and transparency (Sudmann, 2020, p.10). The impact is huge: SMEs can suddenly use the performance-enhancing advantages of applied AI without investing large sums in hiring expensive AI engineers (Kharkovyna, 2019): "Setting the right infrastructure to create an AI-based system is quite an investment not many companies are willing to make, despite acknowledging the benefits it could bring for their business." (Clark, 2020).

As Sultan (2013, p.813) argues, SMEs are among the main beneficiaries of cloud computing innovations, which in part include AI services. This is to say, while the big technology companies hold a lot of power, they also use their expertise to build new products which allow non-technology companies to make use of AI. Jeanne Ross, an MIT researcher, has a more critical take on the ease of distributing AI as a tool. She acknowledges the huge opportunities that come with implementable AI technologies, but views 'ready-to-use' AI technologies critically. For inexperienced companies it is easy to fantasize about a technology able to beat the best chess, Go or poker players, but in reality, Ross says, the implementation is more comparable to making enterprise resource planning (ERP) systems work. Her main argument is that the technology is needed, but even more important are employees knowledgeable enough to make it work and generate value from it (Ross, 2017). Still, large companies position themselves as enablers of AI by providing technology 'infused' with AI and selling it to the masses (Burkhardt, 2020, p.210). That is because only a select few have the power to shape AI in a way that benefits the masses (Ahmed, Mula & Dhavala, 2020). According to a Deloitte study, this focus on SMEs is far from altruistic: large technology companies focusing on SMEs achieve higher revenue, operating performance and return on invested capital (Banerjee & Openshaw, 2014, p.139). Thus, the implementation of AI in more SMEs can be seen as the realization of the high expectations placed on AI. This makes it important to understand the under-researched field of AI platforms. They slowly spread throughout the economy without getting nearly as much critical attention as the AI technologies aimed at end consumers. Especially in a field like artificial intelligence, which many consider complicated (Shapiro, 2019), it is important to research how it is being opened up to a broader variety of organizations.

In addition to this market-driven relevance, it is important to acknowledge that AI platforms operate in a field still dominated by volatility and unpredictability. Companies implement technologies that are largely black-boxed, while the resources, competencies and, potentially, the motivation to reverse-engineer those black boxes are missing (Pasquale, 2015; Kitchin, 2017). This tension between the accessibility of plug-and-play AI solutions and reflection on the implemented technologies is a critical angle from which to look at the democratization of AI. Therefore, I propose the following research question: How do AI platforms impact the democratization of artificial intelligence?


While Sudmann links democratization mainly to political democracy and democratic values - for example, the consequences of AI for the job market, surveillance technologies used on citizens, or the use of AI to generate 'fake news' (Sudmann, 2020) - I zoom in on a more specific angle of democratization. The research question is framed along the lines of 'democratization' as making accessible: "Democratization is focused on providing people with access to technical expertise" (Gartner, 2019; see also Shukla, 2019). Democratization is the action of making something accessible to the 'common masses' (Burkhardt, 2020, p. 210). In line with this conception, Newman argues in a Forbes article that "the future of digital transformation itself depends on democratization of the latest technologies" (2020). I am, therefore, focusing on the democratization of technology, which Gartner described as one of the top 10 strategic technology trends for 2020. In this work, I am interested in how AI tools influence the power of SMEs and of the technology companies offering AI SaaS tools alike.

Because the research field of democratized AI is still evolving, I propose a more explorative approach; a quantitative snapshot of the current state of AI platforms would soon be obsolete. To arrive at a sophisticated and tenable answer to the preceding research question, it must be approached from two sides, since two parties are involved: large technology companies and SMEs. Firstly, I research how the market is presented by the technology giants. To do so, I will conduct a discourse analysis of the AI platform websites. This approach allows me to achieve a better understanding of the aspirations and expectations of the technology companies. How they write about their AI platforms reveals much about how they want to position AI platforms within the AI industry as a whole.

Secondly, I am applying the "walkthrough method" (Light, Burgess & Duguay, 2018). This entails going into the platforms themselves in order to use and test them. My analysis includes ethnographic elements and is driven by the field notes I make while working on the platforms. This approach allows me to assess the feasibility of democratizing AI with AI platforms. The strength of my methodology, however, lies in the mixed-methods approach, which sheds light on a complex and large phenomenon from two different perspectives. It allows me to differentiate between perception and imaginary on the one side, and feasibility and reality on the other. I can compare the function to the rhetoric.

Overall, this thesis offers a new way to look at how organizations are both the driving force and the outcome of increasing mediatization and, more specifically, of the spread of AI (Beyes, Holt & Pias, 2019, p.1). Media are increasingly thought of "as structuring conditions configuring the very possibility of agency: they are not just objects braided with information, but also power." (Beyes, Holt & Pias, 2019, p.1). Media studies already makes a valuable contribution to understanding organization and organizational life (Beverungen, Beyes & Conrad, 2019, p. 622). This is essential, because contemporary organizations operate at the interplay of social and technological domains, and the technological side appears to be growing (Beyes, Holt & Pias, p.1). Not too long ago it was common practice to define organizations solely as social entities (Daft, Murphy and Willmott, 2010, p.10). Now, however, the social entity is directed by technology to such a degree that it becomes increasingly important to understand how technology re-organizes. Since organizations are now thought of as socio-material assemblages, digitization also guides social processes within organizations. Kittler already stressed that 'media determine organization' (as cited in Beyes, Holt & Pias, 2019, p. 1). Moreover, the growing mediatization of organizations has led to the emerging field of media and organization studies, to which this thesis now contributes. While the overall impact of media on organization is indisputable in our mediated society, my focus is more specific: I narrow the field of media and organization studies down to the high-potential product range of AI platforms.

My thesis is guided by a literature review that goes into the development of AI and the linked emerging industry. I also look at other technologies that have been democratized and shifted from exclusive availability to broader accessibility. The theory of hegemony forms the basis of the theoretical lens through which I draw conclusions from my analysis. As part of the literature review, I therefore give an overview of how hegemony is defined and used as a concept. Hegemony is also linked to AI platforms in that chapter, as it is elementary to my discussion. Thereafter, in chapter three, I go into more depth on my methodological approach. I explain how I integrated both methods and how I adapted them to my specific needs. While I present the results of the discourse analysis and the walkthrough separately in chapter four, I connect them with the theoretical framework from chapter two in the discussion (chapter five). This is also where I approach the discourse analysis from a more critical perspective. Lastly, the conclusion to my thesis follows.

2 Theoretical Framework

2.1 AI Leaving the Niche Market - A Short History of AI

The developments AI is undergoing at the moment are closely related to how AI has been implemented in commercial settings and how it is used in this context. It has progressed from a largely unknown concept to a regular term used to describe and market products or services to customers.

Long before the first AI systems were developed, humankind imagined mechanical assistants (Buchanan, 2006, p.53) and tried to build them, or at least to imitate smart systems (Sheehan, 2018, p. 140). The AI imaginary has always been around, with science fiction offering the wildest depictions of artificially created intelligent beings (Buchanan, 2006, p.53). Most likely, this is also because AI started as a technology only a few were capable of working with. Institutions like Alan Turing's laboratory in Manchester, the Moore School at Penn, Howard Aiken's laboratory at Harvard, the IBM and Bell Laboratories, and a few others pushed the development of AI in the post-World War II rise of modern computers (Buchanan, 2006, p.54). Apart from the relative complexity of AI systems, early "programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages." (Buchanan, 2006, p.56). While companies were interested and involved from the beginning, AI technology was not ready to be used at scale for commercial purposes and stayed mostly in academic realms. Researchers dealt with so-called "toy problems" - programs that solve puzzles or games (Nilsson, 2009, p. 265). Beyond that, the first practical applications of AI were rule-based expert systems that assisted the work of professionals ranging from jet pilots and bankers to surgeons (Nilsson, 2009, p. 302). All of this belongs to the age of Good Old-Fashioned Artificial Intelligence (GOFAI), a term developed by Haugeland (1985) to describe the paradigm in AI research that concerned itself with symbolic representations of problems, logic and search. The strict focus on formal rules, however, revealed several limits of GOFAI, despite impressive early successes. From a research perspective, neural networks had already replaced GOFAI in the 1980s, when multi-layered perceptrons were developed and parallel processing became viable (Dreyfus, 1992, p. 33).

Subsequent to the GOFAI paradigm, companies realized the potential of AI, and technological advancements made business applications of AI a reality. Neural networks were picked up as the staple AI technology that allowed organizations to solve real, more unstructured business problems. It took until recently for neural networks - at least in theory already superior to GOFAI in the 1980s - to become viable for large-scale business operations, due to the exponential increase in data, greater computing power, and impressive investments (Apprich, 2018, p. 30). The shift went from deductive reasoning to inductive approaches that are inherently data-driven and can only succeed in times of big data (Apprich, 2018, p. 31; see also Pasquinelli, 2017, p. 5). Leaving aside the imaginary surrounding AI, "what neural networks calculate is a form of statistical induction" (Pasquinelli, 2017, p. 9). Despite its power, AI remains within human mathematical categories. Pasquinelli even concluded that it is 'too human' to "represent per se the automation of intelligence qua invention" (Pasquinelli, 2017, p. 9). Regardless of the mystery or mathematics that eventually defines AI, some examples in which neural networks produce outperforming results are image classification (Krizhevsky, Sutskever & Hinton, 2017), machine-translation improvements (Wu et al., 2016), and self-driving cars (Tian, Pei, Jana & Ray, 2018). Today, AI technology in various forms is central to technology companies' business models. Netflix is often used as an example of a company that is truly centered around its data and algorithms (Hallinan & Striphas, 2016), and Netflix's recommendation engine is said to be valued at over $1B per year (Gomez-Uribe & Hunt, 2015, p. 7). Still, there are many more companies in which applied AI is central. Google's famous PageRank algorithm and every alteration of it rose to be a riddle for SEO (Search Engine Optimization) specialists and researchers alike (Rieder, 2012; Noble, 2018). Several comparable algorithms "brought AI out of the laboratory and into the marketplace" in the 1990s (National Research Council, 1999, p. 216). Despite the dotcom bubble that burst in 2000, more and more AI-applying companies emerged. Relatedly, investments in AI rose significantly: McKinsey estimates that established firms have spent between $18B and $27B on AI-related internal investments (Furman & Seamans, 2018, p. 165). US patent applications including the term 'artificial intelligence' in their title have also more than tripled since 2002 (Furman & Seamans, 2018, p. 168). According to Gartner, enterprise use of AI grew by 270% between 2015 and 2019 ("Gartner", 2019). This spread can be explained by the advances of AI-based systems since, as Brynjolfsson and McAfee note, AI systems spread more quickly once they surpass human performance (Brynjolfsson & McAfee, 2017, p. 6).

The notable shift here is that companies do not necessarily need to develop AI themselves to be able to apply it. Often, companies integrate plug-and-play AI solutions into already existing processes. This approach theoretically allows a discrepancy between the companies applying AI and the companies holding AI expertise, because AI projects are often viewed in isolation (Lünendonk, Godzik & Maas, 2019). To improve the application of AI, the Boston Consulting Group and MIT developed the rule of thumb of splitting organizational AI investments into 10% for algorithms, 20% for technology and 70% for business process transformation (Khodabandeh et al., 2019). That is because developing cloud-based technologies requires a substantial organizational change in the beginning. Adersberger and Siedersleben (2018) differentiate this as Mode 1 and Mode 2 development: Mode 1 is the traditional 'waterfall model', in which projects follow an assembly line from design and architecture to development, testing and deployment. Mode 2, on the other hand, is more aligned with the idea of continuous delivery and expects teams to work in parallel, largely independently and with little interaction. For companies not used to Mode 2 development, this can be an organizational challenge (Adersberger & Siedersleben, 2018, p. 714). While it seems as if AI is already integrated in virtually all processes, from consumer electronics and game analysis engines to complex industrial solutions, Brynjolfsson and McAfee provide an outlook on the future of AI that goes even further. They are convinced that most big opportunities have not yet been tapped and that the effects of AI will be magnified in the coming decades (Brynjolfsson & McAfee, 2017, p.3). The bottleneck has not been technological advancement but management, implementation, and business imagination (National Research Council, 1999, p. 218; Brynjolfsson & McAfee, 2017, p.3). This ties in with the research question of this thesis, which looks into the processes that go into a wide-ranging adoption of AI technology.

2.2 Democratizing Developments in Technology

The spread of AI explained in the previous chapter is a prime example of a technology in a democratizing process. In articles about technology, democratization is used to describe the process of bringing a specific technology or service to the crowds (Gartner, 2019; Microsoft, 2016). Since entry barriers are usually high in new technology markets (Karakaya & Stahl, 1989), technologies undergo a process of maturing to become accessible to larger mass markets. As explained by Katzan, there is an apparent shift from IT infrastructure investments in hardware, system software, applications, networks, people and other organizational assets to "on-demand" computers that one can, figuratively speaking, plug into the wall (Katzan, 2011, p.2). Such a shift results in lowered entry barriers regarding financial investments and infrastructural challenges. Often, platforms and various other technology companies provide so-called code libraries, which are usually free of charge, relatively easy to implement and powerful in their capabilities (Kobayashi, Ishibashi & Kobayashi, 2018, p. 11). They make programming less work-intensive, because developers can include code that already exists. These libraries are aimed at programmers, however, so the simplification of development remains limited to an exclusive group. Tools that aim to reach larger audiences are built with a different interface, which requires less code.
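To illustrate the kind of accessibility such libraries afford, the following minimal sketch trains and evaluates a simple classifier with scikit-learn, one widely used open-source machine learning library. The example is purely illustrative and is not taken from any of the platforms discussed in this thesis.

# Minimal illustrative sketch: a working classifier in a few lines of Python,
# assuming an environment with scikit-learn installed.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small example dataset that ships with the library.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a ready-made model and report its accuracy on held-out data.
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

Even this small example, however, presupposes familiarity with a programming environment, which is precisely the barrier that the platform tools discussed below claim to remove.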

2.2.1 Democratizing AI

The spread of self-service technology applying AI is still ongoing. In 2018, Gartner predicted that self-service analytics by non-experts would account for more analysis than that produced by professional data scientists (van der Meulen & Pettey, 2018). By now, there are countless opportunities for organizations to buy AI solutions from external providers ranging from small startups to large technology conglomerates. The four major vendors are Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and IBM Cloud. Each of them offers a range of services, such as bots, APIs and machine learning frameworks, and some of the vendors offer fully managed machine learning options. Other platforms like Salesforce or Oracle are entering the market, too. Table 1 lists a few platforms offering 'plug-and-play' AI and the corresponding libraries.

Table 1

Democratization of AI Machine Learning Platforms.

Note. Reprinted from "How will 'democratization of artificial intelligence' change the future of radiologists?" (p. 11), by Kobayashi, Ishibashi & Kobayashi, 2018. Copyright 2018 by Japan Radiological Society.

Still, the democratization of AI remains a multi-faceted problem that requires multiple stakeholders to work together. It requires advances in science, technology and policy to succeed (Ahmed, Mula & Dhavala, 2020, p. 1). To harness the full potential of AI, which is a very specialized field, a "plausible solution is to democratize the very creation, distribution, and consumption of the technology" (Ahmed, Mula & Dhavala, 2020, p. 1). While some of the concepts for democratizing AI seem too idealistic to work within capitalistic platform mechanics, multiple scholars are concerned with what it actually takes to democratize AI. First of all, AI production does not necessarily need to be part of AI use. One of the main aims of Ahmed, Mula and Dhavala is therefore to make AI portable - to separate creation and consumption (Ahmed, Mula & Dhavala, 2020, p. 1). This also requires explainability. Especially in high-stakes domains like FinTech, criminal justice or healthcare, "it is imperative that a human being would be able to interpret those decisions in the context of the situation, and explain how the agent has come to that decision" (Ahmed, Mula & Dhavala, 2020, p. 2; see also Rieder, 2020, p.11). Moreover, Katzan designed a framework of cloud computing characteristics which can also be used to outline the defining requirements of the democratization of AI. It focuses on the infrastructural demands of platform-like technologies, resulting in four key factors: necessity, reliability, usability, and scalability (Katzan, 2011, p. 2). Necessity describes the idea that organizations need to use the technology to satisfy everyday needs. Reliability is closely related and refers to the expectation that the technology is available and working whenever it is expected to be. Usability is a very important factor in the democratization of AI because it directly relates to the degree of expertise required: well-designed usability ensures that the technology is convenient to use regardless of its underlying complexity. Scalability means that sufficient resources are available to serve a growing number of users, resulting in the advantage of economies of scale. All of these aspects relate to the importance and ease of the technology. Forman's research on internet adoption identified a similar distinction between technologies that are implementable without large changes in business processes and those that require significant organizational changes (Forman, 2005, p. 642). If AI can be implemented without expecting the whole organization to change, the likelihood of adoption is much higher. With technology companies offering modular solutions, AI tools can be 'passed on' and applied in isolation.

2.2.2 Endogenous and Exogenous AI

Central to my research is this 'passing on' of AI technology. Opening up and passing on technology has become common practice in the last decades, but with regards to AI specifically, it has not yet been theorized. Therefore, I draw from other practices and disciplines to distinguish between endogenous and exogenous AI.

Endogenous AI is present when the organization developing an AI system is the only institution implementing and using that particular AI. Facebook's algorithm, for example, is solely used by Facebook and cannot be bought as a service to be implemented into other social networks. Amazon's recommendation engine cannot be implemented into other e-commerce platforms. This form of AI is the more traditional one, since it does not require collaboration between entities. Further, endogenous AI is characterized by central data production and ownership. This corresponds to the often-criticized data monopolies of, for example, Google and Facebook (Stucke, 2018). Nevertheless, endogenous AI is not defined by size or even commercial potential. An AI system like AlphaGo ("AlphaGo", n.d.), the first AI to defeat the best Go players, is an example of a non-commercial endogenous AI project.

On the other hand, exogenous AI is gaining popularity. These are AI systems that are not developed by the institution leveraging them but by external creators. The concept overlaps with Artificial-Intelligence-as-a-Service (AIaaS), although the latter term puts the service aspect at its center. The technology itself can be similar between endogenous and exogenous AI. Salesforce Einstein, for example, could be described as the exogenous counterpart to Amazon's own recommendation engine. What differentiates the two forms of AI is control over the algorithm. In exogenous AI, influence on the algorithm lies outside of the control of the organization implementing the AI solution. Moreover, the dependence remains once customers implement the AI, contrary to open source code, which is first implemented but can also become an external institution's 'own code': "If we succeed, we will disappear" (Kelty, 2008, p. 243). Open source allows free-of-charge use and unlimited modification of the code (Kelty, 2008, p. 11). Open source was a revolutionary and disruptive development because it contradicted the basic principles of capitalism, where the widespread practice is to keep source code secret in order to prevent copycats and defend control of the market (Kelty, 2008, p. 101). AI-as-a-service solutions usually follow this practice and excel at defending their proprietary AI systems. Modification is, just as in open source, possible, but only within the specific limits the provider defines. The fact that the software is "welded shut" invites arguments in different directions (Kelty, 2008, p. 199). The companies providing those services argue that the parties using them do not need to worry about the technical side of the products they are using (Godse & Mulik, 2009, p. 155). Then again, more critical voices raise concerns about the secrecy around the services and the traceability of (automated) decisions (Pasquale, 2015; Noble, 2018). Pasquale says: "The secrecy is understandable as a business strategy, but it devastates our ability to understand the social world Silicon Valley is creating." (Pasquale, 2015, p. 66). Opaque plug-and-play AI would then mean that companies implementing AI are left far behind in knowledge creation despite using the latest tools.
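To make the 'plug-and-play' character of such exogenous services concrete, the following sketch shows what consuming a hosted prediction service typically looks like from the buyer's side. The endpoint, API key and response fields are hypothetical placeholders, not the interface of any specific vendor discussed in this thesis.

# Illustrative sketch of consuming an exogenous, hosted AI service from Python.
# The URL, API key and response fields are hypothetical placeholders.
import requests

API_KEY = "YOUR-API-KEY"  # issued by the provider; grants access, not insight
ENDPOINT = "https://api.example-ai-platform.com/v1/classify"  # hypothetical

payload = {"text": "The delivery arrived two weeks late and damaged."}
response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
result = response.json()

# The buyer receives only the model's output (e.g. a label and a score);
# the model itself, its training data and its update cycle remain with the provider.
print(result.get("label"), result.get("confidence"))

The asymmetry discussed above is visible even in this toy example: the implementing organization supplies its data and receives a result, while everything between the request and the response remains under the provider's control.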

If an endogenous AI malfunctions, it is easy to find the responsible organization. An erroneous exogenous AI raises more complex questions about responsibility: Is the organization that implemented the AI responsible, or the organization that developed it? Who holds responsibility for the processing of the data? Must organizations understand the algorithms they implement in close detail? Is their limited understanding ignorance or sheer helplessness? Lastly, how many malfunctioning systems are in place and go undiscovered because the SMEs with a close relation to the data produced cannot gain insight into the data processing?

All of these questions are specific to exogenous AI, but the differences reach further than the aspect of responsibility. In Table 2, I contrast what defines endogenous and exogenous AI.


Table 2

Comparison of Endogenous and Exogenous AI.

Characteristics | Endogenous AI | Exogenous AI
Developed by | Applying organization | Another organization
Transparency | Organization has best possible transparency into the AI | Organization has limited access to the AI
Technical knowledge required | High | (Relatively) low
Specialization | High degree | Low degree
Responsibility | Applying/developing organization | Not clear (organization or provider)

2.3 Hegemony or Self-governance - The Relation Between AI Service Providers and Their Users

Many scholars have raised questions about and criticized the quasi-monopolistic or oligopolistic positions of technology giants such as the GAFAM companies. With regards to policy, data-collecting consumer platforms in particular seem to be at the center of public discussion. In Europe, the discourse around the power of single (foreign) organizations has been picked up. Sigmar Gabriel, the former German minister of foreign affairs, said in 2018 that no single European company would be capable of countering the US-American and Chinese domination ("Sigmar Gabriel auf der DIGITAL2018: 'Wir wollen keine digitale Kolonie werden'", 2018). However, it is not only a European issue; the question is much larger. Recently, the CEOs of Google, Apple, Facebook and Amazon had to testify before Congress in "the big tech hearing" (Edelman, 2020). The overshadowing question was: are those companies too powerful? While the six-hour hearing focused on pure market power and undermined competition, it is also important to think about what degree of dependency and agency small and medium enterprises have with regards to large technology companies. To understand this relation, it is important not only to look at economic metrics such as market capitalization but at the ecosystem itself and, therefore, to understand the links between stakeholders. While internet infrastructure simply favors quasi-monopolies, substitution or consolidation of those is, in general, possible. With companies like Spotify or Booking.com, both interestingly European companies, one can easily imagine a competitor taking over the music streaming or accommodation booking market. GAFAM-like companies, however, are "systemically important digital platforms" or SIDPs (Kitsing, 2020, p.6). These are comparable to systemically important banks in that they provide an economic infrastructure. Kitsing differentiates SIDPs from dominant non-SIDPs, amongst other factors, by the impact they have on other businesses. For Google, Amazon (Web Services) and Facebook this is the case, since these companies make the majority of their income from B2B transactions and other companies rely on the dominant players for their marketing efforts or server infrastructure. Apple, however, is in that sense not a SIDP because it is aimed at private customers. Looking at the characteristic of impact on other companies, it is evident that B2B (business-to-business) platforms are the more critical ones to observe. In contrast to registering on a music streaming or hotel booking platform as an individual, contracting with an enterprise resource platform or customer relationship management tool can be a significant financial investment and long-term commitment. Naturally, the impact B2B companies have on other businesses is higher than the impact of B2C (business-to-consumer) companies. This implies that companies have an intrinsic and economically driven motivation to preserve the dominant positions of their suppliers. Private end consumers profit from disruptive innovations that surpass older solutions, because they are offered a better solution and the costs of switching from one service to another are very low. Companies, however, potentially stick with dominant solutions even when challenger companies outdo them. This phenomenon can be described with the cultural-scientific concept of hegemony as a "distribution of social power which has the force of culture behind it" (Feenberg, 2008, p. 309). The concept of hegemony fits because it describes dominance through consent as much as coercion (Lash, 2007, p. 54).

Originally developed by the Marxist scholar Antonio Gramsci, hegemony is a form of domination that requires support by the dominated class (Gramsci, 2000). Therefore, if I were to ask a company using AI software provided by a large technology firm, it is likely that it has not yet reflected on the hegemony that could be involved in that use. The power relation is manifested from within the digital ecosystem, by suppliers of AI technology and their buyers alike. Even though many European politicians commonly criticize the dominant economic power that companies like Amazon or Google hold (Johnsen, Lacoste & Meehan, 2020, p. 63), they rarely look beyond the economic metrics and deeply into how companies are already networked within their own economy. Still, discourse around the hegemony of large technology companies can potentially undermine their power, since hegemony is most powerful when it convinces the less powerful not to question the existing system: "Hegemony at its most effective is mute" (Comaroff & Comaroff, 1991, p. 24). Interestingly, the dynamics of business relations seem to differ between 'traditional' suppliers and buyers and digital B2B supply and demand. Whilst Johnsen, Lacoste & Meehan (2020, p. 64) argue that firm size acts as a proxy for asymmetric power favoring larger producers over smaller suppliers, the suppliers of technology solutions are instead the larger corporations and, therefore, power shifts towards suppliers in digital realms. Smaller companies consent to participating in hegemonic networks without enjoying their "fair share" or having influence over the "rules of the game" (Levy, 2008, p. 952). Another peculiarity of the relationship between technology service providers and their users is the prima facie liberating possibility of self-service. However, this takes away the space for resistance that is usually available within contracting between supplier and buyer, regardless of who is in the hegemonic position (Johnsen, Lacoste & Meehan, 2020, p. 65; Levy, 2008, p. 952). While contracting gives at least an illusion of power to the weaker party, self-service is not negotiable, and thus full consent is given with the purchase of a service, regardless of internal discussions. This is to say, self-service is another critical dimension of sustaining a hegemonic system by preventing discussion around its conditions. Self-service thereby undermines agonism, defined as the importance of conflict. Since the parties sourcing the AI platform are silenced, (valuable) conflict cannot properly arise in the first place.


Figure 1

Hegemony Triangle by Johnsen, Lacoste and Meehan (2020, p.69).

In line with the hegemony triangle recently developed by Johnsen, Lacoste and Meehan (2020, p. 69) in the context of marketing management (see Figure 1), I support the notion that hegemony in customer-supplier relationships goes beyond ideology anchored in consensus. Dominance, authority and mastery, as described by Johnsen, Lacoste and Meehan, are characteristics of hegemonic orders to consider (Johnsen, Lacoste & Meehan, 2020, p. 69). Dominance is obviously apparent in quasi-monopolistic markets, since it is defined as the dominant actor having a higher position, allowing it to be more noticeable in (inter)actions and more attractive to relationship counterparts than potential alternatives. Choosing a solution that is not dominant on the market often means making compromises with regards to functionality. Authority, as the second corner of the hegemonic triangle, describes the right to give orders, make decisions independently or enforce obedience. Technology providers exercise this through the creation of ecosystems that are implemented and constantly updated, but not easily stepped out of. Once an SME chooses to work with SAP, for example, there is a very high likelihood that it will not quit SAP as its enterprise resource platform, even when SAP develops in a direction that does not benefit the SME. Authority is heavily skewed towards the technology platforms because customers are locked into the ecosystems. The third corner is mastery, describing the superiority of one party over its counterpart in a specific domain. It is imposed on the customers by large technology companies. There is a possibility of mastery-stagnation in more traditional customer-supplier relationships (Johnsen, Lacoste & Meehan, 2020, p. 70), which can also be seen in in-housing or backsourcing, a process of companies moving work that was previously done by agencies - suppliers in a sense - back inside the company (Monllos, 2019; Jesse, 2020). Overall, mastery is what makes the weaker parties accept their inferiority to the stronger parties. Especially with high technology like AI, which smaller companies do not have the resources to develop themselves, it is becoming the norm to source technology from suppliers and not challenge the providers. Moreover, only the largest technology companies will be able to provide the required computational resources, hardware and R&D investments to create global oligopolistic AI standards. They provide an AI "hose" to the companies buying the services, allowing them to "parameterize their needs, train remote state-of-the-art processors, and access the results through cloud services, while paying for the time and computing capacity used." (Ferrás, 2018). Therefore, they are entering so-called "trading zones" (Collins, Evans & Gorman, 2007). The term was introduced by Galison to resolve the problem of incommensurability between paradigms. While the concept was developed to better understand science, its potential is much more general. Collins et al. define "'trading zones' as locations in which communities with a deep problem of communication manage to communicate" (Collins, Evans & Gorman, 2007, p. 658). The issue of communication is what distinguishes trading zones from traditional trade. The trading zones themselves can be differentiated along two axes: the extent to which power is exercised to enforce trade, and the extent to which the trade leads to new homogeneous or heterogeneous cultures. Figure 2 below shows the four ideal-typical forms of trading zones at the ends of each axis in the matrix.


Figure 2

A General Model of Trading Zones.

Note. Reprinted from Trading zones and interactional expertise (p. 659), by Collins et al., 2007. Copyright 2007 by Collins et al.

The collaboration between AI platforms and their users takes place through coercion. The intention to collaborate seems mutual at first sight, because the organizations sourcing AI are actively seeking to implement the technology. However, the coercion lies in the narrative that is being built up around AI: it makes organizations think they must implement AI in some way to stay competitive. This is to say, enforcement is not necessarily brute force; it can also be exercised legally, institutionally and economically (Collins, Evans & Gorman, 2007, p. 659). The conditions of this collaboration are not established on common ground. The knowledge of the elite group - here, the AI platform - is black-boxed, and access to it by non-elites is tightly monitored by the elite. At the same time, the elite group will not invest many resources in gaining access to the expertise of the non-elites (Collins, Evans & Gorman, 2007, p. 659). This is true for the relation between AI platforms and their users, because the users are usually active in specialist areas that are irrelevant to the AI platform providers.


Additionally, organizations sourcing AI solutions will potentially not have much choice between different vendors. Analogous to the Ford Model T (automobiles), the Remington typewriter, the IBM PC (personal computers), or Google (search engine), a dominant design of AI technology can be expected to emerge, developed by one of the technology giants (Ferrás, 2018). Foucault's argument here is that one should not attack the technology giant per se but the system of power. Supplying AI platform tools puts smaller companies in a position where they can do nothing more than query the technology giants. This is a struggle of subjectification, because companies sourcing plug-and-play AI tools are placed into the position of AI non-experts, and that position is difficult to escape (Foucault, 1982, p. 781). It thus becomes increasingly unrealistic for smaller companies to become challengers themselves.

The one-way mirror that Pasquale (2015, p. 9) uses as an analogy to contrast the protective mechanisms available to single individuals and to corporations can also be used to understand the relation between SMEs and the AI platforms. While SMEs do not have any insight into the algorithms they implement, the AI platforms have total access to the data feeding the algorithm and, therefore, access to the source of the customers' knowledge.

2.4 AI Without a Clue? Organizational Digital Divide in AI Implementation

To understand how and why companies make decisions about the implementation of AI, it is important to research the present digital divide between organizations with regards to AI technology. While the digital divide is usually used as a concept to research societal differences and the resulting challenges (Muschert & Ragnedda, 2015), it is used in this context to understand the environment of companies that apply AI. The digital divide is understood in this thesis as a broad framework to analyze the underlying discrepancy, not at an individual but at a firm level. The digital divide is rarely used in an organizational context, but a handful of papers prove its applicability, such as Forman's research on companies' internet adoption decisions in 1998 (Forman, 2005). Of course, the focus of my work lies on AI, even though the digital divide is much more encompassing than AI, which is only one specialized field of modern digital business environments.

The digital divide can easily be identified by looking at how AI technology is adopted. Now that large companies regularly use AI technologies, more studies and articles deal with AI applications for the SMEs that have not found applications for AI yet ("Industrial applications of artificial intelligence and big data - Internal Market, Industry, Entrepreneurship and SMEs", n.d.; Fraunhofer IPT, 2019). Without doubt, the platforms providing AI technology can be classified as experts in AI. They have the human and financial resources to invest in new technologies and to stay up to date with developments in artificial intelligence. Looking at the other side of the digital divide, that is, the organizations that implement plug-and-play AI, it is not as easy to classify them holistically. This is due to the heterogeneity of the companies potentially using AI technology. Also, AI is still at an early, experimental stage outside the technology sector (Bughin et al., 2017, p. 6). Smaller firms especially did not progress beyond the pilot phase because the results were poor or uncertain (Bughin et al., 2017, p. 13), if they even entered the pilot phase. According to a study conducted by the German Economic Institute, 71% of companies neither use AI nor plan to use AI in their companies in the future. Another 19% plan to use AI in the future and only around 10% use it already. Of those, 6% use AI products they bought from external providers, only 3% developed AI themselves, and 1% are invested in AI companies ("Angst vor dem Unbekannten", 2019, p. 2). Another study, conducted by McKinsey & Co. on a European level in 2017, already reports an adoption rate of 20% (Bughin et al., 2017, p. 5). While both numbers seem relatively low at first, the growth is impressive. Other studies showed that only 2% used AI in 2017, and only 5% in 2018 ("Angst vor dem Unbekannten", 2019). This is to show that even though I only look at roughly 10% of companies in my study, the number is likely to increase quickly in the coming years. Oliver Thomas, professor of business informatics, says the current adoption rate does not matter: comparable to making popcorn, nothing happens for a long time; then, after around two minutes, the first few kernels pop, and a few seconds later almost every kernel has popped ("Angst vor dem Unbekannten", 2019, p. 4).


Numerous studies position AI as the next so-called 'General Purpose Technology' (GPT). GPTs are key technologies that drive generations of technological and economic progress (Bresnahan & Trajtenberg, 1992, p. 1). With constant improvement of the technology, GPTs spread throughout the economy (Bresnahan & Trajtenberg, 1992, p. 22). Twenty-six years after Bresnahan and Trajtenberg first developed the concept of the GPT, Trajtenberg wrote about AI as the next GPT, ranking it alongside the invention of the steam engine and electricity (Trajtenberg, 2018, p. 2). AI becoming a GPT would suggest a gradual dissolution of the AI-related digital divide, since GPTs are characterized by pervasiveness. However, there is a long way to go, considering that the majority of companies still neither use nor intend to use AI.

In this study, I am interested in the first movers among the companies buying plug-and-play AI. While everyone is talking about AI as an elementary technology, it is more of an exception in current daily business ("Angst vor dem Unbekannten", 2019, p. 2). As Pickup says, "there is a huge 'long tail' of companies that you would call 'late tech adopters'" (Pickup, 2017). A lot of companies do not know what is possible, nor do they have the expertise. Therefore, the companies just starting to use AI are guiding the way for other companies. The question raised by the McKinsey report summarizes the current organizational digital divide: "Artificial Intelligence is getting ready for business, but are businesses ready for AI?" (Bughin et al., 2017, p. 6). This is in line with Forman, who pointed out that "compatibility with existing practices, values, or norms can be a key factor in the decision to adopt new innovations" (Forman, 2005, p. 642). Read more critically, this means that new and disruptive innovations, regardless of feasibility and potential, face greater obstacles to adoption. More than that, the e-business systems of most enterprises consist of multiple heterogeneous systems, which are based on different platforms and function in distinct departments (Xian, 2011). Organizations are well aware of the added complexity that comes with every larger system they implement. Consequently, the potential they see in AI must outweigh the inconvenience they take on by implementing a new AI platform.

Overall, a narrowing of the digital divide with regards to AI technology between SMEs and larger corporations or technology firms cannot be observed yet. Since technological developments build upon previous ones, companies that already invest in related technologies such as cloud computing and big data are leading AI adoption, and it remains hard for other companies to catch up (Bughin et al., 2017, p. 14).

2.5 WYSIWYG? AI as a Marketing Buzzword

After the last chapter attached numbers to companies' plans to implement or develop AI themselves, this chapter challenges AI as a substantial term. Nowadays, it is used to signal the 'smartness' of an organization. Here, smartness refers to "computationally and digitally managed systems, from electrical grids to building management systems, that can learn and, in theory, adapt by analyzing data about themselves." (Halpern, Mitchell & Geoghegan, 2017, p. 115). Looking at this definition of smartness and at common colloquial perceptions of AI, the concepts are almost interchangeable.

The promises about computation, including their assumptions and goals, summarized as "the smartness mandate", have led to a sheer desire for smartness. Artificial intelligence, as 'intelligence' suggests, signifies the perceived epicenter of smartness. Linking back to the age-old AI imaginary discussed in chapter 2.1, smartness is also both reality and imaginary (Halpern, Mitchell & Geoghegan, 2017, p. 125). However, not everything promising smartness in the form of AI actually delivers it - there is no 'what you see is what you get' (WYSIWYG). Instead, 'the smartness mandate' is often realized as nothing more than a promotional claim, as a study by the venture capital firm MMC showed: approximately 40% of Europe's AI startups do not make use of AI in their core functions (Kelnar, 2019, p. 99). AI has grown into a buzzword in recent years, but it is often used incorrectly (Mahase, 2019, p. 1). AI has its roots in basic statistics, and the distinction between AI and non-AI statistical methods is not clear-cut for many (Kelleher & Tierney, 2018, pp. 12 - 15). Therefore, oversight of the tools is important to ensure the safety and effectiveness of applications claiming to use AI (Mahase, 2019, p. 1). It is important to recognize the underlying technologies hiding behind terms like AI in order to understand the organizational processes required to implement the tools and their potential upsides. For the platform companies, however, a vague understanding of what AI really includes can also be beneficial. IBM's Watson was criticized for raising unrealistic expectations with regards to AI. "Watson is a joke", says Palihapitiya, an influential tech investor. Most of the criticism of Watson is not aimed at any specific technological flaw, but at the way Watson is presented (Freedman, 2017). Watson is cleverly staged as one single machine, as a supercomputer. Watson's 2011 success in the TV show Jeopardy! presented Watson as an uber-intelligent, human-like operation ("Watson and the Jeopardy! Challenge", 2013). Realistically, it is an ecosystem consisting of different AI-related services and processes (Spohn, 2018). The expectations that build up are influenced by one's prior knowledge of AI, since many B2B applications are developed behind closed doors in order to protect technological innovations from competitors. To some extent, this applies to the other AI platforms as well. The expectation management of AI solutions must be prioritized and not exploited when working together with other organizations that have significantly less expertise in the field. An AI chatbot builder (e.g. SAP Conversational AI) is on a different level than an AI-infused dashboard (Beverungen, 2019) or Salesforce's Einstein recommendation engine, despite all of these products claiming to utilize AI. When researching technologies that are positioned close to AI, it is therefore important to pay attention to the underlying technologies.

A more general approach to understanding how hype builds up and fades is the Gartner Hype Cycle (Fenn & Blosch, 2018). It offers a snapshot of the perceived value of innovations rather than their actual value. It works as a framework for organizations to decide when to adopt a technology and when it is merely overhyped. One of the remarkable theses of the hype cycle is that the majority of innovations go through a phase of overenthusiasm and disillusionment; productivity is only reached once the hype plateaus. Especially in the earlier phases of the hype cycle, where uncertainty outweighs the maturity of an innovation, the hype cycle is a more qualitative tool focused on hype levels and market expectations. Later on, when information about maturity, performance and adoption becomes available, hype levels play a lesser role in determining an innovation's position on the graph. The traps organizations want to bypass by using the hype cycle are adopting too early, adopting too late, giving up too soon and hanging on too long. Overall, AI as a topic is too large to be condensed into one bullet point. This holds for AI in general, since Gartner already views it as a lever for new emerging technologies rather than as an emerging technology itself. In their 2020 release on current positions on the hype cycle, twelve of thirty innovations are directly linked to AI and data science. While plug-and-play AI or AI-as-a-service are not mentioned, the technologies of explainable AI, embedded AI and AI-augmented development are closely related (Gartner Inc, 2020). It is also in an AI-specific hype cycle for 2020 that democratization and industrialization of AI are identified as the two emerging megatrends (Goasduff, 2020).

Figure 3

Hype Cycle by Gartner, 2020 release.

The hype cycle as a standalone tool is useful for getting a comprehensive overview of how markets will develop with new innovations. However, to make it more operational, it is useful to complement it with Gartner’s priority matrix (Fenn & Blosch, 2018), because hype in general is poor, even harmful, decision-making input. With the priority matrix, organizations can assess an innovation’s impact on their business. The impact is classified into four categories. The first is “low”, which covers slightly improved processes, such as better UX, that are difficult to translate into increased profits or revenue. “Moderate” provides incremental innovation to existing processes, leading to improved revenue or profit. Then comes “high” impact, classified as an enabler of new ways of performing processes and leading to significant improvements in revenue or cost savings. The biggest listed impact is “transformational”: it takes a more holistic approach and is defined as enabling new ways of doing business overall, leading to major shifts in industry dynamics (Fenn & Blosch, 2018). Because priorities are set beforehand, even the lowest category already implies an improvement; considering all conceivable innovations, most would not be suitable for a given organization at all and would therefore produce neither “low” impact nor anything higher. For my research, the relation between hype cycle position and assigned priority is of greatest interest, because it elucidates the net effect of AI hype.
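To illustrate how the two tools could be combined in practice, the following sketch (my own hypothetical encoding, not an official Gartner artifact) represents the four impact categories and derives a rough adoption priority from an assumed impact level and an estimated number of years until mainstream adoption:

```python
# Hypothetical encoding of the combined hype-cycle / priority-matrix logic
# (illustration only, not an official Gartner tool): impact level plus the
# estimated years until mainstream adoption yield a coarse adoption priority.
IMPACT_LEVELS = ["low", "moderate", "high", "transformational"]

def adoption_priority(impact: str, years_to_adoption: float) -> str:
    """Return a rough priority label for an innovation."""
    rank = IMPACT_LEVELS.index(impact)  # 0 (low) ... 3 (transformational)
    if rank >= 2 and years_to_adoption <= 2:
        return "adopt now"
    if rank >= 2 or years_to_adoption <= 2:
        return "pilot and monitor closely"
    return "wait and reassess later"

# Example: an AI feature promising "high" impact but still years from maturity.
print(adoption_priority("high", 7))  # -> "pilot and monitor closely"
```

The thresholds chosen here are arbitrary; the point is only that hype level and expected impact enter the adoption decision as separate inputs.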

3 Method

As already outlined in the theoretical framework, the overarching research question cannot be answered by looking only at the AI platforms that profit from the narrative, nor can it be answered by researching only the use of AI platforms. AI narrative and AI use are interrelated, and thus the methodological approach is twofold: it is important to look at both parties of the ecosystem, namely the SMEs seeking to adopt AI technology and the platform companies providing the technology. To be able to research both sides, two different methodological approaches are required. Firstly, I will explore the narrative that AI-providing platform companies construct around the age of AI. To do so, I will conduct a hypermodal discourse analysis, covering the websites and other information those companies publish to promote the technology. I also look at independent discourse about the companies’ solutions.

Secondly, I want to better understand how the narrative is reflected, or missed, in the AI platforms themselves. Originally, a mixed-methods approach was selected to put the narrative to the test: I intended to talk to SME decision-makers who work with large AI platforms. This approach would have allowed me to follow SMEs’ decision-making processes and to understand what role marketing and the overall AI hype play in their application of AI. However, the field of AI-as-a-service is too small and isolated to identify relevant SMEs. Since those AI solutions operate in the backend, few companies publish the information that they use an external AI platform. Although IBM Watson was reportedly used by 700 organizations as early as 2016 (“IBM Watson kommt schon in 700 Firmen zum Einsatz” [IBM Watson is already in use at 700 companies], 2016), it is nearly impossible to sift through this intransparent field. The current Covid-19 crisis made it even harder to find interviewees, because there are no larger gatherings, such as exhibitions, at which to approach people. Additionally, many small companies are in a significantly less comfortable position right now and, understandably, give expert interviews a low priority.

One potential entry point is the websites of the AI platforms themselves, which do publish about AI-applying SMEs. However, these accounts are likely to be biased, since they are published as customer success stories. The companies were also hesitant to respond to my research inquiries. Instead of asking organizations to reconstruct their process of AI implementation, I therefore collect first-hand experiences with the platforms, using the walkthrough method (Light, Burgess & Duguay, 2018) to research how the platforms work in practice. While the hypermodal discourse analysis looks at the exercise of power from a top-down perspective, this approach gives me insight into resistance or, in line with the theory of hegemony, consensus from the subordinate group.

3.1 Hypermodal Critical Discourse Analysis (Method)

3.1.1 Choice of the Methodological Approach

In the most basic sense, discourse analysis seeks to understand what makes text and speech meaningful and coherent (Krippendorf, 2011, p. 2). Krippendorf refers to dictionaries to define discourse as a large body of text or a prolonged monologue (Krippendorf, 2011, p. 2). As a method, it encompasses critical discourse analysis and multimodal or hypermodal discourse analysis, variations I will integrate into this thesis. Overall, discourse analysis will be used to study the hype and the platforms’ positioning; the multimodal and hypermodal dimensions help to capture the unruly, still largely uncultivated field of AI communication.

While the beginnings of discourse analysis are situated mostly in linguistic disciplines and therefore carry a strong focus on language, the multimodality of our lives is increasingly reflected in modern research through more multimodal approaches. Norris realized that a focus on spoken language limited her research. Consequently, she developed a more holistic, practice-based mode of investigation which takes into account that different modes of communication are interwoven (Norris, 2004, p. 102). I push multimodality to its edge here by looking at hypermodality, the conflation of hypertext and multimodality, which adds a new level of complexity to discourse analysis by combining language and visual communication with the manifold paths a reader can take by clicking on further links. Analyzing how AI platforms position themselves visually enriches my study with a semiotic dimension. Lemke (2002), furthermore, developed the associated scheme for the analysis of composite verbal–visual meanings on which I will base my analysis. As with other qualitative methods, this is not a step-by-step procedure but remains flexible towards the peculiarities of the research question. It is important to use a distinct framework suitable for hypertext media, one that takes readers’ consumption behavior into account. According to Lemke, hypertext information cannot be consumed in its entirety like a linear text or a spoken monologue (Lemke, 2002, p. 301), and the links between the texts do not prescribe any specific reading sequence. Text and visual media play an essential role in producing lay knowledge of topics that are not widely accessible.

Another variation of discourse analysis that I will utilize in this thesis is critical discourse analysis (CDA). The ‘critical’ adds the "existence of power relations, the maintenance of hegemonic structures, racism, inequalities, and ideologies in the environment in which the analyzed body of text was found" (Krippendorf, 2011) to the analysis, so that the discourse is examined together with its corresponding power relations. The critique should not be ad hoc or individual, but general, structural and focused on the involved parties (Van Dijk, 1993, p. 253). To research hegemony and power relations, it is, beyond pure discourse, representation as a whole that takes on a politically more important role. Fairclough raises the questions "whose representations are these, who gains what from them, what social relations do they draw people into, what are their ideological effects, and what alternative representations are there?" (Fairclough, 1999, p. 75).

I integrate the two approaches of hypermodal discourse analysis and CDA to cover the variety and peculiarities of the media while keeping the critical lens required to research corporately crafted content. They build on each other, with hypermodal discourse analysis covering the breadth of the content and CDA covering its depth.

Considering that the website texts are corporately authored rather than written by single individuals reinforces the need to look closely at the websites of AI platforms, since carefully crafted strategic communication presumably underlies every decision behind the information presented there. Discourse is not as ungoverned and freewheeling as everyday conversation and is presumed to include more detail and more specific formulations (Krippendorf, 2011, p. 3). Krippendorf defines discourse as drawing on regulations evolving from within, which limit what can be said to whom. Even though he is not explicitly writing about corporate texts, this is especially relevant in instances where organizations are responsible for the published artifacts and where norms exist beyond staying true to one’s own identity, in the form of corporate identity and communication guidelines.

3.1.2 Selection of Semiotic Resources and Platforms

Having established hypermodal critical discourse analysis as the best fit for answering the sub-research question of this thesis (How do AI platforms position themselves in the democratization of AI?), I now select, in a first step, a range of AI platforms ostensibly promising the democratization of AI as objects of my research. The second step is to narrow the semiotic resources down to a degree that is manageable yet still theoretically saturating.

Different criteria guide my selection of AI platforms for analysis. Firstly, the platforms should suggest and communicate a simplification of AI use and thus the potential to open AI to non-experts. Aligned with this first criterion, the tools should in principle be usable by SMEs. They should be broadly general-purpose and not tailored to a specific industry or market; offering additional industry-specific AI solutions, however, does not lead to disqualification. The size of the investigated AI platforms is also important: I am looking at companies in a position to influence the dissemination of AI, which requires a leading market position. While no one can foresee the rapid developments of AI-related market disruptions, the large technology companies that exist now are among those most likely to play a role in the spread of AI. There are smaller companies aiming to democratize AI, such as Petuum or H2O.ai (“Petuum | AI for all”, n.d.; “Open Source Leader in AI and ML”, n.d.). However, since this thesis raises questions of market influence and hegemonic power, I will not cover those companies in close detail. Based on the criteria described above, I selected seven organizations and their associated AI platforms, as presented in Table 3.


Table 3

Overview of selected websites of AI platforms.

AI Platform | Token | Website
Amazon AWS AI | AWS | https://aws.amazon.com/de/ai/
Google AI Solutions | Google | https://cloud.google.com/solutions/ai
Google AI and Machine Learning products | Google | https://cloud.google.com/products/ai
IBM Watson | IBM | https://www.ibm.com/watson
Microsoft AI | MS | https://www.microsoft.com/en-us/ai
Microsoft Azure AI | MS | https://azure.microsoft.com/en-us/overview/ai-platform/
Oracle Artificial Intelligence | O | https://www.oracle.com/artificial-intelligence/
Salesforce Einstein | SF | https://www.salesforce.com/eu/products/einstein/overview/
SAP AI Business Services | SAP | https://www.sap.com/products/intelligent-technologies/artificial-intelligence.html
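As a side note on reproducibility, the state of the pages listed in Table 3 at the time of analysis could be preserved with a short archiving script. The following minimal sketch is a hypothetical helper of my own (the use of the requests and BeautifulSoup libraries, the file names and the URL subset are assumptions, not part of the platforms’ offerings); it merely stores the visible text of each page for later reference:

```python
# Hypothetical helper (not part of the analysis itself, which is qualitative):
# downloads and stores the visible text of each analyzed page so that the
# state of the corpus at the time of analysis stays documented.
import requests
from bs4 import BeautifulSoup

PAGES = {
    "AWS": "https://aws.amazon.com/de/ai/",
    "IBM": "https://www.ibm.com/watson",
    # ... remaining URLs from Table 3
}

for token, url in PAGES.items():
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)
    with open(f"corpus_{token}.txt", "w", encoding="utf-8") as f:
        f.write(text)
```

Such a script would only serve documentation purposes; it cannot replace the close reading of the hypermodal material described above.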
