

5. Discussion

5.2 Theoretical and managerial implications

In this research, a connection has been made between the widely used acceptance models of Venkatesh (2003) and Ajzen (1991) and the more recent variant of Kim (2007). By combining these models, a model that unites the advantages of both was examined for the first time.

The strongest result emerging from this study is the relationship between Performance Expectancy and the dependent variable Behavioural Intentions. This result can be put to use in practice immediately and is therefore very valuable. Because it has been established that familiarity leads to earlier acceptance of AI, similar AI applications could be linked during implementation. Simply put, by giving examples and sharing previous positive experiences with AI from a consumer perspective, a link can be made with AI in the company, thereby increasing the willingness to use it. The software is then no longer a vague, abstract concept but becomes a recognisable functionality that was already well received.

This can be applied by linking use from the consumer perspective to the workplace, but also by building on an existing AI application within the company that has been experienced positively. In this way, AI is introduced in a positive light: an association is made with a positive experience, which reduces resistance to using AI in the work environment.

Building further on this subject, the next step in the implementation should be explainability. As indicated earlier, increasing comprehensibility reduces the importance of performance expectancy. For this reason, when implementing AI it is essential to start by linking to familiarity in order to get the most out of it; the next focus should be the explainability of the specific AI application. This study shows that as the explainability of AI grows, the intention to use it grows as well.
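To make this pattern concrete, the two findings can be read together as a moderated relationship: explainability has a positive direct effect on intention, while at the same time weakening the weight of performance expectancy. A minimal schematic form, assuming a linear model with an interaction term (the symbols below are illustrative labels, not coefficients estimated in this study):

BI = β₀ + β₁·PE + β₂·EX + β₃·(PE × EX) + ε,  with β₁ > 0, β₂ > 0 and β₃ < 0,

where BI is Behavioural Intention, PE is Performance Expectancy and EX is Explainability. A positive β₂ captures the direct effect of explainability on intention, while a negative interaction term β₃ expresses that performance expectancy matters less as explainability increases.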

The explainability of AI itself directly leads to greater acceptance. This is a very clear signal to both academia and industry: it further confirms the current XAI trend, and it also provides direct tools for companies to ensure that employees properly understand AI. An obvious approach is to train employees in this subject. However, letting them think along and help design the software in a phase prior to implementation could generate further interest and inspire employees, in addition to contributing to acceptance. This is something to develop further in follow-up research.

Another direct piece of advice this study offers companies is to deploy their own employees: a relatively simple concept that can lead to acceptance. As discussed earlier, a positive view of AI in one's environment makes the use of AI more likely. Since the environment outside the company is more difficult to influence, a first step can be the deployment of middle management. When management radiates a positive attitude towards AI, employees will be more inclined to accept the software. How can middle management be brought on board?

A direct link can be made with explainability, as discussed above. Another angle is the deployment of colleagues with positive experiences of AI. It is not uncommon for news to be discussed at the company's gossip spot, the coffee corner; by making employees enthusiastic about AI, the intention to use it could spread further in this way. Giving employees the opportunity to ask questions of colleagues who already have some experience with AI applications could also be an option. This is a good starting point for further developing a theoretical basis for this topic.

Influencing the environment outside the company in the field of AI is more complicated. Companies can be as open as possible about their use of AI without revealing a possible competitive advantage. If more companies do this, it will become increasingly clear what their intentions with AI are, and uncertainties around ethical issues can be identified. How to apply this, and the exact effect of such a strategy, will have to be investigated further.

An interesting gap that this study has tried to bridge is the factor of Job Security. This is a variable that could play a role in the acceptance of software and its implementation. No significant result was found in this study, which may be due to the skewed distribution of respondents with and without a permanent contract; there are still opportunities for further research here. The combination of the Explainability and Understandability of AI has also allowed this research to look from a different perspective than the usual paths. Understanding and fathoming AI is a topic that seems to be becoming increasingly relevant and should therefore certainly be investigated further.

This research offers several starting points for the academic world and practical handles for the business community. For example, applying the outcomes mentioned above can reduce the fear surrounding AI; in particular, the added value that the software can offer becomes visible. The need of the new generation to 'job hop' (Khatri et al., 2001) can be met by training employees broadly in this area, and because of the diversity that the software offers, AI is constantly learning and changing. AI can also help relieve the labour-market shortage with which the Western world is struggling (Acemoglu & Restrepo, 2017; de Meijer et al., 2013). Support can be provided in health care, an urgent need that the COVID-19 crisis has made painfully clear. The results this research has already produced could be put into practice immediately.
