
The perceived impact of Big Data analytics on the behavior of auditors in the financial audit field: A case study

Name: Daan de Boer
Student number: 10346600
Date: 26-06-2017
Word count: 19628
Thesis supervisor: Prof. dr. B.G.D. (Brendan) O'Dwyer
MSc Accountancy & Control, specialization Accountancy
Amsterdam Business School


This document is written by Daan de Boer, who declares to take full responsibility for its contents.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of the completion of the work, not for its contents.


Abstract

The literature on data analytics in the financial audit field is limited. Several reasons could explain this: a lack of standards on the topic, firms that are not ready to implement Big Data analytics, the risk of litigation, or firms refusing to disclose information about their data analytics because it is part of their competitive advantage. This research contributes to the prior literature by addressing this gap and by exploring which of these potential reasons applies. The research takes place in a unique case setting: the implementation of a new data analytics tool for journal entry testing at a Dutch Big Four accounting firm. The study analyses the experiences of auditors and data analysts using the framework of formalization of Adler and Borys (1996). Twelve interviewees from various departments of the firm participated, and the interviews were conducted on a semi-structured basis. Overall, I find that the new tool has an enabling effect on how data analytics is operationalized and experienced by auditors in the financial audit field.


Contents

1. Introduction
2. Literature
2.1. Big data analytics
2.2. Data analytics and auditor behavior
2.3. Improvements by data analytics
2.4. Limitations of data analytics
2.5. Standards for data analytics
3. Theory
3.1. Framework of Formalization
3.1.1. Repair
3.1.2. Internal Transparency
3.1.3. Global Transparency
3.1.4. Flexibility
4. Research methodology
4.1. Research Setting
4.2. Method
4.3. Sample
4.4. Data Analysis
5. Case Context
6. Case Analysis
6.1. Repair
6.2. Internal Transparency
6.3. Global Transparency
6.4. Flexibility
7. Discussion
7.1. Repair
7.2. Internal Transparency
7.3. Global Transparency
7.4. Flexibility
8. Conclusion
8.1. Conclusion
8.2. Limitations and Potential future research
References
Appendix A: Interview details
Appendix B: Process of the Journal Entry testing tool


1. Introduction

Society is addicted to analyzing various sources of data, because it makes our lives easier (Röell, 2017). That is why data analytics are becoming more and more embedded in our way of living. Systems using data analytics have access to a broad spectrum of new data types, as video, audio and textual information can be analyzed (Warren, Morfitt & Byrnes, 2015). These new forms of data are referred to as Big Data and have considerable implications for the financial audit field. Big Data is a term used to describe datasets that are too large and complex to comprehend using standard tools (Warren, Morfitt & Byrnes, 2015). McAfee and Brynjolfsson (2012) characterize Big Data by the three Vs: volume, velocity and variety. In other words, Big Data is about size, speed of data generation and spread across various sources of data.

Across various industries, data analytics are applied to find more effective ways to analyze Big Data for various predictions and analyses (Cao, Chychyla & Stewart, 2015). A Big Four firm (2015) describes the potential benefits of Big Data across three themes: better insight, greater efficiency, and improved coordination of risk and compliance activities. Similarly, Cao, Chychyla and Stewart (2015) describe the beneficial use of Big Data analytics in stock price prediction, crime scene prediction, and investigations of fraud or errors by the Securities and Exchange Commission (SEC). These benefits could potentially be applied in the financial audit field. Data sources used to predict stock prices could be used to assess the overall financial state of a potential client. Crime scene data could help with assessing fraud risks1 and the history of the client. The SEC uses market information and social media to identify high-risk activities; the same approach could be applied to financial audit clients. These are only a few of the potential applications according to Cao et al. (2015). Another major implication for the financial audit field is the potential of continuous auditing (Alles, Brennan, Kogan & Vasarhelyi, 2006). Continuous auditing is:

“to provide assurance simultaneously with, or a short period of time after, the occurrence of events underlying the subject matter by automatically identifying exceptions or misstatements as defined by some prespecified criteria in the embedded audit modules or a monitoring control layer” (Zhang, Yang & Appelbaum, 2015).

These beneficial applications of data analytics would make it reasonable to think that the financial audit field is leading or heavily focused on this area to seek competitive advantage, especially since the audit field is among the groups most affected by technological developments (Eimers, 2017). But according to various articles (Alles, 2006; Gray & Debreceny, 2013; Vasarhelyi, Kogan & Tuttle, 2015) this is not the case; instead, the financial audit field lags behind other industries. The AICPA (American Institute of CPAs) agrees with this finding:

1 Fraud risk refers to the ability of management or personnel to intentionally override controls to gain a personal benefit.


“Today, many audit processes are essentially unchanged from those performed decades ago, even though newer technology may be used to perform them more efficiently” (AICPA, 2014).

There are several factors which could explain why the financial audit field is lagging behind. One of the factors lies in the lack of standards set by the audit oversight boards (Vasarhelyi et al., 2015). These standards are, according to Zhang et al. (2012), one of the main stimulators of improvements in the audit process. Many executives also do not consider their firms to be ready to process Big Data (Big 4, 2015). Alternatively, the litigation risk for auditors could be a reason. Litigation risk can increase, because the traditional argument by auditors that a certain fraudulent item was not discovered in the sample, and that they are therefore not accountable, no longer holds when all the data is examined (Zhang, Pawlicki, McQuilken & Titera, 2012). On the other hand, litigation risk could decrease, because data sources used in stock price prediction could also predict the financial state of the firm. This in turn could lead to better evaluations of risk during the client acceptance phase (Cao et al., 2015). The IAASB (2016) states that although the auditor might be able to test 100% of a population, this does not imply that the auditor is able to provide something more than a reasonable assurance opinion or that the meaning of reasonable assurance changes. Still, the claim of Vasarhelyi et al. (2015) that the financial audit field remains behind is yet to be falsified by the field and its players themselves. Earley (2015) claims that the argument by Vasarhelyi et al. (2015) is caused by the fact that accounting firms are not providing information about their data analytics processes, because it is part of their competitive advantage. So, the current position of the financial audit field in data analytics remains to be discovered. This gap provides an opportunity for further research. The aim of this study is to gain an in-depth understanding of data analytics and how it impacts auditors within the financial audit. To develop a better understanding of the experience of data analytics in the accounting field, this study aims to answer the following research question: How are Big Data analytics operationalized and experienced by the auditors in the financial audit field?

The remainder of this thesis is structured as follows. Section 2 describes the literature written on data analytics and section 3 gives an overview of the formalization framework. Section 4 describes the research method. Section 5 provides the context of the case. Section 6 analyzes the responses of the interviewees, put in perspective by the formalization framework. Section 7 discusses the findings and section 8 concludes this thesis.


2. Literature

This section provides an overview of the existing literature about Big Data analytics. First, the definition and the processes underlying a data analytics tool are discussed. After that, the impact of Big Data analytics is evaluated by looking at the behavior of the auditor, the improvements brought by data analytics, the limitations, and finally the impact of standards; specifically, how the International Auditing and Assurance Standards Board (IAASB) is dealing with current developments in technology affecting the standards on the financial audit and the use of data analytics within the audit engagement.

2.1. Big data analytics

Extensive research within the financial audit field about Big Data analytics is lacking. First a definition is provided about what Big Data analytics is, to understand the subject discussed in this paper. By breaking down Big Data analytics into two components, Big Data and data analytics, a separate description for each component is given below.

Big Data has a major influence on current society and still its complete possibilities remain unknown (Mayer-Schönberger & Cukier, 2013). Its meaning depends on which industry or field you look at, according to Vasarhelyi, Kogan and Tuttle (2015). For instance, the capabilities of the information systems of small accounting firms are limited, so from their point of view data is "bigger" compared to the capacity of Big 4 firms, because processing such data takes them more time and is sometimes not even possible. A more general description, already mentioned in the introduction, is the one of McAfee and Brynjolfsson (2012). They characterize Big Data by the three Vs: volume, velocity and variety. So, Big Data is about size, speed of data generation, and spread across various sources of data.

Data analytics are in their basic form software programs that help to analyze data (IAASB, 2016). According to Earley (2015) they could support the financial audit by giving a better understanding of data through visualization. The data analytics tool that is subject of this thesis is an example of such a data analytics and visualization tool. Cao, Chychyla and Stewart (2015) provide an explicit description of Big Data analytics:

“Big Data analytics is the process of inspecting, cleaning, transforming and modeling Big Data to discover and communicate useful information and patterns, suggest conclusions, and support decision making.”

The process of inspecting, cleaning, transforming and modeling Big Data is referred to by various authors using alternative terminology. Gray and Debreceny (2014) describe the process in three steps, starting with the step that is least predictive and requires the least software sophistication and ending with the step with the highest requirements.

The first step is data extraction: the client data is extracted and reviewed in a smaller sample of the total data.


The second step is data analysis, which covers descriptive statistics ranging from basic functions to correlation analysis. Correlation is crucial in the development of Big Data analytics, as it forms the foundation for its predictive capabilities (Mayer-Schönberger & Cukier, 2013). The acceptance of correlation as the leading indicator of analysis and prediction requires a paradigm shift away from causality (Mayer-Schönberger & Cukier, 2013; Vasarhelyi, Kogan & Tuttle, 2015). But if this shift is made, the underlying foundation of auditing, namely causation, is undermined (Brown-Liburd, Issa & Lombardi, 2015). Causation is a key part of providing sufficient and appropriate audit evidence.
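To make the second step more concrete, a minimal sketch in Python of descriptive statistics and a correlation analysis on a journal entry extract might look as follows; the file name and the column names (amount, posting_date, account_type) are hypothetical and purely illustrative, and the sketch is not taken from any tool discussed in this thesis.

import pandas as pd

# Hypothetical journal entry extract; all names are illustrative only.
entries = pd.read_csv("journal_entries.csv", parse_dates=["posting_date"])

# Basic descriptive statistics on the posted amounts.
print(entries["amount"].describe())

# Aggregate amounts per month and account type, then correlate two measures,
# for example total revenue postings against total cost-of-sales postings.
monthly = (
    entries.assign(month=entries["posting_date"].dt.to_period("M"))
    .pivot_table(index="month", columns="account_type", values="amount", aggfunc="sum")
)
print(monthly["revenue"].corr(monthly["cost_of_sales"]))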

The third step, mining, is about discovering knowledge and supporting decision making and has overlap with step two (Gray and Debreceny, 2014; Vasarhelyi, Kogan & Tuttle, 2015). Mining consists of two parts: directed data mining and undirected data mining.

Directed data mining analyzes the data with a predefined objective. Gray and Debreceny (2014) describe three types of directed data mining. Classification uses a set of historical categories to analyze, for instance, clients and put them in a certain risk category. Estimation, compared to classification, uses continuous variables, which leads to a broader range of risk scores. Prediction is often used to find anomalies in the data, such as outliers and potential fraud indicators, to support further investigation.

Undirected data mining aims to find any correlation between variables in the data of the client (Gray & Debreceny, 2014). Three examples are provided to describe the forms of undirected data mining. Affinity grouping looks for relations between variables that were unknown before; after the grouping, the auditor needs to investigate unexpected outliers. Clustering is comparable to classification in putting variables into categories, although clustering has no categories defined beforehand; the categories are defined by the auditor after the clustering. Description and visualization create an incentive for an auditor to investigate the anomalies inside the data of the client. Though according to internal guidelines, auditors should always rely on their independent skills to assess a certain outlier or journal entry and not on the data analytics tool (Big 4, 2015).
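As a generic illustration of the clustering form of undirected data mining, the following sketch groups journal entries on two numeric features and surfaces the smallest clusters for follow-up. It assumes scikit-learn is available and reuses the same hypothetical column names as before; it is not a representation of the firm's tool.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

entries = pd.read_csv("journal_entries.csv", parse_dates=["posting_date"])

# Two illustrative numeric features: absolute amount and hour of posting.
features = pd.DataFrame({
    "amount": entries["amount"].abs(),
    "hour": entries["posting_date"].dt.hour,
})

# Standardize the features and assign each entry to one of five clusters.
scaled = StandardScaler().fit_transform(features)
entries["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scaled)

# Small clusters may contain unusual entries that warrant the auditor's attention.
print(entries["cluster"].value_counts().sort_values().head())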

This paragraph started with an overview of Big Data. After that, data analytics and the process underlying the tools were described. Finally, data mining was described in more detail and several examples of directed and undirected data mining were discussed.

2.2. Data analytics and auditor behavior

To make use of the potential of data analytics, there are several behavioral characteristics of the auditor that could moderate the benefits (Brown-Liburd, Issa & Lombardi, 2015). These weaknesses of the auditor arise from their educational background. Auditors are educated to collect, organize and analyze financial information. This could help auditors in structuring non-financial data, but this education is seen as traditional and may not be suitable to effectively and efficiently analyze Big Data. Furthermore, traditional audit methods have not always been effective in assessing fraud risks, because fraud risks were dealt with using standard checklists and standard planning (Brown-Liburd, Issa & Lombardi, 2015). Data analytics tools can resolve these shortcomings to a significant extent, but just like the traditional methods, data analytics have some shortcomings as well. Brown-Liburd, Issa and Lombardi (2015) identify four sources that affect the behavior of auditors: information overload, information relevance, pattern recognition, and ambiguity.

Information overload is defined as receiving too much information to comprehend (Brown-Liburd, Issa & Lombardi, 2015). Large amounts of accounting information could have a negative impact on financial and auditing judgements (Alles, Brennan, Kogan, & Vasarhelyi, 2006). Data analytics tools make it possible to extract and analyze Big Data, but the output may still be too much information to comprehend. This could result in a reduction of audit quality. To overcome this information overload, a decision model is suggested that separates various decisions among decision makers (Brown-Liburd, Issa & Lombardi, 2015).

Information relevance is an issue that arises along with the increasing amount of information and may result in decreased performance (Brown-Liburd, Issa & Lombardi, 2015). The issue is that it is difficult for the auditor to identify the relevant topics. This is called the dilution effect: due to irrelevant information, the decision maker is not able to make a high-quality judgment (Brown-Liburd, Issa & Lombardi, 2015). This can be overcome by making use of pattern recognition and relevant risk identification.

Pattern recognition is about the ability of the decision maker to search for patterns in a large population that could not be found in smaller samples (Alles, Brennan, Kogan, & Vasarhelyi, 2006; Brown-Liburd, Issa & Lombardi, 2015). An auditor assesses the risk of the client by recognizing patterns in the journals or other transactions that may suggest fraud or errors. This skill can be taught and trained, and technology can support auditors in pattern recognition.

Ambiguity is a component of Big Data which entails the unstructured nature of the data and the many formats in which the data comes (Brown-Liburd, Issa & Lombardi, 2015). This creates the challenge of building a tool that is compatible with all sources and can structure the data. Another challenge is choosing the suitable data for making a risk assessment.

2.3. Improvements by data analytics

As discussed in the introduction, data analytics has a great potential and impact on the financial audit (Eimers, 2017). Earley (2015) identifies four particular improvements that could be made using data analytics compared to a period when its level of use was less significant than today. First, auditors can test a greater number of transactions than they do now: instead of sampling a fraction of the population, data analytics provides the opportunity to include one hundred percent of the transactions made by a client. Second, audit quality can be increased as data analytics can provide better insights into the client's business practices. This can be done through new insights provided by the data analytics tool, in which visualization is a key component. Third, fraud will be easier to detect because the audit team can take advantage of the new data analytics tools and the technology that they already use. Fourth, auditors can support clients with new services and solve problems for them which are currently not within the capabilities of the auditor. External data which can be used to inform the audit team is a key contributor to this fourth improvement.

The first improvement is that the number of transactions tested can increase from a sample to one hundred percent of the population due to data analytics. This improves the audit by increasing the amount of appropriate audit evidence (Earley, 2015). Until now, auditors applied a risk-based model, with sampling of transactions to determine whether the balances are fairly stated. Data analytics will make it possible to test the entire population, which results in another shift in perspective. Currently the auditor focuses on errors in the sample, but when all data can be tested the focus will shift to anomalies in patterns of data about the population. An anomaly is a deviation from the expected pattern, for instance when the data does not match the expectations of the auditor based on his experience with the client: journal entries made by an employee when that specific employee is supposed to be on vacation on the journal entry date (Brown-Liburd et al., 2015).
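The vacation example can be expressed as a simple full-population check. The sketch below assumes hypothetical journal entry and leave-register extracts with illustrative column names, and flags every entry dated within a registered leave period of the employee who posted it.

import pandas as pd

# Hypothetical extracts; file and column names are illustrative only.
entries = pd.read_csv("journal_entries.csv", parse_dates=["posting_date"])
leave = pd.read_csv("leave_register.csv", parse_dates=["leave_start", "leave_end"])

# Attach the leave periods of the employee who posted each entry.
joined = entries.merge(leave, on="employee_id", how="left")

# Keep entries whose posting date falls inside a registered leave period.
on_leave = joined[
    (joined["posting_date"] >= joined["leave_start"])
    & (joined["posting_date"] <= joined["leave_end"])
]
print(on_leave[["entry_id", "employee_id", "posting_date"]])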

The second improvement entails the potential of data analytics to increase insight into the client's business practices. This depends on the ability of a data analytics tool to be self-learning and to build a history of the client's anomalies (Earley, 2015). If the tool can transfer history from year to year about actions undertaken by the engagement team regarding the anomalies, it can support the team in developing new expectations. The potential of non-financial data can be put into practice by data analytics. This could result in tools which provide predictive analytics to support auditors in their risk analysis of the client, focus on areas during planning, and help evaluate the going concern of the client (Earley, 2015).

The third improvement by data analytics is the potential to discover fraud. The AICPA (2014) argues that this can be achieved through data analytics because these software tools enable auditors to analyze large amounts of data at relatively low cost for the audit firms. These tools are not completely new; auditors have worked effectively with computer assisted audit techniques (CAATs) (Dowling & Leech, 2007). But because audit personnel mostly refused to accept and incorporate the tools, the use of CAATs has been limited (Earley, 2015). This reluctance is diminishing due to the competitive pressure on the audit field to join the data analytics space. The fourth improvement by data analytics is the opportunity to use non-financial data and data from external sources to support the audit (Earley, 2015). These sources can be used to support risk assessment, the audit of valuations, and going concern. Additionally, these sources can be used to develop models to predict certain events in the future. This could help clients in their decision-making process. Non-financial data is internally generated data, for instance personnel questionnaires, marketing data, feedback from customers, and customer data. External data is, according to Earley (2015), more extensively defined. Some examples of external data are economic trends, competitor data, and data retrieved from social media. Social media platforms introduce a new space to collect, store and distribute all kinds of data.

2.4. Limitations of data analytics

In contrast to the improvements, there are several limitations of data analytics that need to be overcome to successfully implement data analytics in the financial audit field (Earley, 2015). First, the level of training and expertise of auditors needs to be improved to be able to work with data analytics (Big 4, 2015; Earley, 2015). Second, data availability, relevance and integrity are key considerations when considering the use of data analytics. Third, the expectations of regulators and financial statement users are important: users might think that the assurance given by auditors will increase, but the regulator argues that this is not the case and the auditor is not able to give more than reasonable assurance (IAASB, 2016).

2.5. Standards for data analytics

The IAASB established the Data Analytics Working Group (DAWG) to inform the board on developments in technology and how to respond to them effectively in the public interest (IAASB, 2016). The DAWG has contacted various stakeholders across the world to exchange thoughts about developments of technology. Other activities of the DAWG include monitoring and gathering information on the various applications of data analytics and the relationship of those applications to the financial statement audit. For instance, the effect on risk assessments, testing approaches, analytical procedures and other audit evidence (IAASB, 2016).

According to the DAWG (IAASB, 2016), current standards do not prohibit, nor stimulate, the use of data analytics within the financial audit. This is a problem according to Zhang et al. (2012), because data standards are one of the main drivers of the evolution of the audit process. When standards are in place, they argue, significant improvements regarding quality, effectiveness and efficiency of audits can be achieved. Standards in this perspective are aimed at creating a uniform format for files, general ledger detail, trial balance detail, chart of accounts detail, source listing information, preparer listing information, and business unit listing information (Zhang et al., 2012). When a standard data set is exported from a client system, a data profiling report is provided to validate the data. A data profiling report gives the data analyst a range of tests which could be performed to make sure the data is complete and reliable. In turn, the auditor should review the tests performed by the data analyst or provider. The tests consist of confirming that the population is complete, that no data is missing or invalid, and analyzing non-balancing journal entries. When the export requires additional information about the data, a request is placed with the client to discuss outstanding questions (Zhang et al., 2012).
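A minimal sketch of the kind of profiling checks mentioned above could look as follows in Python; it assumes a hypothetical extract with debit and credit columns and an entry identifier, and is not a reproduction of any actual data profiling report.

import pandas as pd

# Hypothetical standardized journal entry extract; column names are illustrative.
entries = pd.read_csv("journal_entries.csv")

# Completeness: total debits and credits to reconcile against the trial balance movement.
print("Total debits:", entries["debit"].sum())
print("Total credits:", entries["credit"].sum())

# Missing or invalid values in key fields.
print(entries[["entry_id", "account", "posting_date"]].isna().sum())

# Non-balancing journal entries: debits and credits per entry should net to zero.
per_entry = entries.groupby("entry_id")[["debit", "credit"]].sum()
unbalanced = per_entry[(per_entry["debit"] - per_entry["credit"]).abs() > 0.005]
print(unbalanced)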

Another perspective on standards focuses more on applicable law. In the European Union, data cannot be transferred between different entities without thorough application of the data protection law (Accountant, 2015). This refers mostly to personal data and the protection of individuals. But data of companies which is audited in one business unit in a certain country may not be transferred to another business unit in another country without complying with the data protection law. This law is based on accountability, and privacy is a central issue. When firms fail to comply with the law, penalties can be substantial: they can amount to up to five percent of the annual revenue of a firm, and for each violation the penalty can be added to the previous one.

Standards applicable to the specific data analytics tool of this thesis are currently not available. This is in line with the paper provided by the DAWG and the argument by Zhang et al. that standards on data analytics are currently not in place. However, there is of course a standard applicable to the use of the tool. In this case setting, Dutch law regarding accountancy is applicable. More specifically, NVCOS 240 (Nadere voorschriften controle- en overige standaarden) deals with the auditor's responsibility regarding fraud in the context of an audit of financial statements (NBA, 2017). NVCOS 240 states that the auditor is responsible for obtaining reasonable assurance that the financial statements do not contain any material misstatement resulting from fraud or error, within the inherent limitations of an audit (NBA, 2017).


3. Theory

This section provides an overview of the framework of formalization by Adler and Borys (1996). First, a general description and the reasons for using the framework are given. Then the two alternative forms, coercive and enabling formalization, are discussed, followed by the elements repair, internal transparency, global transparency, and flexibility, which are defined and discussed in detail.

3.1. Framework of Formalization

Formalization can be used to understand how a structured system influences the behavior of an actor in a field (Adler & Borys, 1996). The framework provides a structure to better understand how a system can be either enabling or coercive in its form (Dowling & Leech, 2014). An example of a structured system is a data analytics tool, which is the subject of this research. The influence of this data analytics tool can be viewed by its users as an enabling or coercive one. This depends on the characteristics of the formalization and on the process of designing and implementing the system (Wouters & Wilderom, 2008). These characteristics are discussed in the remainder of this section.

The implementation of the system can work in an enabling way according to Adler and Borys (1996), but it depends on the employee voice, employee skills, process control, and flexibility in changing controls. They suggest that involvement of employees in the construction of procedures is likely to have a positive effect on both attitudinal and technical outcomes (Wouters & Wilderom, 2008). These procedures should have a focus on users and usability, early and ongoing user testing, and iterative design processes (Wouters & Wilderom, 2008). Alternatively, according to Dowling and Leech (2014), formalization is about the rules, policies and procedures of an organization. The framework of formalization is considered suitable for analyzing the responses of the interviewees, as it is suited to assessing the implementation of a structured system, like a data analytics tool, and to analyzing which type of formalization is perceived by its users.

Coercive formalization refers to a top-down control method and focuses on centralization and creating boundaries (Ahrens & Chapman, 2004). This form of bureaucracy only serves higher-management needs and controls employees' behavior (Wouters & Wilderom, 2008). When relating it to a data analytics tool, it is aimed at creating a foolproof system (Ahrens & Chapman, 2004). The underlying design of the tool is the basis for creating boundaries. Procedures are created within the system to limit the alternatives for the users. So, this type of formalization imposes its procedures on actors and thereby compromises their own initiative. The restriction of auditor behavior could create an operational risk, because auditors could respond in a negative way to the data analytics tool (Dowling & Leech, 2014). The operational risk lies in the fact that the auditors do not act in the manner expected of them. When they respond in a negative way, a couple of outcomes are possible. They could disengage from their tasks when they perceive them as routine jobs (Dowling & Leech, 2014). Alternatively, they could reject the system and work around it, reducing the effectiveness of the system (Bedard, Jackson, Ettredge & Johnstone, 2003).

Enabling formalization designs organizational rules that empower the knowledge and skills of employees to increase their effectiveness (Adler & Borys, 1996; Ahrens & Chapman, 2004; Dowling & Leech, 2014). Systems are developed in a way that supports users in doing their tasks (Ahrens & Chapman, 2004; Wouters & Wilderom, 2008). When the system is perceived as effective and supporting the tasks of the auditors, they may respond positively to the system (Adler & Borys, 1996). According to Dowling and Leech (2014), this could also have a downside: auditors may over-rely on the system, which could cause insufficient critical assessment of evidence and a failure to pursue alternative evidence. But as mentioned in section 2, according to the IAASB (2016) the use of data analytics in an audit of financial statements will not replace the need for the auditor to exercise appropriate professional judgment and professional skepticism.

3.1.1. Repair

Repair is an element of formalization which is based on the assumption that events and tasks are not uniform and specific responses are required (Ahrens & Chapman, 2004; Dowling & Leech, 2014). When relating repair to the data analytics tool it focuses on the ability of the tool to visualize and provide insights to auditors in assessing the clients’ journals. Repair focuses on disruptions and the possible reaction of auditors on disruptions in the system (Adler & Borys, 1996). Repair can be implemented as a separate function in the day to day operations or it can be incorporated within the function. When repair is a separate function it is referred to as coercive repair. If repair is incorporated within the function then it is referred to as enabling repair (Ahrens & Chapman, 2004). Dowling and Leech (2014) argue that the difference between enabling and coercive repair is how the tool assists actors in dealing with uncertainties, contingencies or breakdowns.

Coercive repair

This type of repair differentiates routine operational roles from nonroutine ones (Ahrens & Chapman, 2004). Actors using the system are limited to predefined tasks given by the tool (Dowling & Leech, 2014). When an actor runs into a problem which cannot be resolved within his predefined tasks, another party needs to be consulted to help solve it (Dowling & Leech, 2014; Wouters & Wilderom, 2008). For instance, in factories the control panel may be secured to prevent operators from interfering with the predefined routine tasks of the machine (Adler & Borys, 1996). If the operators want to change the process, their supervisor needs to be contacted to discuss this potential change. Another example of coercive repair is presented in the case of Dowling and Leech (2014). Auditors would work around an error identified by a diagnostic report, because the system did not provide any guidance as to how to resolve the error. They found out that putting a full stop in a box resulted in the error not presenting itself again.

Enabling repair

By contrast, the assumption of enabling repair is that operations cannot be fully predefined (Ahrens & Chapman, 2004). Enabling systems assist an actor in dealing with unexpected issues (Dowling & Leech, 2014). This kind of system provides users the option to fix and repair issues themselves rather than forcing them to stop their work when non-predefined events occur (Wouters & Wilderom, 2008). So, this approach integrates the repair processes within the routine operations. Users of the system are trusted and incentivized to speak up and discuss practical problems, thereby improving the usability of the system (Ahrens & Chapman, 2004). This argument is also given by Chapman and Kihn (2009):

“The information system might be designed such that user-driven changes to the format and make-up in terms of measurements of the reports are possible, facilitating what-if type analysis rather than simply facilitating the production of routine reports.”

3.1.2. Internal Transparency

Internal transparency is related to repair in that it focuses on the internal processes to provide an understanding of the nature of the local system being repaired (Ahrens & Chapman, 2004; Chapman & Kihn, 2009). A similar view on internal transparency is the one of Wouters and Wilderom (2008). They define it as users having a good understanding of the logic of a system's internal functioning and having information on its status. A case-based example is given by Dowling and Leech (2014): they define the audit engagement as the internal process, because the objective of the audit support system is to facilitate the production of audit engagements. When looking at my research topic, the local process is also the audit engagement, as the data analytics tool is a part of the fraud risk analysis, which is a part of the overall audit engagement tasks.

Coercive Internal Transparency

A coercive system will not give users a clear view of the position they act in (Dowling & Leech, 2014). A very coercive system is one that singles out each member of the engagement team, so that only superiors have a complete view of the engagement (Dowling & Leech, 2014). The individual auditors would only have access to the applications relevant to their own tasks. Furthermore, auditors would not receive feedback on their performance (Wouters & Wilderom, 2008).

Enabling Internal Transparency

Enabling internal transparency is an aspect of a system which helps an auditor understand their tasks with respect to the entire engagement (Dowling & Leech, 2014). Key components of processes can be identified and best practice routines codified (Ahrens & Chapman, 2004; Wouters & Wilderom, 2008). Another example is that budgeting procedures can be integrated within the operational planning cycle (Ahrens & Chapman, 2004).

3.1.3. Global Transparency

Global transparency refers to the understanding of where and how the local processes fit into the organization (Chapman & Kihn, 2009). Put alternatively, global transparency is concerned with the visibility of the overall context in which organizational members perform their duties (Ahrens & Chapman, 2004). Both descriptions are based on the theory of Adler and Borys (1996). They give examples of a coercive and an enabling form of global transparency, which are discussed below.

Coercive Global Transparency

Coercive global transparency is described as a prison with a wheel-like layout in which the warden is in the center tower and the cells are located at the boundaries of the wheel. The cells and tower are linked by hallways which isolate each prisoner from each other but provide the warden with full sight in each cell (Adler & Borys, 1996). Another example more related to the data analytics tool is also provided by Adler and Borys (1996). The tasks of the employees are structured in a way that they cannot move beyond their domains. Ahrens and Chapman (2004) sketch a situation wherein budgets are the most widely used tool to provide a financial overview of the firm. In some firms, the budgets are only accessible to senior management which makes it difficult for other employees to obtain an overview of the fit of the processes within the firm.

Enabling Global Transparency

By contrast, in an enabling form of global transparency, employees are given a wide spectrum of background information to assist them in interacting with the broader organization (Adler & Borys, 1996). Dowling and Leech (2014) describe a couple of controls in their case that facilitate global transparency. A passive control within the system, the methodology icon, provides guidance about the overall process. An active control is the so-called knowledge in context, which provides a step-by-step approach to help auditors comply with firm methodology. This step-by-step approach offers insight into the overall process.

3.1.4. Flexibility

Flexibility is the ability of users to decide between various options provided by the system (Wouters & Wilderom, 2008). Technical developments have influenced flexibility by enabling personalization of personal computer applications (Ahrens & Chapman, 2004). Dowling and Leech (2014) describe two alternative system designs which determine how actors can react to issues that arise: the flexibility is either constrained or unconstrained by the system. These terms can be translated into coercive and enabling flexibility.

Coercive Flexibility

Coercive flexibility establishes boundaries around how actors can respond to issues that arise (Dowling & Leech, 2014). A step-by-step approach needs to be followed by the actor to be able to work within the system. This assumes that the manual prescribes the tasks, the actor performs them, and only the supervisor can authorize deviation from the process (Adler & Borys, 1996).

Enabling Flexibility

Enabling flexibility puts no constraints on the actions of actors when issues arise (Dowling & Leech, 2014). It assumes that deviations are not only risks but at the same time learning opportunities (Adler & Borys, 1996). For example, Dowling and Leech (2014) identify several system characteristics that support flexibility. First, the timing and extent of prescribed substantive procedures are not explicit, which gives auditors more flexibility within the system. Second, auditors can overrule the system's advice by documenting their motives in so-called rationale boxes. Third, the scalability of the system is adjustable to the engagement team's needs. These three characteristics each support flexibility in their own way.


4. Research methodology

In this section, the research methodology is discussed in more detail, especially the qualitative research approach. Qualitative research is a form of multiple research methods that uses an interpretive, naturalistic approach to its subject matter (Gephart, 2004). It studies phenomena in the environments in which they naturally occur and uses the social actors' meanings to understand the phenomena (Denzin & Lincoln, 1994).

The objective of this research is to get an in-depth understanding about how Big Data analytics are operationalized and experienced by the auditors in the financial audit field. The phenomenon that is being researched is Big Data analytics. More specifically this thesis explores how the introduction of a new data analytics tool, as described in Appendix B, is operationalized and experienced by the auditors and other personnel who work with the tool. The social actors are being interviewed to capture their perspectives to get an in-depth understanding. The data will be collected by semi structured interviews with auditors, champions2, and data analysts who work within a Dutch Big Four accounting firm. The natural environment in this case is the financial audit field within a Dutch Big Four accounting firm.

First, the research setting is drawn; a more in-depth perspective can be found in section 5, Case Context. Second, the method of research is described. Third, the sample size and nature are described. Fourth, the data analysis approach is described.

4.1. Research Setting

As discussed in the introduction, little literature has been written on the data analysis developments of accounting firms (Alles, 2006). This limits the opportunity for students to get a better understanding of these developments, so to overcome this limitation I have decided to use this topic for my research. The research is conducted at a Dutch Big Four accounting firm. The motivation for this specific firm is based on the developments within it: the firm had recently received an award for a new data analysis tool. The award recognizes organizations that, with a new initiative or innovation in audit, have made a major change in improving audit quality, efficiency or added value to clients (Accountant, 2016). This created a unique opportunity to combine the gap in the literature with the developments at a firm. This development could be a puzzle piece which fits this gap.

An essential step in conducting this research was getting access to the information and documentation within this firm. Through an internship this step has been taken. By this means, internal documents and auditors were made available to become part of my research. Auditors who have been part of the development in one way or another were interviewed on a semi-structured basis to discover their perspectives.

4.2. Method

2 Champions are auditors in the firm's business units who were appointed to facilitate and support the engagement teams in their adoption of the data analytics tool.

As discussed at the beginning of this section, this study consists of a qualitative case study. Qualitative research is a form of multiple research methods that uses an interpretive, naturalistic approach to its subject matter (Gephart, 2004). A case study provides the ability to answer how and why questions in a deeper and more detailed manner (Yin, 2009). There are several sources of information that could be collected when performing qualitative research: in-depth semi-structured interviews, the direct observation of a phenomenon, and analyzing internal documents (Patton, 2005). For this research, semi-structured interviews were chosen as the primary source. Due to the structure of the interviews, various perspectives could be gathered. Additionally, internal documents, E-learnings, and a demo version of the tool were used to support the interviews.

An interview guide was constructed (see Appendix C for the interview guide). It includes a schedule of open-ended how and why questions. The purpose of this guide was to make sure that all the various topics were covered during all interviews. The main topics that were discussed are: the impact of the tool on efficiency, effectiveness and quality, the comparison with the prior tool, the process of the tool, and the visualization of the tool.

The interviews were conducted, transcribed and reviewed by myself. The interviews were recorded; afterwards they were transcribed and the notable details of the interviews were written down. Several interviews were conducted by phone, simply because the research period was right in the middle of busy season3, so for some of the auditors it was more convenient to conduct the interview by phone. This resulted in some challenges in building up a solid conversation in the interviews, because the replies to a lot of questions were very concise. This required more work to discover subjects of interest regarding data analytics with the interviewee.

The information gathered by the interviews was analyzed to form a structured base for the findings section. The information within the transcripts was coded and the data considered important was labeled. This is discussed in more detail in the Data Analysis section of this chapter. The transcript of the interview was e-mailed to the interviewee and approval for anonymous usage was asked. The e-mail was also used as a feedback opportunity between the interviewee and me, with the purpose of reducing errors and interpretation bias in the interviews. The interviewees were selected by means of their role in the organization and their involvement with the data analysis tool. The participants were contacted by e-mail or through previous interviewees.

4.3. Sample

To answer the research question, there are several steps that must be taken to demarcate the desired sample. In this case, the minimal requirement for an interviewee was that he or she had experience with the data analysis tool for journal entry testing. Another requirement was to contact people who had different roles within the process of the tool. So, the decision was made to contact the users within the engagement team, as they are involved in the entire process and are the end-users of the tool. The reason to contact the champions was that they oversee the way the users deal with the tool and how they accept or reject it. Champions also resolve problems for users, so they play a vital role in the process. Finally, the reason to interview the data analysts was that they were involved in the implementation of the tool from the start. They could oversee the process within the firm and place it in the broader picture of implementing the tool. Another criterion was that the interviewees were from various positions in the chain of command, to get a better view of the different perspectives.

3 Period from mid-January until April of significantly increased workload, as most deadlines for the final audits fall within this period.

The firm was a completely new environment for me, so I had to come up with a way to identify people who could be suitable for participation in this research. The first way of trying to contact people was during office hours and social events. Most of the personnel were very interested in the subject, as this tool was a new development for them. Furthermore, by looking up the internal documents about the tool, I came across a list of people who were called champions. Every champion was sent an e-mail for an interview, even though the initial contact was not always straightforward. A total of 12 people was interviewed; please refer to Appendix A for an overview. Four senior auditors from the financial audit, four senior auditors and one manager in their role as champions, and a junior auditor, a senior auditor, and a manager from the data analytics department were interviewed.

The four senior auditors from the financial audit, I1, I6, I11, and I12, play a significant role in the process. They experienced the shift from the former tool for journal entry testing to the new one. They see how this affects their work and what the consequences of the tool are.

Champions are responsible for promoting and supporting engagement teams in their adoption of the data analytics tool. They provide guidance and are the first point of contact. Interviewees 2, 4, 5, 7, and 8 have worked with the tool during the pilot phase and during year-end audits, so they can make a reasonable comparison between the startup phase of the implementation and the period when the tool is live for all the engagement teams.

Data analysts I3, I9, and I10 were involved in the implementation. They constructed a framework based on the globally initiated implementation model4. The global model turned out to be far too optimistic about the capabilities of the auditors regarding the new tool, so the data analysts thought of creating additional lines of support to improve the process. It is interesting to capture their perspective on the entire implementation, as they were involved from the moment the global decision was made to start implementing the tool.

During the interviews, multiple other sources were introduced to me. In the second half of the first interview, the application which is used for documenting the audit file was explained. This was useful to be able to place the tool for journal entry testing within the broader picture of the financial audit. In addition to the audit application, interviewee 3 made me aware of the demo version of the tool.

4 The global implementation model: all steps are covered by the auditor and no additional support is provided. The auditor is responsible for the data acquisition, for delivering the data to the Delivery Center, and for answering questions about the client from the Delivery Center.

Eventually interviewee 9 provided access to the demo version. This provided insight into the functionalities of the various tests. The internal documents on data analytics consisted of guidelines, manuals, presentations, and internal papers. These internal documents were found by searching within the internal database. Some of the documents were introduced by the interviewees during the interviews.

4.4. Data Analysis

All the interviews were transcribed manually by myself. This resulted in 12 separate documents with a summary of each interview at the end. The transcribed interviews were sent to the interviewees to validate the transcription, interpretation, and translation. Then the transcripts were read again and the relevant quotes were highlighted. The highlights were then coded into subthemes. The themes of the formalization framework of Adler and Borys (1996), namely Repair, Internal Transparency, Global Transparency, and Flexibility, were used as a basis. To analyze the 12 documents of interview data, three sub processes were performed: data reduction, data display, and conclusion drawing (O'Dwyer, 2004).

Data reduction is the first sub process and consists of selecting, focusing, simplifying, and transforming the data of the interviews. This was done by reading the interview notes, coding the data in line with the themes, and making summaries of each interview. During the analysis of the interviews, the internal documents regarding the process were very useful for gaining a better understanding of the tool, by putting the various in-depth quotes in the perspective of the broader process.

Data display is the second sub process that consists of grouping codes into themes to be able to draw conclusions in step three. This step consists of organizing and arranging the initial themes into more concise groups and themes to support the analysis and conclusion drawing.

Conclusion drawing is the third sub process: patterns, contradictions, relations, explanations, and complements are sought in order to structure results that can support the main research question of this study.


5. Case Context

The research is based on the implementation of the firm's new data analysis tool for journal entry testing5 to mitigate the fraud risk during the audit. For first-year audits and clients with an audit fee above 100k, the use of the tool is obligatory. The tool provides 15 predefined tests using complex algorithms and visualizations to provide more and better insight to the auditor and client. The improvements compared to the former tool are: (1) all transactions can be interrogated, tested and analyzed, (2) it is applicable to any information from any client system, (3) it provides better client understanding. For an insight into the process of the tool, refer to Appendix B. The tool is a technology that helps to identify journals for further testing. This testing is a key part of the response to the risk of fraud. The tool can process high volumes of journals, which cannot be done manually anymore. During the testing phase, the engagement team selects one of the tests they would like to run. Once the test is selected and run, the team can filter the results to align the journal population for testing with the fraud risks identified in the planning phase. The tests are performed based on one of the following criteria: all data, backdated items, Benford's law6, create and approve, duplicates and reversals, large entries, large items, post close entries, unexpected account combinations, unexpected users, unusual amounts, unusual days, unusual times, unusual words.
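As an illustration of one of the listed criteria, a minimal sketch of a Benford's law test (see note 6) on the leading digits of journal line amounts might look as follows; the file and column names are hypothetical, and the sketch is not a reproduction of the firm's tool.

import numpy as np
import pandas as pd

# Hypothetical journal line extract; names are illustrative only.
entries = pd.read_csv("journal_entries.csv")
amounts = entries["amount"].abs()
amounts = amounts[amounts > 0]

# Leading digit of each amount, derived mathematically to avoid string issues.
leading = (amounts / 10 ** np.floor(np.log10(amounts))).astype(int)

# Observed leading-digit frequencies versus the theoretical Benford distribution.
observed = leading.value_counts(normalize=True).sort_index()
expected = pd.Series({d: np.log10(1 + 1 / d) for d in range(1, 10)})
comparison = pd.DataFrame({"observed": observed, "expected": expected}).fillna(0.0)

# Large deviations point the auditor to digit ranges worth investigating further.
print(comparison.assign(difference=(comparison["observed"] - comparison["expected"]).abs()))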

The interviews were conducted at the audit department and the risk assurance department of a Dutch Big Four firm. At the audit department, interviews were conducted with personnel who worked with the tool and with champions. Champions are audit personnel who support the engagement team when questions about the tool arise. Interviewee 2, who is an auditor with affinity for data analytics and a champion, describes it as follows:

“My role as a champion is to support colleagues with the use of the data analysis tool. From the startup phase, when you must apply for the use of the tool on the management website. Then when you must retrieve or extract data from the client. Every engagement team also gets support from a data analyst with the extraction of the data. When you have your data, you send it to the delivery center. The delivery center is responsible for uploading the data into the tool. When this process is finished you can start testing.” (I2)

The tool has various stakeholders involved across different layers of organizations; this requires solid communication between the stakeholders. Especially the engagement team is required to be on top of their game to facilitate this. Interviewee 4, a champion of the tool, has a structured view on which stakeholders are involved in the process.

5 Testing of journal entries which are, based on the risk assessment, potentially fraudulent.

6 Benford's law is a mathematical theory which states that, in lists of numbers from many real-life sources of data, the leading digit is distributed in a specific, non-uniform way. This test compares the distribution of the journal lines against the theoretical one from Benford's law.


“There are several stakeholders involved in this process. The client, as in the management of the client who must agree to deliver their data. The IT personnel of the client, who should assist with extracting data out of their system, because their system is their expertise area. Our data analysts who must convert the data into readable data for the tool. Our delivery center who should run all the completeness and integrity checks and upload the data into the portal. And finally, the engagement team who has to use the tool and run tests.” (I4)

During the interviews, more background about the development of this tool and data analytics in general was discussed. At this firm, the shift to a greater emphasis on data analytics emerged roughly six years ago. The traditional methods are being developed into a more data-oriented way of working. Interviewee 10 is a data analyst who was hired to make this shift possible.

“When I was hired most of the data analysis was decentralized among the different departments. Then we started with centralizing data analysis with the pure focus on the financial administration of a client. We started developing tools which supported that focus. We started off with 4 pioneers who wanted to offer more than just analysis on the general ledger. And during the course of six years our department grew from 4 to 27 people. We do lots of analyses on various topics: purchase process, selling process, financial administration, within the energy sector, within the public sector, quality reporting, tailor made tools for the client, you name it. The bottom line is: we developed from a narrow focus of data analytics on the financial administration to a very broad application of data analysis in the audit but also for compliance jobs.” (I10)

One of the tools developed by the data pioneers is the data analysis tool for journal entry testing. The implementation of this tool was initiated globally. This is the first data analysis tool that all business units across the world use for journal entry testing.

“A uniform method of working increases our quality. That's of course fantastic to see. But on the other hand, a global implementation comes with some challenges. Every country should put together an implementation team and a center of excellence to answer questions from the different stakeholders. And what is then the essence of implementing a new tool? Mainly the software should be working reliably. Next to that, access for the users to the tool must be put in place. Regarding the financial aspect, you have to make sure that the tool is sustainable and that you don't spend more on it than it brings you. To make sure that the delivery center, placed in a location in Europe outside of the Netherlands, was up to speed we sent a liaison to support them.” (I10)

This position was filled by interviewee 3, who works as a data analyst. His first experience was with a data analysis tool for performance, which he says is comparable to the journal entry testing tool as far as the implementation is concerned.

“In January, I spent some time at the delivery center. I just walked around and whenever someone needed me they just called my name and I would come to solve their question. This created a better relationship between the engagement team and the data analysts. The engagement team could communicate in their native language with me and I could translate this into the right technical terms for the data analysts of the delivery center.” (I3, I10)

A pilot version of the tool was first tested three years ago in the Netherlands. One and a half years ago, after a second wave of pilots at several clients, it was globally decided to go live.

“I was having my doubts about going live. But it was globally decided that the pilot was successful enough to go live. It was somewhat pushed.” (I9)

Another champion described the process as being initiated at such short notice that additional challenges for the engagement teams were inevitable.

“In September last year, we received a manual for the tool, and we were told that the implementation was set for the same year. A lot of interim audits had already taken place, so the teams had no time to test the tool during the interim, which could result in struggles during the final.” (I4)


6. Case Analysis

In this chapter, a descriptive analysis based on the data from the interviews with auditors, champions and data analysts is performed. In the interviews, the various perspectives on the new data analytics tool were captured, and the information on its influence on the audit is grouped into various subthemes.

The analysis is based on the formalization framework of Adler and Borys (1996). This framework offers a structure for interpreting the interviews in terms of enabling or coercive formalization (Free, 2007). Formalization concerns how rules, policies and procedures are structured within a firm.

This chapter is structured as follows. Section 6.1 describes the repair element of the formalization framework. Section 6.2 describes internal transparency, and section 6.3 global transparency. Section 6.4 describes the flexibility element of the framework.

6.1. Repair

Repair is an element of formalization which assumes that events and tasks are not uniform and that specific responses are required (Ahrens & Chapman, 2004; Dowling & Leech, 2014). Data analysis tools are part of a development in which data is analyzed with the use of computer tools. As with every new tool, this one contains some flaws and errors. More important, however, is the users’ ability to overcome these flaws and errors.

“New technologies are most of the time still immature. There is always a chance that a bug is overlooked somewhere in the tool. You can compare it with game development. You can test the game over and over, but once the game is released to a bigger audience there is always someone who finds a bug. We have seen the same with Microsoft this weekend. I cannot imagine that Microsoft hasn’t reviewed its security system to protect it against malware. Still, computers got infected with ransomware. Such things just happen.” (I10)

The users of the tool have several ways to resolve questions. Every auditor is trained on data auditing fundamentals and introduced to the tool. The teams that work with the tool receive further training on the process described in Appendix B. The firm created teams of data specialists: data analysts who are specialized in supporting teams with technical challenges. A startup toolkit has been made to support users during their initial encounter with the tool. The startup toolkit consists of user guides for the extraction, transformation and load process, for using the tool itself, for using the tool’s website to request usage of the tool, and for talking points with clients. These repair mechanisms are constructed by the global implementation team, and additions are made by the regional implementation teams7.

“First, manuals are made available to all users of the tool. A hotline is set up so people can call when they need help.” (I10)

7 A regional implementation team has the responsibility to implement the tool for all business units in its region.


“This hotline was not very effective since people didn’t always get a response when they called. Apparently, the voice mail wasn’t connected to any number, so during lunch and before opening hours people didn’t get any response. A lot of people felt let down, which of course I can imagine.” (I5)

The global model turned out to be too complex for the engagement teams to execute. In this model, engagement teams were expected to fill in the management website, perform the data extraction, upload the data to the delivery center, and perform the tests, all without any direct support. These steps turned out to be too complex for many teams because they simply lacked expertise in data analytics. As described in the literature, this level of expertise needs to be improved (Big 4, 2015; Earley, 2015).

“I think we have to be better educated, a more analytical way of thinking. And we need to be informed on what the preconditions are for the use of data analytics. I think this subject is totally neglected in the auditor curriculum. There is a lot of attention in the news and discussion about it but there are hardly any courses in our curriculum.” (I12)

“The education as is doesn’t give any attention to data analytics. I don’t know if the IT audit curriculum gives any attention to it. I know that the VU offers digital auditing, but you cannot choose this as a minor. The auditor needs to become more future proof.” (I7)

This lack of experience and expertise in data analytics was acknowledged by the implementation team, which tried to overcome it by putting a data analyst on every team that used the tool.

“The first year we set up a center of excellence, this center was closely involved with the engagement teams. After the first year we chose to put a data specialist on every team. Data specialists are more familiar with questions of a more technical nature. So, they can help with the conversion of client data to data readable for the tool.” (I3)

Still, the process was not running optimally. The delivery center8 in particular received some remarks from the audit personnel. Some auditors felt superfluous in the process, because they felt they were acting as a go-between for two parties who would be better off in direct contact with each other.

“I think it wasn’t very useful that we were acting between the client and the delivery center, because the delivery center was asking questions of a really technical nature which I wasn’t familiar with.” (I6)

Another remark concerned living up to the promised deadlines. Initially, the expected processing time for each request handled by the delivery center was five days. This expectation was based on a standard client system, for instance SAP9, where the data extraction is a fairly straightforward assignment. But as the interviews revealed, there are many different client systems which all

8 The Delivery Center is a business unit in Poland which performs all the data integrity and completeness tests for all engagement teams. They are focused on data analysis only and their tasks are highly standardized. They check the data before it is loaded into the tool. Similarly, they check the data during and after the loading process to verify whether it is loaded into the tool correctly.


require a tailor-made query to extract the data. This caused a lot of additional work for the data analysts and for the delivery center, and it was not foreseen when the delivery center estimated the processing time. Consequently, a lot of questions arose between the delivery center and the engagement teams, because the delivery center required far more specific details on the client systems.
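Footnote 8 indicates that the delivery center tests the completeness and integrity of the extracted data before and after it is loaded into the tool. As a rough indication of what such checks typically involve, the sketch below reconciles record counts and debit/credit control totals of an extracted file against totals reported by the client’s general ledger. It is a minimal, assumed example in Python, with illustrative column names, and not the delivery center’s actual procedure.

# Minimal, assumed sketch of a completeness and integrity check; not the
# delivery center's actual procedure. Column names are illustrative.
import csv
from decimal import Decimal

def control_totals(path):
    """Record count and debit/credit totals of an extracted journal entry file."""
    count, debit, credit = 0, Decimal("0"), Decimal("0")
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            count += 1
            debit += Decimal(row["debit"])
            credit += Decimal(row["credit"])
    return count, debit, credit

def reconcile(extract_path, gl_count, gl_debit, gl_credit):
    """Compare the extract with the totals reported by the client's general ledger."""
    count, debit, credit = control_totals(extract_path)
    return {
        "record count matches": count == gl_count,
        "debit total matches": debit == Decimal(gl_debit),
        "credit total matches": credit == Decimal(gl_credit),
        "extract is balanced": debit == credit,
    }

Any mismatch would have to be resolved with the engagement team and the client before the tests can be run, which is exactly the kind of question traffic the interviewees describe.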

“The testing of data completeness and integrity is outsourced to the delivery center. When they have missing items or questions they come back to us (the engagement team). Consequently, you are e-mailing back and forth, which is not very efficient and can become quite annoying.” (I12)

This remark is put in perspective by interviewee 10, who was more involved with the delivery center and argued that it is important to get to know each other before working together, in order to understand where someone is coming from during discussions. This is essential if the firm wants to make a success of the implementation: people and teams must work together and communicate as clearly as possible.

“They (the delivery center) had to grow enormously. Imagine you start to work with a new tool which you barely know yourself and then from one day to the next you have to double your department 3 or 4 times. You can imagine how hard it is to safeguard the transfer of knowledge. Maybe you can see it like this: you get hired as a mason one day, and you have no experience at all. On your first day, your supervisor says: ‘We are building a house, you have to mason a wall. I come back at 11 to see how that wall is built.’ You can imagine how tough it is, if not nearly impossible.” (I10)

To prevent significant delays during the busy season, the implementation team decided to send a liaison to the delivery center to support it during January. This decision was made to reduce communication challenges. The geographical distance between the delivery center and the engagement teams was sometimes an issue, as described above; the engagement teams and the delivery center personnel had never met. Under the pressure of deadlines and promised processing times, communication could sometimes be on edge. The liaison acted as a mediator between the two parties, which turned out to be a wise move.

“We sent a liaison to the delivery center to build up a relationship, and to support them.” (I10)

“In January, I spent some time at the delivery center. I just walked around and whenever someone needed me they just called my name and I would come to solve their question. This created a better relationship between the engagement team and the data analysts. The engagement team could communicate in their native language with me and I could translate this into the right technical terms for the data analysts of the delivery center.” (I3, I10)

After the pilot, the implementation team introduced an extra line of support positioned between the center of excellence (the data analytics department) and the engagement teams: the champions. They are the first line of contact for any questions from engagement teams. The data analytics department, which had initiated the introduction of champions, described them as follows:

“We created a relationship schedule with the names of the various champions. Their task was to form the helpdesk for the tool.” (I10)
