
Data Quality Bottlenecks in ABN AMRO’s Incident Management Process

Student Name: Alrian Kamdhi

Student Number: 11030682

Date: 13-07-2018

Supervisor: Sander van Splunter

2nd Examiner: Frank Nack

ABN AMRO Stakeholder: Monique Gerrets

Bachelor Thesis Information Science

Faculty of Science


Abstract

ABN AMRO’s incident management process is an important part of the services ABN AMRO offers, because it affects the functionality of those services and, potentially, ABN AMRO’s image and its clients’ trust. However, this process experiences bottlenecks that may result in poor data quality and other internal problems, e.g., bouncing. This thesis identifies several problem areas of the process, which include some of the leading causes of poor data quality, like human error and data migration. Two interventions are proposed to mitigate the problem areas: implementing single-person double entry and restructuring the incident management process.


Table of contents

1. Introduction
1.1. Data in companies
1.2. ABN AMRO
1.3. Incident management process
1.3.1. Internal incidents
1.3.2. External incidents
1.4. Research subject
1.5. Content of this thesis
2. Literature review
2.1. Defining data quality
2.2. Judging data quality
2.3. Causes of poor data quality
2.4. Improving data quality
3. Method
3.1. Quantitative research
3.2. Qualitative research
4. Results
4.1. Quantitative results
4.2. Qualitative results
4.2.1. Helpdesk
4.2.2. IT Run
4.2.3. Assignment Group
5. Conclusion
6. Possible interventions
6.1. Single-person double entry method
6.2. Departmentalization
7. Discussion
7.1. Limitations
7.2. Further research
References
Appendix A. Survey questions
Appendix B. Interview structure
Appendix C. Quantitative results


1. Introduction

1.1. Data in companies

In the current “Information Age”, data acts as a kind of resource to be used for various purposes. However, in contrast to physical resources, data is not consumed and can be reused multiple times in many different ways (Tayi & Ballou, 1998). Upon realizing the potential of data, more and more companies are adopting a data-driven approach to their decision-making, sometimes even making data a core component of their business model. While not all companies share a consensus on how important data is within their own respective organizations, and the use of data as a resource still varies widely between companies, research has shown that companies characterizing themselves as more data-driven performed better on objective measures of financial and operational results (McAfee & Brynjolfsson, 2012; Kwon, Lee & Shin, 2014). These results are further supported by research regarding data analytics and how actively adopting data analytics can strengthen a company’s market position and even offer new business opportunities (Powell, 1995; Kwon, Lee & Shin, 2014). Using data effectively to gain a market advantage or to find new business opportunities depends heavily on what kind of data a company has and how it uses it. Therefore, one of the growing fields of research surrounding data is related to data quality. Data can be extremely useful for all kinds of business purposes or completely useless, depending on its quality. If not managed properly, poor data quality can have a negative impact on different business areas, causing customer dissatisfaction, compromised decision-making, and difficulties in making and executing business strategies (Redman, 1998). Thus, ensuring that a company uses and produces data of sufficient quality should be considered just as critical as the data itself.

1.2. ABN AMRO

ABN AMRO is the third-largest bank in the Netherlands, accounting for over 16% of total banking assets (TheBanks.EU, 2017). Thus, they are responsible for a large part of the online and offline transactions that happen in the Netherlands. Online banking has become more and more widely accepted because of its usefulness, ease of use, and security (Pikkarainen, et al., 2004; Qureshi, Zafar & Khan, 2008). This acceptance has also made online business processes and the online customer experience more important for ABN AMRO. To keep their customers satisfied, ABN AMRO should strive to keep improving their online services (ABN AMRO, n.d.). Whenever customers experience a problem with one of ABN AMRO’s online services, they are able to contact one of the bank’s customer service representatives to get the problem resolved. Simple problems can usually be solved directly by the customer service representative, resolving the issue during the customer’s call. However, more complex problems that cannot be solved on the spot have to be assigned and sent to a team within ABN AMRO to be resolved at a later time. The process of solving the more complex problems includes additional steps, and these steps are where a problem may run into other internal problems, like insufficient information or being sent to the wrong team. These internal problems may result in a solve time that the customer deems too long, resulting in customer dissatisfaction. One contributing factor to the internal problems may be the data quality of the problem description that is made by the Customer Service representative and sent to the teams. Improving the data quality of the problem descriptions could avoid the longer solve times caused by internal problems and reduce customer dissatisfaction.

1.3. Incident management process

The incident management process is a chain of people that communicate with each other through various systems. Various terms used in the incident management process will first be explained, followed by an explanation of the process itself.

• Internal client: An employee of ABN AMRO.
• External client: A customer of ABN AMRO.
• Helpdesk: A helpline for ABN AMRO employees (internal clients).
• Customer Service: A helpline for ABN AMRO customers (external clients).
• First Line: The base level of Customer Service. These representatives have basic knowledge of ABN AMRO and its many services.
• Second Line: A higher level of Customer Service. These representatives have more technical knowledge than the First Line.
• IT Run: The people responsible for keeping ABN AMRO’s IT services, like online banking, running.
• Assignment Group: A group of technical specialists responsible for the systems that underlie ABN AMRO’s IT services. Assignment Groups are third-party companies contracted by ABN AMRO.
• IBM Notes Database: Software used by Customer Service to log incidents and by IT Run to retrieve logged incidents.
• ServiceNow: Software used by the Helpdesk and IT Run to log incidents and by Assignment Groups to retrieve logged incidents.
• Knowledge base: A database containing knowledge articles. IBM Notes and ServiceNow both have separate knowledge bases.
• Knowledge article: A page in a knowledge base that contains information about frequently occurring incidents and possible fixes. For example, if incident X can be solved by doing Y or Z, a knowledge article will describe incident X and recommend doing Y and/or Z to solve the incident.
• Escalation: When an incident gets logged in IBM Notes Database or ServiceNow because it cannot be solved by Customer Service, the Helpdesk, or IT Run.

The incident management process at ABN AMRO is schematically displayed in figure 1.

Figure 1. Incident Management Process.

Incident management starts at either an external client or an internal client who experiences a problem with ABN AMRO’s online banking software. The distinction between incidents of internal and external clients is important, as they follow different “routes” in the incident management process.

1.3.1. Internal incidents

Internal clients follow a relatively straightforward process. An internal client calls the ABN AMRO Helpdesk to report an incident with online banking. A Helpdesk employee first attempts to solve the client’s incident using ServiceNow’s knowledge base. This way, simple incidents can quickly be solved on the spot. If the knowledge base contains no knowledge article for an internal client’s incident and the incident cannot be solved using other methods proposed by the Helpdesk, the incident has to be escalated using an incident form in ServiceNow. An incident form requires several information items, including the business process the incident is related to, a description of the client’s incident, and the team that is responsible for maintaining applications related to the incident. After the incident form is filled in, it is sent to the Assignment Group specified in the form. The Assignment Group uses the incident log in ServiceNow to find out what caused the incident and fix it. The Assignment Group notifies the Helpdesk when the cause of the incident has been fixed, and the Helpdesk notifies the client that the incident has been solved. Finally, the incident log in ServiceNow gets closed to declare the incident solved.
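To make the escalation step concrete, the sketch below models an escalated incident log as a simple data structure with the three information items named above. It is a minimal illustration; the field names and example values are assumptions for this sketch, not ServiceNow’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class IncidentForm:
    """Minimal model of an escalated incident log (illustrative field
    names, not ServiceNow's actual schema)."""
    business_process: str   # business process the incident is related to
    description: str        # description of the client's incident
    assignment_group: str   # team responsible for the affected applications

# Example escalation: the Helpdesk fills in the form and routes it.
form = IncidentForm(
    business_process="Online banking",
    description="Client cannot confirm transactions in the web portal.",
    assignment_group="Payments Infrastructure",  # hypothetical team name
)
print(f"Escalating to {form.assignment_group}: {form.description}")
```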

1.3.2. External incidents

The process is slightly different for external clients. It starts the same as the internal client’s process, but instead of the Helpdesk, the client is directed to Customer Service. Customer Service essentially fulfills the same role as the Helpdesk, but on a more superficial level, because the scope of incidents that Customer Service receives is broader. This is due to Customer Service getting calls about everything related to ABN AMRO’s online services, from helping clients recover a forgotten password to questions about failed money transactions. Customer Service also uses a wider range of communication methods, as external clients can call the Call Center or go online to Facebook or other social media websites (this online Customer Service is called Webcare) to contact ABN AMRO about issues. When a call comes in with an incident, First Line Customer Service initially tries to solve the incident using knowledge articles. If the knowledge articles are insufficient and the Second Line is also unable to solve the incident, the incident gets escalated by logging it in IBM Notes Database. IT Run takes the information given by Customer Service in the IBM Notes Database and analyzes the incident to figure out if they have a possible solution. If IT Run knows a solution that Customer Service has not yet tried, they will suggest it to Customer Service to try. IT Run uses their specialists’ expertise and ServiceNow’s knowledge articles to find this possible untried solution. If IT Run’s possible solutions were also not able to solve the incident, the incident gets escalated further via ServiceNow. The incident gets logged in ServiceNow and sent to an Assignment Group to figure the incident out. This process works the same as with the Helpdesk.

1.4. Research subject

Online banking is an online service ABN AMRO offers to its customers. It allows customers to make transactions and manage all of their banking affairs online. This thesis focuses on the incident management of this business process. When customers experience issues with online banking, the problem resolution follows the steps described earlier in section 1.3. That process illustrates clearly that the person experiencing a complex problem (the internal or external client) does not have direct contact with the people responsible for maintaining the crucial systems (the Assignment Groups); incident logs are the means through which those two essentially communicate. This also means that the quality of the data being sent from the affected client to an Assignment Group becomes extremely important in fixing the client’s problem. Thus, upholding the data quality of the incident logs to a high standard could result in fast and efficient problem resolution. This thesis explores the data quality of the incident logs made by the different groups in the incident management process and the limitations that cause the data quality to become insufficient, leading to internal problems. This results in the following research question: “What causes a decrease in data quality of online banking incident logs in ABN AMRO’s incident management process?” A possible hypothesis is that the parties involved in the incident management process have different views of what they consider to be good data quality. This inconsistency may lead to one party creating incident logs with data that they find useful and important, while the group that has to use that incident log finds the data useless and irrelevant, effectively making the data quality poor.

The reason for scoping this thesis to the online banking incident management process has to do with online banking’s significance as a service for ABN AMRO. Widely used applications of ABN AMRO, like the one used for online banking, receive a lot of attention when a problem affects a majority of their users. For example, in March 2018, ABN AMRO became the victim of a DDoS attack, which disabled ABN AMRO’s online banking, mobile banking, and website. In addition, ABN AMRO experienced an outage in which customers could not log in to their mobile banking accounts and transactions via iDeal were not possible (Nu.nl, 2018). Both problems persisted for several hours and resulted in extensive media exposure, hurting ABN AMRO’s credibility and image as a bank. Preventing such problems would prove to be a big benefit for ABN AMRO.

1.5. Content of this thesis

This thesis will discuss a number of topics regarding data quality and ABN AMRO’s incident management process. Section 2, the literature review, covers essential topics, such as what data quality exactly is and the leading causes of poor data quality. Section 3 describes the research method used in this thesis. Sections 4 and 5 discuss the results of the research and how those results fit into the scientific background given in section 2. Section 6 offers a few possible solutions to the problems found in sections 4 and 5. Finally, section 7 offers a discussion in which the limitations of this thesis and future research topics are mentioned.

2. Literature review

Data quality is a concept that has no concrete definition and it also does not have a single comprehensive method of measurement. Therefore, to effectively research ABN AMRO’s data quality, some topics must be discussed first. Those topics include: the definition of data quality, different approaches to judging data quality, the main causes of decreased data quality, and what to consider when executing data quality enhancement projects.

2.1. Defining data quality

The definition of the term “data quality” determines the possible dimensions to evaluate whether the data quality is insufficient and is important in finding effective interventions (Ballou & Tayi, 1999). Finding a fitting definition is not an easy task, however, as data quality is more of an abstract concept than a set of rules or requirements. The definition can be approached from two points of view. First, an academic viewpoint regards data quality as a measure of agreement between the data presented in an information system and the data in the real world (Orr, 1998). Second, a more practical viewpoint sees data quality as the extent to which data is able to fulfill its intended use, or “fitness for use” (International Organization for Standardization, 2015; Tayi & Ballou, 1998).

To illustrate the difference between the two viewpoints, consider the following example. You have to choose whether or not to implement a system, taking into consideration the system’s initial investment. You look at data from other companies that have already implemented the system. If the data only states that companies adopting the new system had an increase in revenue, and that data is true, the first definition of data quality would rate the quality of this data as high, because it only checks whether the data is in agreement with the real world. However, you are considering the initial investment and not the revenue the system might bring in. This means that the data is not useful for the decision you are making, and thus its data quality would be low according to the second definition.


Because this thesis is looking to get client incidents solved faster, the Assignment Group responsible for solving online banking incidents should have access to data that they deem useful for solving the problem. Thus, the second definition of data quality, data quality as the extent to which data is able to fulfill its intended use, is more fitting to use in this thesis.

2.2. Judging data quality

Wang and Strong (1996) created a conceptual framework that captures the data quality aspects important to data users, shown in figure 2. This hierarchical framework divides data quality into four main categories, each comprising certain data quality attributes. The four categories are:

• Intrinsic data quality: This refers to the concept that data has a degree of quality in and of itself.

• Contextual data quality: This highlights the context of the task for which the data is being used.

• Representational data quality: This relates to the format and meaning of data.

• Accessibility data quality: This refers to how accessible the data is in regard to aspects like security or downtime.
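For reference, the grouping below writes the framework down as a plain dictionary, mapping each category to the attributes used later in this thesis’ survey (see Appendix C); the grouping follows Wang and Strong’s framework.

```python
# Wang and Strong's (1996) four data quality categories, each with the
# attributes used in this thesis' survey (see Appendix C).
DQ_FRAMEWORK = {
    "Intrinsic DQ": ["Accuracy", "Objectivity", "Believability", "Reputation"],
    "Contextual DQ": ["Value-added", "Relevancy", "Timeliness",
                      "Completeness", "Appropriate amount of data"],
    "Representational DQ": ["Interpretability", "Ease of understanding",
                            "Representational consistency", "Concise"],
    "Accessibility DQ": ["Accessibility", "Access security"],
}

for category, attributes in DQ_FRAMEWORK.items():
    print(f"{category}: {', '.join(attributes)}")
```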


Wang and Strong also offer three approaches that can be used to study data quality: intuitive, theoretical, and empirical. An intuitive approach is taken when attributes for data quality are selected based on what the researcher deems relevant. The researcher has to be someone who has experience in the field of study and has an intuitive understanding of which attributes are important. A theoretical approach looks at the data manufacturing process and focuses on where in that process the data may become deficient (lacking in a specific data quality attribute). An empirical approach analyzes data collected from the actual users of the data and uses the results to determine the data quality attributes that are important to those users. Each of these approaches has its advantages and disadvantages, and in choosing which approach to adopt, the researcher should take the context of the research (resources, etc.) into consideration.

2.3. Causes of poor data quality

Data quality is the result of multiple factors, which can include people and the systems they work with. The causes of poor data quality are therefore usually not limited to one factor, but stem from an interaction between multiple factors. Research conducted by Lehmann, Roy, and Winter (2016) identified five of the most common causes of poor data quality in a business.

The foremost leading cause can be attributed to human error. People are prone to distractions, which can lead to incorrect input of data into any system that people work with. This issue will likely persist in the future.

The second leading cause of poor data quality has to do with data migration. When data has to be moved from one system to another, already existing bad data can become worse because of factors like incompatible formats or the incorrect handling of bad data. This can be especially problematic with large data sets where small pieces of bad data are hard to find and can worsen without being noticed.
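A minimal sketch of how a migration step can flag, rather than silently propagate, bad data. The date formats are hypothetical examples of incompatible formats between a source and a target system.

```python
from datetime import datetime

def migrate_date(value: str) -> str:
    """Convert a source-system date (DD-MM-YYYY) to the target system's
    format (YYYY-MM-DD); unparseable values are flagged for manual review
    instead of being copied over unchecked."""
    try:
        return datetime.strptime(value, "%d-%m-%Y").strftime("%Y-%m-%d")
    except ValueError:
        # Bad data in the source system: surface it rather than let it worsen.
        raise ValueError(f"Unmigratable date, needs manual review: {value!r}")

print(migrate_date("13-07-2018"))  # -> 2018-07-13
# migrate_date("07/13/2018") would raise instead of propagating bad data.
```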

The third leading cause is also related to human input: mixed entries by multiple users. This cause refers to how different people input information into a system in different ways. This ambiguity can result in some people incorrectly interpreting a specific field and inputting wrong data. Standardized input should therefore be common practice to minimize incorrect data entry, as illustrated in the sketch below.
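A common way to enforce such standardization is to validate free-form input against a controlled vocabulary before it is stored. The sketch below is hypothetical; a severity field with these particular values is not part of the systems discussed in this thesis.

```python
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def normalize_severity(raw: str) -> str:
    """Map free-form severity input onto one controlled vocabulary, so
    that different users cannot record the same fact in different ways."""
    value = raw.strip().lower()
    if value not in ALLOWED_SEVERITIES:
        raise ValueError(f"Unknown severity {raw!r}; expected one of "
                         f"{sorted(ALLOWED_SEVERITIES)}")
    return value

print(normalize_severity("  High "))  # -> "high"
```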


The fourth leading cause has to do with source systems and changes made to those source systems. When applications are built, the developers choose how that application handles data. However, applications are usually built using source systems, pre-existing systems that are used as the application’s infrastructure, from third-party companies. Changes to these source systems may inadvertently change the way the application handles data and may ultimately result in what can be considered as data with poor data quality.

The fifth leading cause is system errors. Modern applications run on and interact with multiple computers simultaneously. This makes the system as a whole increasingly complex and also makes the system and its data more prone to failure and corruption. Failure and corruption can be disastrous if there is no proper mitigation system in place to keep the data from disappearing or becoming useless.

2.4. Improving data quality

Executing a data quality enhancement project involves a lot of factors and determining which factors play an important role is difficult to do, especially in practice. Ballou and Tayi (1999) have identified six key factors in trying to improve data quality.

• Current quality: The current quality of the data based on predetermined attributes of data quality.

• Required quality: The required level of the data quality attributes after a data quality enhancement project has been executed. The project has technically failed if the required level has not been reached at the end of the project.

• Anticipated quality: This factor is a prediction of the levels of the various data quality attributes after an enhancement project. It is possible that the project has improved some attributes while diminishing others. This factor takes that “collateral damage” into consideration.

• Priority of organizational activity / Weight: Some business processes, or organizational activities, are more important for the company than others. This factor represents the priority level of the different business processes to determine which ones should be supported more actively.

• Cost of data quality enhancement / Cost: This factor consists of the total cost related to executing a data quality enhancement project. The cost can be described in terms of funds, personnel, time, etc. This factor is also related to the factor “Priority of organizational activity”, as a choice may have to be made between undertaking an expensive project that supports an important business process or undertaking a slightly less expensive project that supports multiple, moderately important business processes.

• Value added / Utility: This factor involves the change in value or utility of the data after an enhancement project has been undertaken. This factor can be positive, negative, or neutral, depending on how much the project has increased or decreased the value/utility of the data quality. Two important notes relate to this factor. First, data quality does not have to be improved past the required level, as this only increases cost but not value/utility. Second, a data quality enhancement project need not, and probably will not, remove all data quality deficiencies.

All of these key factors should be kept in consideration when proposing possible options for a data quality enhancement project.
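One way to weigh these factors against each other when comparing candidate projects is a simple scoring heuristic, sketched below. This is a hypothetical illustration of the weight/utility/cost trade-off, not Ballou and Tayi’s formal model.

```python
def project_score(weight: float, utility: float, cost: float) -> float:
    """Naive priority score for a data quality enhancement project:
    expected added utility, scaled by the weight (priority) of the
    business process it supports and discounted by its cost.
    (A hypothetical heuristic, not Ballou and Tayi's formal model.)"""
    return weight * utility / cost

# An expensive project supporting one important process versus a cheaper
# project supporting a moderately important one.
print(project_score(weight=0.9, utility=8.0, cost=10.0))  # -> 0.72
print(project_score(weight=0.6, utility=5.0, cost=4.0))   # -> 0.75
```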

3. Method

This thesis employs a mixed methods approach, meaning both quantitative and qualitative research methods are used. The quantitative part is used to test the hypothesis by checking whether or not the parties of the incident management process value attributes of data quality differently. The qualitative part is used to determine what might cause this difference and to find other factors of the incident management process that may have an effect on data quality.

3.1. Quantitative research

The quantitative part of this thesis consists of a survey about data quality attributes and aims to test the hypothesis in section 1.4. It uses an intuitive approach by aiming to determine the importance of each data quality attribute for each party involved in the incident management process. The role of the researcher in the intuitive approach is taken over by the parties of the incident management process. The null hypothesis states that the means for all attributes and categories do not differ significantly between parties: μn = μm. The alternative hypothesis states that the means for all attributes and categories do differ significantly between parties: μn ≠ μm. If a significant difference in important attributes of data is observed between parties, that would support the hypothesis that the parties consider good data quality to be different things.


The survey is sent via email to one person from the Helpdesk and to two persons each from Customer Service, IT Run, and the Assignment Groups. This means that seven initial emails are sent to the different parties. The email also asks the initial receiver to forward it to colleagues from the same party, because finding the names of every person in each party proved to be too time consuming.

The survey is essentially a shortened version of the two-stage survey used in Wang and Strong’s (1996) research. This is done because their research has proven to be of significant importance when it comes to data quality research, and using their survey safeguards the accuracy, validity, and reliability of this thesis’ survey. The 15 data quality attributes in Wang and Strong’s data quality framework are chosen for the survey because those attributes were deemed the most useful in performing one’s job by Wang and Strong’s research, and thus fit well with the intuitive approach of the survey.

The survey consists of 15 data quality attributes and respondents have to rate each on a 9-point Likert scale in which 1 is the most important or useful attribute in performing one’s job (making and using an incident report) and 9 is the least important or least useful attribute. The scores of the different parties are then compared with each other to see if a significant difference exists between the scores of attributes and categories of the different parties. A significant difference would mean that the mixed user input cause of poor data quality is present in the incident management process. The survey questions can be found in Appendix A.
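Had enough responses come in, the comparison between two parties’ ratings of an attribute could have been run along the following lines. The scores are made up for illustration, and a Mann-Whitney U test is shown because Likert ratings are ordinal; the thesis does not prescribe a particular test.

```python
from scipy.stats import mannwhitneyu

# Hypothetical ratings of one attribute (e.g., "Accuracy") by two parties,
# on the 9-point scale where 1 is most important.
it_run_scores = [3, 2, 4, 3, 3]
assignment_group_scores = [1, 2, 1, 2, 3]

# H0: the two parties rate the attribute's importance the same.
stat, p_value = mannwhitneyu(it_run_scores, assignment_group_scores,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference: the parties value this attribute differently.")
```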

3.2. Qualitative research

The qualitative part of this research consists of interviews with one person from each party of the incident management process. The interviews adopt a theoretical approach, as they try to identify a possible cause for the result of the survey and to look for other problem areas for data quality in the incident management process.

Unlike the quantitative part of this research, the way the interviews are done and the interview’s questions could not be adopted from a previous study or format, as the subject matter of this thesis is too specific and thus does not allow for pre-existing questions to be used effectively.

The interviews are semi-structured in nature, because the goal of the interviews is to find a possible cause for the difference in attribute importance between parties and to explore different aspects of incident management that may affect data quality. Considering this goal, and seeing as incident management is a complex process that involves many parties, the interviews need some flexibility to allow for elaboration on answers that raise further questions. The main interview questions revolve around how each party does their job, the systems and applications they use, and how the parties relate to each other. The general structure of the interviews can be found in Appendix B. This template is altered per party to fit their respective work environments (e.g., different systems used).

The interviews have been transcribed by the author for research purposes, but due to their confidential nature, these have not been included in this thesis.

4. Results

The research yielded several findings, which will be discussed in the next sections. The survey did not attain a statistically useful sample size, but the interviews did expose some problem areas within the incident management process.

4.1. Quantitative results

Of the initial seven people that were emailed about the survey, only three responded: one from IT Run and two from the Assignment Group. Not a lot can be derived from the results of the survey, as the response rate was very low: n = 1 for IT Run and n = 2 for the Assignment Group. A short analysis of the results can be found in Appendix C. Because of the small sample size, statistical tests would offer no added benefit, as the results of those tests would suffer from problems commonly associated with small sample sizes, like low statistical power, inflated effect size estimation, and false generalizations (Button, et al., 2013; Colquhoun, 2014). Thus, the results of the quantitative research are not used further in this thesis. Because the hypothesis cannot be tested, the rest of this thesis will assume the null hypothesis to be true: the means for all attributes and categories do not differ significantly between parties.

4.2. Qualitative results

Only the Helpdesk, IT Run, and the Assignment Group were able to supply this thesis with a specialist from their respective teams to give insights into their work. Customer Service was not able to provide an interview with one of their specialists. The interviews were held in random order: someone from IT Run was interviewed first, someone from the Assignment Group second, and someone from the Helpdesk last.

(16)

The Helpdesk, IT Run and the Assignment Group all experience different kinds of problems related to data quality and the incident management process. The parties and their data quality problems are summarized in figure 3.

Party and data quality problems:
• Helpdesk: Human errors.
• IT Run: Transferring incident logs from IBM Notes Database to ServiceNow; logging an incident twice, but by different people; the need to “translate” from functional to technical.
• Assignment Group: Poor data quality as a result of problems of previous parties.

Figure 3. Parties and their data quality problems.

The problems the parties experience all result in something called bouncing, which is what happens when an incident is continually sent back and forth between different parties and different Assignment Groups. Bouncing is also noted by all parties to be a big problem resulting from poor data quality. The three parties’ experiences with incident logs and their data quality are discussed in the next sections.

4.2.1. Helpdesk

One of the Helpdesk’s biggest problems is human error, which causes data quality issues that impact the incident management process. Human errors range from rather small things, like typos, to bigger issues, like inputting wrong data in specific fields. Especially the latter group of human errors can result in bouncing, and thus in longer solve times for incidents. Avoiding those human errors may significantly improve the incident management process.

An example of a human error causing bouncing could look like this: a Helpdesk employee accidentally clicks the wrong Assignment Group to send an incident to in a ServiceNow incident form. The wrongly chosen Assignment Group receives an incident that they are not responsible for and is left with two options: sending the incident “back” to the Helpdesk so they can pick the correct Assignment Group, or sending it to the correct Assignment Group themselves (assuming they know which one that is). Bouncing has already occurred at this point in getting the incident to the correct Assignment Group. This could have been avoided if the correct Assignment Group had been chosen in the first place.


4.2.2. IT Run

Logging and solving client incidents are not IT Run’s main objectives, but they still experience problems related to incident management that hinder their main objective by costing time. IT Run mainly experiences what they call translation errors. These translation errors refer to the fact that IT Run has to “translate” the incident logs from the IBM Notes Database into incident logs in ServiceNow that are useful to the Assignment Groups. This is not always easy to do because of the difference in the two parties’ overall work situations. Customer Service is more human-centric, as they try to help their clients to the best of their abilities. Therefore, Customer Service describes an incident from a more functional point of view, e.g., which parts of an application work and which do not, as their main goal is to get whatever is not working, working again for the client. The Assignment Groups consist mostly of software engineers that are largely disconnected from the applications that their systems are supporting. They focus on the supporting infrastructure of the systems that ABN AMRO uses and do not know how specific ABN AMRO systems work functionally, just on a technical level (e.g., they know what certain server specifications are supposed to be, but not how the application they support is supposed to work). Thus, Assignment Groups work in a more technical environment, which means that they benefit most from an incident log that includes technical details, like affected servers and error codes.

One factor that may exacerbate this problem is the fact that Customer Service logs their incidents in Dutch, while ServiceNow as a system and the Assignment Groups use English. This language difference is especially felt when highly technical terms have to be used in the incident logs, as the person translating may not know how to correctly translate those terms. These translations from functional to technical (and Dutch to English) are considered one of the main contributors to diminished data quality at IT Run, due to possible errors made during translation. These errors may also cause bouncing, because the incident log has to be sent back in order to be correctly re-translated.

4.2.3. Assignment Group

Assignment Groups’ problems with data quality mainly stem from errors made by parties earlier in the incident management process, and their problems confirm the issues that the other parties talked about, like insufficient information to work with or receiving an incident that they are not responsible for, both of which cause bouncing. As Assignment Groups are ultimately the end users of the incident log, it is logical that they do not contribute to the diminishing of data quality in the incident management process.

5. Conclusion

This thesis tries to answer the question: What causes a decrease in data quality of online banking incident logs in ABN AMRO’s incident management process? The answer can be segmented into the problems of the different parties of the incident management process. The results from the interviews expose a few problem areas within the incident management process, which are also previously known to be some of the leading causes of poor data quality. Figure 4 shows the different parties, their problem areas, and which leading cause of poor data quality each problem area is related to.

Leading cause of poor data quality and the related problem area:
• Human error: Human errors (Helpdesk).
• Data migration: Transferring incident logs from IBM Notes Database to ServiceNow (IT Run).
• Mixed entries by multiple users: Logging an incident twice, but by different people (IT Run).
• Source systems: The need to “translate” from functional to technical (IT Run).

Figure 4. Parties and their leading causes of poor data quality.

The number one cause of poor data quality, as mentioned in section 2.3., is also one of the Helpdesk’s biggest problems: human error. Incident logs occasionally contain mistakes, like incorrect data or missing figures. Considering human error is the leading cause, it is not surprising that it occurs in the incident management process, where data has to be typed in manually at several points in the process.

Data migration usually refers to large-scale migrations from one system to another, but the associated problems can still occur even though IT Run migrates data on a smaller scale, from IBM Notes Database to ServiceNow. One of the problems with data migration is the passing on and worsening of data that was already incorrect in the previous system. IT Run can still experience this problem, especially if it involves elements like complicated error codes. It may also happen because of human errors made by Customer Service. Even if IT Run manages to catch the incorrect data during migration, correcting the data is an unnecessary extra step that can take a considerable amount of time.

Mixed entries by multiple users are also a big part of translation errors. IT Run essentially has to input the same incident in ServiceNow that Customer Service already logged in IBM Notes Database. One of the differences between the IBM Notes Database log and the ServiceNow log, besides language and technicality, is the person inputting the data. The actual data can change drastically depending on what each person thinks is important to include in the incident log. IT Run could exclude certain details that Customer Service might find essential to solving the incident, just because IT Run considers those details unnecessary.

Source systems also contribute to IT Run’s problems. ABN AMRO’s usage of source systems from third-party companies implies their applications have dependencies on external source systems. If something happens to one of the source systems without the owner’s knowledge, the client is going to experience the consequences through failed online services. Getting the source system running again means that an incident log has to reach the Assignment Group through IT Run. Using source systems essentially creates the necessity for IT Run to “translate” incident logs.

In conclusion, ABN AMRO’s incident management process experiences many of the leading causes of poor data quality, including human error, data migration, mixed entries by multiple users, and source systems. These leading causes are the reasons why data quality decreases in the incident management process, and mitigating them could improve the overall data quality of the incident management process.

6. Possible interventions

This section will discuss some interventions that may mitigate the problems that the incident management process encounters. This thesis will also try to keep Ballou and Tayi’s (1999) six key factors, as described in section 2.4., in consideration when proposing possible measures. However, the three quality factors (current, required, and anticipated quality) cannot be taken into consideration for the proposed interventions, because this thesis has not objectively measured the quality of the incident log data. The interventions are summarized in figures 5 and 6. Figure 5 shows the interventions and the problems they try to solve. Figure 6 shows the interventions and how they affect some key factors.

Intervention and the problem(s) it targets:
• Single-person double entry method: Human error.
• Departmentalization: Transferring incident logs from IBM Notes Database to ServiceNow; logging an incident twice, but by different people.

Figure 5. Interventions and the problem(s) they try to solve.

Intervention, weight of business process, cost, and value added:
• Single-person double entry method: High (important business process); low to medium cost; fewer human errors and higher data quality.
• Departmentalization: High (important business process); high cost; undetermined value added.

Figure 6. Interventions and their effect on key factors.

6.1. Single-person double entry method

The Helpdesk may benefit from a data entry method that emphasizes checking the data for errors before actually logging an incident in ServiceNow. One of the most effective data entry methods for this is the two-person double entry method (Barchard & Pace, 2011; Cobb & Barchard, 2013). This method consists of one person inputting data and a second person inputting the same data, after which the two entries are matched to check for errors. Research has shown that this method is able to detect 88.3% of human errors. A variant of this method, called single-person double entry, has one person filling in the same data twice. This method has a slightly lower rate of detected human errors, 69%.
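A minimal sketch of the checking step that both double entry variants rely on: the same record is entered twice, and every field where the two entries disagree is flagged before the incident is logged. Field names and values are illustrative.

```python
def double_entry_check(first: dict, second: dict) -> list:
    """Compare two entries of the same incident record and return the
    fields that disagree; an empty list means the entry passes the check."""
    mismatches = []
    for field in first.keys() | second.keys():
        if first.get(field) != second.get(field):
            mismatches.append(field)
    return mismatches

entry_1 = {"client_id": "NL123456", "assignment_group": "Payments"}
entry_2 = {"client_id": "NL123465", "assignment_group": "Payments"}  # typo

errors = double_entry_check(entry_1, entry_2)
print(errors or "Entries match, safe to submit.")  # -> ['client_id']
```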

The weight of the business process that this intervention affects can be considered reasonably high and growing, mainly because of the growing importance of online banking, customer satisfaction, and the risk of negative exposure, as mentioned in section 1.2. The cost depends on which variant is put in place. Two-person double entry has a higher rate of detecting human errors, but it also takes considerably more time to log an incident compared to single-person double entry, around 37% more time (Cobb & Barchard, 2013), and it requires an extra person. The utility of the data is one factor that is generally positive, as this intervention can only improve the data quality of incident logs, making them more valuable and useful. Single-person double entry therefore seems like a relatively simple and cost-effective method to decrease human errors at the Helpdesk. Decreasing these human errors will also partly solve the problems experienced by the Assignment Group, as described in section 4.2.3.

6.2. Departmentalization

IT Run’s problems are more numerous and more complicated than the Helpdesk’s, which makes finding an effective intervention considerably more difficult. One possible intervention derived from a management principle could prove effective in mitigating IT Run’s problems. Functional structure departmentalization is an organizational structure concept in which an organization groups jobs based on similarity of function, such as marketing jobs, IT jobs, etc. (Organizational Structure and Change, 2010). Applying the concept to the incident management process suggests that handling client incidents should be done completely by Customer Service, since their function is to help the client as best they can. The responsibility of logging incidents into ServiceNow would become that of Second Line Customer Service instead of IT Run. This intervention would remove one party (IT Run), letting them focus on their main objective, and one application (IBM Notes Database) from the incident management process. Clients with incidents that need to be escalated can easily be redirected, while still on the phone, from the First Line to the Second Line, which means that the incident does not have to be logged in IBM Notes Database first and then into ServiceNow by two different people, while also being translated in between. This may eliminate the problems arising from data migration and mixed entries by multiple users. Also, because IT Run is responsible for keeping IT services running, they should still oversee the incident management process, but not be directly involved in it. Implementing this intervention means that the Second Line has to be trained to a higher technical level, akin to that of IT Run, which may prove to be very costly and time consuming.

Weight can be considered the same as for the previous intervention: already reasonably high and growing. This intervention is highly likely to be high in cost, as it includes restructuring part of the incident management process and providing the Second Line with technical training. The utility also is not guaranteed to improve, as there is no empirical precedent for such an intervention. Thus, this intervention seems like a risk that a business would not be quick to take.

7. Discussion

This section will discuss this thesis’ various limitations and suggest topics related to this thesis that can be researched further.

7.1. Limitations

Exploring a business process in a multinational, large-scale, distributed organization like ABN AMRO, combined with a limited amount of time to execute the research, proved to be very challenging and resulted in two main limitations. The first main limitation was that the motivation of the targeted experts to join the quantitative research was lower than expected. This resulted in a critically low number of respondents for the survey, which does not allow for statistical tests and therefore does not allow the hypothesis on whether the views on data quality differ between the parties involved in the incident management process to be either confirmed or rejected. This low response rate may be due to a few factors, like survey fatigue and the survey itself. It became clear that the employees of the parties of the incident management process very often receive meaningless surveys about all kinds of topics, making them disregard this thesis’ survey as one of those. Notifying the supervisors of each party also did not result in more respondents. The survey questions may also be part of the reason why the response rate was low. The survey used a 9-point Likert scale, the same as Wang and Strong used in their 1996 study. This large point scale may have deterred possible respondents from filling in the survey, as a 5-point Likert scale seems to be more effective for a higher response rate (Revilla & Krosnick, 2014).

The second main limitation was that working and communicating with targeted experts distributed throughout the organization proved to be challenging. Getting in contact with people from every party of the incident management process took more time than expected, and maintaining contact for interviews and surveys also proved to be difficult. Asking employees from each party also implied the challenge that, for each organizational unit, new organizational leverage needed to be created to justify the time spent on helping with this thesis. Some were interested and were ultimately able to help by doing interviews and filling in surveys, but that group of people was very limited. This inconsistency also meant that interviews needed to be held whenever possible, which made it impossible to follow a logical schedule of interviews, like Customer Service first, then IT Run, and lastly the Assignment Group. Some questions, like asking IT Run whether human error was also a large problem for them, became impossible to ask, because the human error problem surfaced during the interview with the Helpdesk, which happened after IT Run’s interview, and time constraints made a follow-up interview impossible.

7.2. Further research

This thesis opens up a few areas for further research. First of all, the different leading causes can all be researched individually within ABN AMRO, as those causes apply to many of ABN AMRO’s business processes, like human/system errors and the use of source systems. Another interesting area to research is the possible interventions. The two discussed in this thesis are examples of what can be done to improve data quality. However, their actual effectiveness is still to be determined. Testing different data entry methods seems like it could lead to effectively decreasing human errors. It would also be interesting to see how the incident management process could be restructured to find the most effective process configuration possible.


References

ABN AMRO. (n.d.). Integrated Annual Review 2017. Retrieved from https://www.abnamro.com/en/images/Documents/010_About_ABN_AMRO/Annual_Report/2017/ABN_AMRO_Integrated_Annual_Review_2017.pdf

Ballou, D. P., & Tayi, G. K. (1999). Enhancing Data Quality in Data Warehouse Environments. Communications of the ACM, 42, 73-78.

Barchard, K. A., & Pace, L. A. (2011). Preventing human error: The impact of data entry methods on data accuracy and statistical results. Computers in Human Behavior, 27, 1834-1839.

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365-376.

Cobb, S., & Barchard, K. A. (2013). Which Data Checking Method is More Accurate? Retrieved from https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=1053&context=mcnair_posters

Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 1, 1-16.

International Organization for Standardization. (2015). ISO 9000:2015: Quality management systems – Fundamentals and vocabulary. Retrieved from http://mahdi.hashemitabar.com/cms/images/Download/ISO/iso-9000-2015-english.pdf

Kwon, O., Lee, N., & Shin, B. (2014). Data quality management, data usage experience and acquisition intention of big data analytics. International Journal of Information Management, 34, 387-394.


Lehmann, C., Roy, K., & Winter, B. (2016). The State of Enterprise Data Quality: 2016; Perception, Reality, and the Future of DQM. Retrieved from https://siliconangle.com/files/2016/01/Blazent_State_of_Data_Quality_Management_2016.pdf

McAfee, A., & Brynjolfsson, E. (2012). Big Data: The Management Revolution. Harvard Business Review, 90, 60-68.

Nu.nl. (2018). ABN Amro kampte opnieuw met storing [ABN AMRO again suffered an outage]. Retrieved from https://www.nu.nl/internet/5170535/abn-amro-kampte-opnieuw-met-storing.html

Organizational Structure and Change. (2010). Principles of Management. Minneapolis, MN: University of Minnesota Library Publishing.

Orr, K. (1998). Data Quality and Systems Theory. Communications of the ACM, 41, 66-71.

Pikkarainen, T., Pikkarainen, K., Karjaluoto, H., & Pahnila, S. (2004). Consumer acceptance of online banking: an extension of the technology acceptance model. Internet Research, 14, 224-235.

Powell, T. C. (1995). Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal, 16, 15-37.

Qureshi, T., Zafar, M., & Khan, M. (2008). Consumer acceptance of Online Banking in Developing Economies. Journal of Online Banking and Commerce, 13.

Redman, T. C. (1998). The Impact of Poor Data Quality on the Typical Enterprise. Communications of the ACM, 41, 79-82.

Revilla, M. A., & Krosnick, J. A. (2014). Choosing the Number of Categories in Agree-Disagree Scales. Sociological Methods & Research, 43, 73-97.


Tayi, G. K., & Ballou, D. P. (1998). Examining Data Quality. Communications of the ACM, 41, 55-57.

TheBanks.EU. (2017). Major Banks in the Netherlands. Retrieved from https://thebanks.eu/articles/major-banks-in-the-Netherlands

Wang, R. Y., & Strong, D. M. (1996). Beyond Accuracy: What Data Quality Means to Data Consumers. Journal of Management Information Systems, 12, 5-33.


Appendix A. Survey questions


Appendix B. Interview structure

• What is your job within the incident management process and how do you perform this job?

o Ask further about different aspects of work environment, e.g., job objectives and limitations.

o Find out relations between party of interviewee and other parties of incident management.

• What systems/applications do you use and how do you use them?

o Ask further about different aspects of those systems/applications, e.g., experience and data used.

• Do you experience any issues related to data quality?

o Ask further about details of these data quality issues, e.g., who and what they affect.

o Find out where these issues come from and if other parties are involved with these issues.

• How do you overcome data quality issues?


Appendix C. Quantitative results

The following results are the mean scores of every attribute, rated on a scale of 1 to 9, where 1 is most important and 9 is least important for performing a task. The mean scores are then grouped according to the attributes’ data quality category and the mean for every category is calculated. This goes up one level in Wang and Strong’s DQ framework and indicates which category of data quality would be most important for each group in the incident management process.
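The aggregation step can be reproduced with a short script: average the attribute mean scores within each category, assuming the attribute-to-category grouping of Wang and Strong’s framework shown in section 2.2. The IT Run scores below are taken from the table that follows.

```python
from statistics import mean

# Attribute-to-category grouping from Wang and Strong's framework.
CATEGORIES = {
    "Intrinsic DQ": ["Accuracy", "Objectivity", "Believability", "Reputation"],
    "Contextual DQ": ["Value-added", "Relevancy", "Timeliness",
                      "Completeness", "Appropriate amount of data"],
    "Representational DQ": ["Interpretability", "Ease of understanding",
                            "Representational consistency", "Concise"],
    "Accessibility DQ": ["Accessibility", "Access security"],
}

def category_means(attribute_scores: dict) -> dict:
    """Average the attribute mean scores within each DQ category."""
    return {category: round(mean(attribute_scores[a] for a in attributes), 2)
            for category, attributes in CATEGORIES.items()}

# IT Run's attribute mean scores, from the table below.
it_run = {"Access security": 3, "Accessibility": 3, "Accuracy": 3,
          "Appropriate amount of data": 3, "Believability": 3,
          "Completeness": 3, "Concise": 5, "Ease of understanding": 3,
          "Interpretability": 3, "Objectivity": 3, "Relevancy": 3,
          "Reputation": 3, "Representational consistency": 5,
          "Timeliness": 3, "Value-added": 3}
print(category_means(it_run))
# -> {'Intrinsic DQ': 3, 'Contextual DQ': 3, 'Representational DQ': 4,
#     'Accessibility DQ': 3}
```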

IT Run

Attributes and mean scores are given in the following tables. The scores give an indication of the degree of importance of each attribute and category for solving an incident from IBM Notes Database or making an incident log in ServiceNow.

Attribute: Mean score
• Access security: 3
• Accessibility: 3
• Accuracy: 3
• Appropriate amount of data: 3
• Believability: 3
• Completeness: 3
• Concise: 5
• Ease of understanding: 3
• Interpretability: 3
• Objectivity: 3
• Relevancy: 3
• Reputation: 3
• Representational consistency: 5
• Timeliness: 3
• Value-added: 3

Category: Mean score
• Intrinsic DQ: 3
• Contextual DQ: 3
• Representational DQ: 4
• Accessibility DQ: 3

The results of this survey show a relatively equal importance for most data quality attributes, with only “Concise” and “Representational consistency” (both μ = 5) being less important. This makes the mean score for “Representational DQ” (μ = 4) the highest, and thus makes it the least important data quality category for IT Run.


Assignment Group

Attributes and mean scores are given in the following tables. The scores give an indication of the degree of importance of an attribute and category for solving an incident by the Assignment Group.

Attribute: Mean score
• Access security: 2.5
• Accessibility: 2
• Accuracy: 3
• Appropriate amount of data: 1.5
• Believability: 3
• Completeness: 2.5
• Concise: 2
• Ease of understanding: 2
• Interpretability: 2
• Objectivity: 2
• Relevancy: 2.5

Category: Mean score
• Intrinsic DQ: 2.5
• Contextual DQ: 2.2
• Representational DQ: 2.25
• Accessibility DQ: 2.25

The results of this survey also show a relatively equal importance for most data quality attributes. “Appropriate amount of data” (μ = 1.5) seems to be the most important attribute, making “Contextual DQ” (μ = 2.2) the most important category. However, both are rated the most important only by a slight margin.
