
Increasing the reliability of vehicles by combining data with the maintenance philosophy of M5 Metro Amsterdam

Master Thesis Information Studies, track Business Information Systems

Ralph Mitchel Klaasse

Supervisor University of Amsterdam (UvA): Dr E.J. de Vries
Supervisor Gemeentelijk Vervoersbedrijf (GVB): M. ten Have MSc


Ralph Mitchel Klaasse

Student number: 11116668

University: Master Information Studies, track Business Information Systems

University of Amsterdam

Internship organization: GVB Amsterdam

First Reader University of Amsterdam: Dr E.J. (Erik) de Vries

Second Reader University of Amsterdam: Dr M. (Mieke) Kleppe

Supervisor Gemeentelijk Vervoersbedrijf: M. (Mariëlle) ten Have MSc

Alstom Metropolis (M5) metro of GVB Amsterdam


Preface

Amsterdam, 9th June 2016. On this sunny morning I am working on the preface of this research. It is almost time to defend my thesis. Time has flown since I started writing my thesis for the Gemeentelijk Vervoersbedrijf (GVB) Amsterdam in March. For me, it feels like only yesterday that I had my job interview at the organization. It all started in February this year. In contrast to many of my classmates, I did not yet have a thesis topic or an organization at which I could write my thesis. Then, during a lecture, a classmate of mine, Jeroen Meijaard, told me that he had spoken to someone at GVB Amsterdam. I told him that I was looking for exactly such an organization to write my thesis for, as my interests lie in this field. I would like to take this moment to thank him for the contact details of GVB employees that he gave me.

With the completion of this thesis for the Information Studies programme, my master's track also comes to an end. Over the last year, I have really enjoyed my time as a master's student at the University of Amsterdam (UvA). I consider this thesis a continuation of that enthusiasm for IT and the organizational context in which it operates. I would also like to take this moment to thank my thesis supervisor from the University of Amsterdam, Erik de Vries. Without his comprehensive feedback and his enthusiasm, this thesis would never have reached this quality. He pinpointed exactly the weak points of my methodology and helped me to overcome those challenges.

I would also like to thank all my colleagues at the GVB for their input and feedback regarding this thesis. Their input, feedback and network contacts helped me to plan the interviews and to gather all the information from the different sources. I would especially like to thank two people at the organization: Bas Sprangers, because without his enthusiasm in offering me a contract and his productive feedback this product would never have existed, and Mariëlle ten Have, for her help as supervisor from the perspective of the organization. Her enthusiasm for making the organization future-proof is, in my view, clearly reflected in this master thesis.

Without the help of a stable home situation, this thesis would never have been the same. By giving me the opportunity to live at their house again, my parents created the right environment for writing this master thesis, and I am truly thankful to them for that opportunity. Furthermore, the enthusiasm of the rest of the family, my sister, friends and all the others included, contributed to the spirit needed to carry out this study with a lot of motivation.

I wish all readers of this research a lot of reading pleasure. With this thesis, I hope to have contributed a little to the development of the beautiful infrastructure, in its broad sense, that the Netherlands possesses.

Den Helder, 25th June 2016,

Ralph Mitchel Klaasse

“Is there intelligence without life? Is there mind without communication? Is there language without living? Is there thought without experience?”

Andrew Hodges, Alan Turing: The Enigma


Table of contents

Abstract
Abbreviations
1. Introduction
   1.1. Research context
   1.2. Research goals
   1.3. Research question
2. Related work
   2.1. Maintenance development
   2.2. Data usage
3. Research methodology
   3.1. Design science research
   3.2. Data gathering design science researches
   3.3. Set-up further research
4. Design science researches M5 metro GVB Amsterdam
   4.1. Introduction
   4.2. Study doors
   4.3. Study passenger climate control
5. Recommendations for management GVB
   5.1. Implications from studied objects
   5.2. Implications for organization
6. Results
7. Conclusion
8. Discussion
   8.1. Relation to literature
   8.2. Limitations research
   8.3. Future research
   8.4. Implications management
   8.5. Reflection
9. References
Appendices


Abstract

Various emerging developments in the IT world are going to play a key role in the way we live and work, including in the railway industry. This research investigates how the reliability of vehicles can be improved with the help of component data in the context of risk-based maintenance. Corrective maintenance forms the basis of the maintenance concepts, followed by preventive, condition-based and risk-based maintenance. Risk-based maintenance requires sensor data to predict and assess failures. In data science, data needs to be prepared before it can serve business purposes; this happens through data collection, preprocessing and analysis. Furthermore, to assess whether the data is accurate and reliable enough, a data quality framework is applied.

In the results, presented using the CIMO logic, the contextual challenge lies in the misalignment between maintenance practice and theory. The intervention focuses on data coming from vehicle components. Maintenance support systems play a central role in this maintenance philosophy and may eventually even tell maintainers what work needs to be done on the product. For the outcome, once the basis for data-driven risk-based maintenance is in place, the reliability of vehicles will increase because failures can be predicted and prevented. The recommendations from this research focus on the cost side of this maintenance concept, on the level at which data-driven risk-based maintenance is applied and on concepts for improving data gathering.


Abbreviations

DDU: Driver Display Unit
MMI: Man-Machine Interface (Mens-Machine Interface)
TCMS: Train Control Monitoring System
MPU: Main Processor Unit
MVB: Multiple-Vehicle Bus
MSS: Maintenance Support System
RIOM: Remote Input-Output Module
DCU: Door Control Unit
HVAC: Heating, Ventilation and Air Conditioning


1. Introduction

This part of the research explains the research context, the research goals, the research question and the operationalization of the research. In other words, it explains why this research is relevant for both societal and scientific reasons, what the knowledge gap comprises and how this research will provide (part of) an answer to the defined knowledge need.

1.1. Research context

Time is a process of ongoing change, especially in the period that is defined as the fourth industrial revolution (Elliot, 2016). It is the time in which billions of people and devices are being connected to the internet, in which Artificial Intelligence (AI) introduces itself, and which will disruptively change the way humans work and behave (Schwab, 2016). AI can be described as the part of computer science in which computers are given the sophistication to act intelligently (Nilsson, 1980). It has also been described as "solving problems that use heuristic or knowledge-based methods, planning, understanding and generating natural language, perception and learning" (Bond & Gasser, 1988, p. 3). AI will probably lead to a society in which, in some situations, people tell the computer what to do, but in many situations it may be the computer telling people how to cope with challenges.

Moreover, the Internet of Things (IoT) may help us to monitor all kinds of activities. This concept describes the situation in which many of the objects in our environment are in some way connected to a network (Jayavardhana, Rajkumar, Buyya, Marusic, & Palaniswami, 2013). Another definition explains IoT as things or objects that, using addressing schemes, communicate with other devices in order to create mutual value (Daniel, Antonio, Giacomo, & Luigi, 2010, p. V). This technique may help us to reduce errors in machinery and probably also to lower maintenance costs. Finally, the term Big Data can be described as data whose size becomes a problem for traditional data techniques, so that new techniques need to be created (Cavanillas, Curry, & Wahlster, 2016, p. 30, citing Loukides, 2010). Another definition describes Big Data as data of such a size that currently existing methods can no longer do their job in the same way in the future (Cavanillas, Curry, & Wahlster, 2016, p. 30, citing Jacobs, 2009). These changes in working, behaving and living will occur in many, and probably eventually all, organizations.

One of the sectors facing challenges in the area of AI, IoT and Big Data is the railway industry. From maintenance of the rails to the development of new rolling stock or autonomous driving, these developments may all change the way of working in this sector. For example, AI may mean that vehicles will all drive autonomously in the future. IoT will make it possible to monitor the deterioration of a vehicle in detail and from a distance. Finally, the use of Big Data may lead to new forms of maintenance in which a computer system prescribes in detail when and where maintenance is needed. This may help greatly in reducing the maintenance and development costs of trains, as two thirds of the total life cycle costs are related to maintenance and further development, while only one third is related to the actual purchase (Mooren & Van Dongen, 2013).

This research is carried out for the Gemeentelijk Vervoersbedrijf (GVB) Amsterdam, the organization that takes care of public transport in the city of Amsterdam and its surroundings (GVB, 2016). The organization provides transport by metro, tram, bus and ferry (Amsterdam Weekly, 2014). This research focuses on the metro part of the organization, which is relevant because of the purchase of new metro vehicles. From 2013 onwards, 28 new metros of the French train builder Alstom have come into service in Amsterdam (Alstom, 2014). This metro, the Alstom Metropolis, also called the M5 at GVB, replaces the old metro vehicles in the city. The M5 distinguishes itself from the other metros the GVB had and has in its possession: it gathers sensor data from its components, such as doors, brakes, compressors and wheels. It could even be stated that, instead of a train going from point A to B, in more and more situations there is a driving computer making this journey.

With the above in mind, changes are also occurring at the infrastructural level in the city of Amsterdam. At the moment, a new metro line, the North-South line, is being built. This line, with a tunnel construction and a route through the city center, will most probably come into service in 2018 (Gemeente Amsterdam, 2016), and the M5 metro will become the mode of transport on it. Apart from the fact that the GVB wants to make its service as reliable as possible, the organization also has to deal with the tunnel under "het IJ", the water separating the northern part of Amsterdam from the rest of the city. Especially in tunnels, the organization wants to prevent as much as possible that metros full of passengers are forced to stop because of technical errors. One possibility that might be interesting to investigate is to make more use of data about the state of the metro vehicle in the maintenance process. The metro gathers a lot of data, but this data is currently not the main driver for maintenance at GVB (H.t.M., 2016). The question, however, is: can data from the M5 metro be used in its maintenance and, if so, in what way?

1.2. Research goals

The goals of this research can be defined in two ways: internal and external. The internal goal, "the knowledge goal", can be described as the direct answer to the research question (EUR, 2016). The internal research goal of this research can be described in the following way: this research needs to clarify what the public transport organization GVB can do with the data it acquires from the new metro, the M5, to increase reliability and decrease errors in its transportation task. Looking at the research from a broader point of view, the external goal of the research, "the knowledge interest", can be described as the contribution this research makes to both the societal and the scientific field (EUR, 2016). For this research, the societal goal is to investigate how passenger transport can be improved with more focus on the condition of the vehicle. The scientific goal is to discover how maintenance philosophies in the transportation sector can be improved with the help of data as a knowledge source.

1.3. Research question

Given the challenges the organization has to deal with, but also the possibilities that the areas of Information Technology and Big Data may bring, the central question of this research is:

How can data in the context of a risk-based maintenance philosophy help to improve the reliability of vehicles, especially in the railway industry?

With the research question defined, an operationalization of the key terms and the analysis is needed. Therefore, the following sub questions can be defined in order to answer the research question:

1. What is maintenance and what different sorts of maintenance exist?

This sub question helps to define what different sorts of maintenance exist, how they have developed over time and what demands/characteristics the different maintenance philosophies have. The considerations regarding this sub question can be found in the second chapter of this research.

2. How can data be used as knowledge in an organization?

In this sub question, it is researched what data usage in an organization means in order to create knowledge that can be applied. To create that knowledge, the right data sources, processing and analysis are needed. This sub question defines the path from raw data towards applicable knowledge. The considerations regarding this sub question can be found in the second chapter of this research.

3. What data is used by GVB and what is the current maintenance approach for the M5 metro?

At this point, research is done for GVB, the railway passenger transport organization in Amsterdam. It is analyzed what data is available and how it is used in the maintenance philosophy. In other words, the current state of the combination of maintenance and data is examined. The design science researches conducted for this study can be found in chapters four and five.

4. What are the possibilities for the usage of data in the maintenance process and to what extent can this lead to improvements?

a. For maintenance and reduction of errors of vehicles in general?
b. For the M5 metro of GVB Amsterdam specifically?

This point of the research is meant to compare the current state of the combination of maintenance and data with the possible desired situation. From this, the recommendations of this research are constructed. In addition, new theory is provided for the scientific field about data and its usage in maintenance in the transportation sector. The considerations regarding this sub question can be found in chapters five, six, seven and eight of this research.



2. Related work

2.1. Maintenance development

In this subchapter, the definition of the term "maintenance" is investigated, together with an explanation of how maintenance has developed during the 20th and the beginning of the 21st century. Different maintenance philosophies are also compared with each other, in order to get a clear picture of the definition of maintenance and the possibilities it offers.

2.1.1. Introduction

Maintenance is an important aspect of daily life. Maintenance of, for example, a vehicle keeps the object of high quality and high reliability and prevents unwanted behavior. Components degrade over time and may even fail when nothing is changed about their situation (Canfield, 1986). Furthermore, replacement of the entire object may be unnecessary, expensive and even risk-increasing, as the state of the new object may be unknown and many components of the old object may still be in good condition.

For a good understanding of the topic of maintenance, it is relevant to look at the development of the field and see what has changed over time. An article focusing on this aspect comes from Arunraj & Maiti (2006). The authors state that, from 1940 until now, the concept of maintenance can be divided into four phases. In the first phase, 1940-1950, the focus lay on fixing something when it was broken, the so-called "corrective maintenance" approach. In the next phase, from 1950 until the end of the seventies, the focus lay on controlled preventive maintenance and time-based planning and control. The third phase, from the end of the seventies until 2000, focused on computer-aided, condition-based maintenance with proactive behavior as the main driver. The last phase, from 2000 until now, also focuses on condition-based monitoring, together with risk-based monitoring, computer-aided maintenance and a supporting information system.

2.1.2. Corrective maintenance

The ideal situation when using an asset is one in which the asset does not have any downtime. This, of course, rarely occurs in practice. There will always be downtime of equipment and unwanted errors that occur before, during or after operation. Referring back to the previous subchapter, several maintenance concepts, as stated by Arunraj & Maiti (2006), are interesting to investigate. Moreover, it is interesting to research the meaning of the different maintenance concepts and, even more, their relationship with the other terminology used. The first maintenance term that is interesting to investigate is the concept of "corrective maintenance". Adolfsson & Dahlström (2011) describe corrective maintenance as returning equipment to working condition after a breakdown or a deficiency of that equipment severe enough to stop it from working. In turn, Kržan (2014) describes it as maintenance that is carried out after fault recognition. The difference between the two explanations is that the article by Kržan focuses on maintenance that is carried out after an error is discovered, while the article by Adolfsson & Dahlström goes one step further: in their explanation, the equipment needs to contain a deficiency that deteriorates its functioning or, even worse, the equipment does not work at all anymore. A third definition is provided by Charles Wasson (2015), who describes the concept as scheduled or unscheduled maintenance to restore the system to the manufacturer's specifications after failure, that is, to restore a specific condition of the equipment. Here, the author also specifies the restoration to a predefined situation, as the other two authors do, but Wasson additionally describes that corrective maintenance can comprise both unscheduled and scheduled maintenance.

Hence, when looking at the definitions of the three authors, it can be concluded that corrective maintenance is carried out after fault recognition, a deficiency of the equipment or a failure of that equipment. In other words, when no other information about a product is known, people work with the product up to the point where something in its specifications changes or has changed. In addition, the equipment needs to meet certain quality standards, and after maintenance that quality standard needs to be met again. Also, corrective maintenance can happen both scheduled and unscheduled, but never before an error in the equipment occurs, always afterwards.

2.1.3. Preventive maintenance

Corrective maintenance has disadvantages that product owners may consider undesirable. For example, failures of products are sudden and may be catastrophic, and repair work cannot be planned because there is no certainty about when the product will stop working. Also, the costs are two-sided: on the one hand operational, as the system is no longer functioning, and on the other hand the cost of repairing the product itself (Kržan, 2014). The counterpart of corrective maintenance, which happens afterwards, is maintenance that is done before the situation changes: preventive maintenance. Preventive maintenance is described by Kržan (2014) as maintenance that is done at predetermined intervals to reduce the probability of failure or the degradation of the functioning of the item.

Another author focuses mainly on the degradation of the functioning of the item. Canfield (1986) states that preventive maintenance can be considered a method to slow the degradation process and to extend the system life of the product. What can be seen here is that the author does not specifically point at the predefined planning according to which the product is maintained, but mainly at the planning process through which the product can be used over a longer period than without this method. In the article, the author also defines a hazard function. A hazard can be described as a "danger or risk" (Oxford University Press, 2016) or as "something that is dangerous and likely to cause damage" (Cambridge University Press, 2016). With a hazard function, the author focuses on the degradation of the system over a time frame. In order to see what this degradation means, the author defines the hazard function within the concept of preventive maintenance. He does this by using the Weibull distribution of failure, shown in the figure below:

Figure 1: Weibull analysis for doing preventive maintenance (Canfield, 1986). On the x-axis of the graph, the time variable can be found. On the y-axis, the hazard function.

The line of the graph rises over time, which means that the probability of a hazard increases as time continues. After every sub-peak in the graph, maintenance is done, after which the probability of a hazard increases less steeply than just before the maintenance. For the Weibull analysis with a hazard function, the following formula is used: H(x) = γx^(γ−1), in which H(x) is the hazard function, γ is the shape parameter and x is the defined time (Engineering Statistics Handbook, 2016). A model that extends the Weibull hazard-function analysis as defined by Canfield (1986) is the theory of Grall, Bérenguer & Dieulle (2000), visualized in the following figure:

Figure 2: Deterioration model for maintenance based on Weibull failure analysis (Grall, Bérenguer, & Dieulle, 2000). On the x-axis of the graph, the time variable can be found. On the y-axis, the state of the system.

The figure above shows an operation process in which maintenance becomes important after a while. It can be seen that the state of the equipment deteriorates over time (the x variable in the Weibull hazard). After a certain moment, stochastic degradation appears (the H function in the Weibull hazard). After that state, preventive maintenance should be executed, because the data makes clear that the equipment needs to be maintained. It can even be stated that with the possession and use of correct, accurate data that stochastic degradation might be reduced, but a statement on this degradation is largely out of scope for this research. However, from this theory it becomes clear that when no preventive maintenance is used, at some point the equipment will stop working (failure), and at that point corrective maintenance needs to be carried out. What can be derived from the articles of Canfield and Kržan is that they both emphasize regular planned maintenance. However, Canfield's article focuses more on the extension of the life cycle of the product, while Kržan focuses more on reducing the probability of failure or the degradation of the functioning of the item. Realistically, corrective maintenance will probably always exist, as some errors and/or malfunctions of the equipment happen unexpectedly, but in an ideal situation most errors would be solved before they become problematic.
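To make the hazard formula above concrete, the following is a minimal sketch in Python of the handbook form H(x) = γx^(γ−1). The shape parameter value and the time points are illustrative assumptions, not values derived from GVB data.

```python
import numpy as np

def weibull_hazard(x, gamma):
    """Hazard function H(x) = gamma * x**(gamma - 1) for the standard
    (unit-scale) Weibull distribution, in the handbook form cited above."""
    x = np.asarray(x, dtype=float)
    return gamma * x ** (gamma - 1)

# Hypothetical shape parameter: gamma > 1 means the hazard grows with
# operating time, i.e. the component wears out and preventive maintenance
# becomes more urgent as time passes.
gamma = 1.8
operating_time = np.linspace(0.1, 10.0, 5)   # e.g. years in service
for t, h in zip(operating_time, weibull_hazard(operating_time, gamma)):
    print(f"t = {t:5.2f}  hazard = {h:6.3f}")
```

With a shape parameter above 1 the printed hazard values rise with time, which is exactly the "mounting" behavior of the curve in Figure 1.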


One article specifically aimed at railway maintenance describes preventive railway maintenance as work performed in order to reduce the occurrence of failures on the components of the railway infrastructure and/or to maximize the operational benefits of the system (Budai, Huisman, & Dekker, 2004). In addition, the authors state that preventive maintenance can be based on different variables: the frequency of this work may be based on calendar time, on operating time or on the actual condition of the infrastructure components. What can be derived from this description is that these authors make the definition of the concept more concrete. There is the case of interval-planned maintenance, as all the authors state, but Budai, Huisman & Dekker (2004) emphasize that this interval can be based on calendar time, operating time or performance of the equipment. With this information, the Weibull x-axis, which is so far only defined with the time variable, can be replaced with the other two variables as well. When comparing this to the theory of Arunraj & Maiti (2006), which defines the different phases of maintenance since World War II, it can be seen that preventive maintenance based on calendar time can be considered maintenance in its "raw" form. This is in contrast to the other two variables, which point more towards the next phase of maintenance that was the status quo from the end of the seventies until about the year 2000, namely condition-based maintenance.

What can be derived from this paragraph about preventive maintenance is that it always concerns a planning process that is defined before a component of the specified equipment breaks or before the degradation of the product is too severe to continue working. During the lifetime of the product, its hazard rate becomes higher, as can be concluded from the Weibull analysis. What is interesting to investigate further is that preventive maintenance is not only possible on the basis of calendar time, but can also be based on the performance of components or on operating time. This approach is also in line with the development of preventive maintenance described by Arunraj & Maiti (2006), because condition-based maintenance increasingly comes into play there.

2.1.4. Condition-based maintenance

This paragraph focuses on what the literature describes and prescribes about the concept of condition-based maintenance, as this can be considered a new form of preventive maintenance. It may, however, also be an entirely new form of maintenance, separate from the concept of preventive maintenance. A conclusion on whether it is a concept of its own or a derivation of preventive maintenance can be drawn after analyzing what the literature writes about it. A book focusing on condition-based maintenance (Zaal, 2013, p. 111) describes the concept as maintenance on the basis of the condition of an asset or of a component of that asset. Moreover, the author writes that operating systems become more and more intelligent, so that it becomes ever easier to monitor the state of the asset and its components. Another explanation of the terminology comes from Raheja, Llinas, Nagi, & Romanowski (2007), cited in Ellis (2008). In their article, the authors describe the phenomenon as a maintenance philosophy in which repair or replacement of the equipment is based on the current and future state of the equipment.

It can be seen that the book by Zaal mainly describes the prerequisites for doing condition-based maintenance: without sensors and data coming from the equipment, no condition-based maintenance is possible, only preventive maintenance in its classic sense, that is, based on time and/or kilometers. The article by Raheja et al., on the other hand, focuses more on what the philosophy means in its narrow sense: not what is needed to make it possible, but what distinguishes this method from, for example, the philosophy of corrective maintenance. An addition to the article of Raheja et al. is the article by Horner, El-Haram, & Munns (1997), cited in Ellis (2008). In their article, the authors describe that a change in the condition of the equipment involved causes maintenance of that equipment to be planned. This description is strongly related to the statement of Raheja et al., because it assumes that there is some base point (P = 0) and a change during the operation/existence of that equipment towards failure (P = 1). Somewhere between P = 0 and P = 1 the equipment normally needs to be maintained.

Focusing more on the concept of condition-based maintenance, Bloch and Geitner (1983), cited in Ellis (2008), suggest that 99 percent of all failures are preceded by certain indications of degradation of a piece of equipment. This means that calendar-time-based preventive maintenance can be considered ineffective. Moreover, according to these authors, with this technique managers may focus on just-in-time (JIT) replacement, as it becomes clear where most of the failures come from and, as a consequence, what the state of the equipment is: does a component need to be replaced or can it continue working for a while? The goal is to find an optimum in the trade-off between reliability/quality and costs. Although these concepts seem very effective, other authors such as Coetzee (1999) state that they can only work when the organization as a whole operates in a holistic sense. A holistic sense is explained by Coetzee in his article as an approach that includes the evaluation of the maintenance assets. This means that acting on alerts or alarms from the equipment requires a disciplined maintenance organization (Ellis, 2008).

What can be derived from the theories of Bloch & Geitner, Ellis and Zaal is that all of them state that information is needed in order to do condition-based maintenance; specifically, information about the condition of the equipment must be available. This eventually distinguishes the philosophy of condition-based maintenance from the traditional philosophy of preventive maintenance in its classical form, as the latter only uses a time variable to plan maintenance. However, it can also be stated that condition-based maintenance is part of the bigger concept of preventive maintenance. The answer to this question is, in other words, that both views are correct, because maintenance can be time-, kilometer- or sensor-value-based, but all of these use the same concept. The next part of the related work is about risk-based maintenance, which focuses on analyzing the impact of the condition of components.

2.1.5. Risk-based maintenance

A concept that dives even deeper into the theory of maintenance philosophies is risk-based maintenance. A sensor measuring the state of the equipment and maintenance planning based on that sensor data can be considered condition-based maintenance. However, only measuring the state of an object, without placing it in the context in which the object functions, is not desirable. Therefore, the concept of risk-based maintenance is introduced. This concept can be explained as maintenance in which failure data, risk and quantitative analysis are the main drivers for monitoring the risk level of the equipment, and in which it is decided on the basis of that data when equipment needs to be maintained (Van Leeuwen, 2006). When thousands or even millions of sub-parts are placed together in an asset, there is a big chance that one or several of those parts are broken. However, it is the impact of the risk of the failure of a component that makes it possible to define levels of prioritization. These considerations are put into a formula by Khan & Haddara (2003) and by Apeland & Aven (1999). The authors define the following formula: Risk = probability of failure × consequence of the failure. Furthermore, in order to put equipment in the asset into its context, there is the concept of risk assessment. Risk assessment tries to answer the following questions (Arunraj & Maiti, 2006; Arendt, 1990):

- What can go wrong?
- How can it go wrong?
- How likely is its occurrence?
- What would be the consequences?

When comparing the theories of Khan & Haddara (2003), Apeland & Aven (1999) and Arunraj & Maiti (2006), it can be seen that the first two already introduce a combined qualitative and quantitative approach. This is in contrast to the theory of Arunraj & Maiti, which can be considered more an overview of the theories that can be found within the philosophy of risk-based maintenance. However, the risk assessment questions that Arunraj & Maiti take from Arendt (1990) to shape the context of their research are strongly related to the theories of Khan & Haddara and Apeland & Aven: the "probability of failure" from the one theory can be connected to the "likelihood" from the other, and the consequence variable from the first theory translates one to one to the consequence question of the second.

For this research, the questions about the occurrence and consequences of a single component can be used. Also, the occurrence and impact of a component can be compared with those of other components to see what implications this has for maintenance and for actions by the organization. The concepts of condition- and risk-based preventive maintenance use sensor data from the vehicle in order to plan maintenance and to monitor the condition of an asset. In order to create a good overview of the state of a piece of equipment, good-quality data is desired. Because of this need for data, the next subchapter is about the concept of data and how it needs to be considered in the field of maintenance.
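As an illustration of the formula Risk = probability of failure × consequence of the failure, the sketch below ranks a few components by their risk score. The component names, probabilities and consequence scores are invented for illustration and are not GVB figures.

```python
# Illustrative only: component names, failure probabilities and consequence
# scores are assumptions, not GVB measurements.
components = {
    "door control unit":   {"p_failure": 0.20, "consequence": 8},
    "passenger HVAC unit": {"p_failure": 0.10, "consequence": 4},
    "interior lighting":   {"p_failure": 0.30, "consequence": 1},
}

# Risk = probability of failure * consequence of the failure
ranked = sorted(
    ((name, c["p_failure"] * c["consequence"]) for name, c in components.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, risk in ranked:
    print(f"{name:20s} risk score = {risk:.2f}")
```

A component with a modest failure probability but a large consequence (such as a door blocking departure in a tunnel) can thus rank above a component that fails more often but with little impact.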

2.2. Data usage

Data quality is an important aspect when data is needed to predict maintenance. This subchapter covers how data quality can be assessed and how the path towards the final state of "possessing good data for prediction purposes" can be created.

2.2.1. Data collection

Before possible errors in an asset can be predicted and, as a consequence, maintenance can be planned and the Weibull hazard function can be used, there first needs to be a data gathering process in place. In the field of data and computer science, Business Intelligence can be considered an umbrella term under which many concepts can be placed. The concept of business intelligence can be described as "computer-based techniques used in spotting, digging-out, and analyzing 'hard' business data" (Business dictionaries, 2016). In this case, data will be derived from equipment operating in the asset as well as, for example, reports within the organization about past maintenance. Data in itself is not useful. Data only becomes useful when it is connected to other data to reveal patterns, when it provides answers to "who", "what", "where" and "when" questions. At that stage, data becomes information. When, in turn, the information is applied in a business, it becomes knowledge (Bellinger, Castro, & Mills, 2004). However, before knowledge is created in an organization, data is needed as input for the system, and that data needs to be processed. The science behind this, namely extracting useful information from large data sets or databases, can, according to Hand, Mannila & Smyth (2001, p. 2), be defined as data mining. This concept describes a process the authors of the book call "extraction", but what exactly is meant by this term? A definition that dives deeper into what extraction can mean comes from Aggarwal (2015, p. 1), who describes data mining as the study of collecting, cleaning, processing, analyzing and gaining useful insights from data. The author defines the following three processing steps on the path from collecting data towards using data for business gains:

- Data collection: this can be automatic, from sensors, but also manual, for example through the collection of user surveys;
- Feature processing and data cleaning: data that is collected is often not in a suitable form for analysis. The most common data format is the multidimensional one, in which the different fields of the data correspond to different measured properties, referred to as features, attributes or dimensions. In this step, cleaning of the data also takes place: data that is not useful or that cannot be converted to the right format is not taken into consideration for the next steps;
- Analytical processing and algorithms: the final step of the process is to create analytical models (algorithms) in order to analyze the data. The author describes two building blocks that can be used here, containing two feedback loops. In the analytical processing building block, when the data turns out not to be clean enough, the process goes back to the previous state in which the data is cleaned. The other feedback loop is triggered when errors are found in the data that cannot simply be removed by cleaning; in that case, the process goes back to the initial stage in which the data is collected. All steps are visualized in the following model:

Figure 3: The data processing pipeline, from data collection to data analytics (Aggarwal, 2015).

As can be seen in the model, when the data has been fully processed and no more errors are found, the output is ready for analysis, in this case for the risk-based maintenance process. At the level of data collection, Batini & Scannapieco (2006, p. 6) consider the different data types, which can be divided into three streams: structured, semi-structured and unstructured data. Structured data is, for example, data stored in relational tables. Semi-structured data includes, for example, XML files, which are "schema-less" or "self-describing". Finally, unstructured data is data expressed in natural language, in which no hierarchy can be found. The authors state that semi-structured data has an advantage over structured data because it is more flexible. An advantage of structured data over the other two options, however, is that structured data is the most readable for humans. In conclusion, structured and semi-structured data each have advantages as well as disadvantages, but both are preferred over unstructured data, which has none of the advantages the others have.
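To make the pipeline of Figure 3 concrete, the following is a minimal Python sketch of the collection, cleaning and analytical processing steps with the two feedback loops described above. The records, fields and thresholds are invented for illustration and do not come from the M5 systems.

```python
# Minimal sketch of the collect -> clean -> analyse pipeline with its two
# feedback loops. Records and thresholds are invented for illustration;
# they are not GVB or Alstom data.

def collect():
    """Data collection step: here a hard-coded stand-in for a sensor export."""
    return [
        {"component": "door 3L", "closing_time_s": 2.8},
        {"component": "door 3L", "closing_time_s": None},   # missing value
        {"component": "door 5R", "closing_time_s": 3.4},
    ]

def clean(raw):
    """Feature processing & cleaning: keep only records that can be used."""
    return [r for r in raw if r["closing_time_s"] is not None]

def analyse(records, min_records=2):
    """Analytical processing: signal back to the pipeline when the cleaned
    data set is still too small to analyse (one of the feedback loops)."""
    if len(records) < min_records:
        return None                       # triggers a feedback loop
    mean = sum(r["closing_time_s"] for r in records) / len(records)
    return {"mean_closing_time_s": mean}

# Drive the pipeline; loop back to collection while analysis reports that the
# data is insufficient (here the sketch simply gives up after a few attempts).
for attempt in range(3):
    result = analyse(clean(collect()))
    if result is not None:
        print("analysis result:", result)
        break
else:
    print("data still insufficient after cleaning; collect more data")
```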

2.2.2. Data quality

Before the data can be used for analytical purposes, the dataset needs to be prepared for analysis. Doing analysis with wrong assumptions about the origin or the usability of the data is not useful. Batini & Scannapieco (2006, p. 7) consider this the problem of data quality. Because electronic data is nowadays so widespread, the quality of such data and its effects on society become more and more critical (Conachey, Serratella, & Wang, 2008). Data quality becomes more important as more and more data becomes available for business purposes (Kwon, Lee, & Shin, 2014). According to Batini & Scannapieco (2006, p. 17), data quality comprises several adjacent fields that together create the data quality concept: data integration, data mining, statistics, knowledge representation and management information systems. A theory that describes how data quality can be assured for further analysis comes from Strong, Lee & Wang (1997). The authors write about data quality in the sense that data should have intrinsic, accessibility, contextual and representational quality. Intrinsic quality is about accuracy, objectivity, believability and reputation. Accessibility is about accessibility and access security. Contextual quality, in turn, is about relevancy, value-added, timeliness, completeness and the amount of data. Finally, representational quality is about interpretability, ease of understanding, concise representation and consistent representation. Another theory that assesses data quality and provides a framework to test it comes from Pipino, Lee & Wang (2002); two of the three authors of the former theory participated in the creation of this theory as well. In this theory, the authors also define a table of variables that need to be considered when assessing data quality. The assessment includes accessibility, believability, appropriate amount, completeness, concise and consistent representation, ease of manipulation, freedom from error, interpretability, objectivity, relevancy, reputation, security, timeliness, understandability and value-added.

Batini & Scannapieco make three proposals for data quality frameworks: theoretical, empirical and intuitive (Batini & Scannapieco, 2006, pp. 36-40). In the theoretical approach, a model is created into which the dimensions are placed. In the empirical approach, research is done after which the model of dimensions is created. In the last, intuitive approach, the dimensions are created out of "common sense" and "practical examples" (Batini & Scannapieco, 2006, pp. 36-40). As the empirical approach suits best when considering research methods, validity and reliability, this framework is preferred over the other two. The 15 dimensions included in this method were created after filtering 179 candidate dimensions. In this empirical framework, originating from a paper by Wang & Strong (1996), the authors distinguish intrinsic, contextual, representational and accessibility data quality. Intrinsic is the quality of the data on its own, contextual is the context in which the data is used, representational is the interpretability of the data and accessibility is the level of security with which the data is gathered. When comparing the different treatments of data quality, some differences and similarities can be discovered: the authors differ from each other at the level of detail, but the general concepts of data quality are the same. The concept of Strong, Lee & Wang (1997) is adopted in the book by Batini & Scannapieco (2006). In that book, which explains the empirical data quality model, it can be considered more reliable and valid than the previously discussed theoretical and intuitive models, because this framework was created after filtering and considering many more dimensions that could have been relevant. That is also the reason why this framework is used for this research. The framework contains the following parts:

1. Intrinsic quality: accuracy, objectivity, believability and reputation (Batini & Scannapieco, 2006);
   1.1. Accuracy: data is correct, free of error and reliable;
   1.2. Objectivity: data is unbiased and impartial;
   1.3. Believability: data is true, real and credible;
   1.4. Reputation: data can be trusted regarding source and content;
2. Accessibility: accessibility of the data and access security (Batini & Scannapieco, 2006);
   2.1. Accessibility: data is available or can easily be retrieved;
   2.2. Access security: access to data can be secured;
3. Contextual: relevancy, value-added, timeliness, completeness and amount of data (Batini & Scannapieco, 2006);
   3.1. Relevancy: data is applicable and useful for the task;
   3.2. Value-added: data is beneficial and provides advantages;
   3.3. Timeliness: the age of the data is appropriate for the task;
   3.4. Completeness: data is of sufficient depth and breadth and has a good scope for the task;
   3.5. Amount of data: the volume of data is appropriate;
4. Representational: interpretability, ease of understanding, concise representation and consistent representation (Batini & Scannapieco, 2006);
   4.1. Interpretability: data is in an appropriate language and definitions are clear;
   4.2. Ease of understanding: data is clear and can easily be comprehended;
   4.3. Concise representation: data is presented compactly;
   4.4. Consistent representation: data is always presented in the same format and is compatible with previous data.

In conclusion, for the data preprocessing part, the theory of Strong, Lee & Wang (1997) will be used, because of its holistic and proven approach to assessing the data quality of a dataset. Based on this data quality model, the studies conducted later will describe the data found for the components. The next part of this subchapter is about the analysis that follows once the component data has been prepared, to see how interesting knowledge can be created from the data.
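As a sketch of how some of these dimensions could be checked on a fault-log extract, the snippet below computes simple indicators for completeness and consistent representation. The records, field names and the expected timestamp format are assumptions for illustration only, not the actual GVB data model.

```python
from datetime import datetime

# Hypothetical fault-log records; fields and values are invented for
# illustration, not taken from the GVB maintenance support systems.
records = [
    {"component": "DCU",  "fault_code": "F012", "timestamp": "2016-05-01T10:02:00"},
    {"component": "DCU",  "fault_code": None,   "timestamp": "2016-05-02T08:15:00"},
    {"component": "HVAC", "fault_code": "F200", "timestamp": "02-05-2016 09:30"},
]

def completeness(rows, fields):
    """Completeness (3.4): share of records in which every required field is filled in."""
    filled = sum(all(r.get(f) is not None for f in fields) for r in rows)
    return filled / len(rows)

def consistent_timestamp_format(rows, fmt="%Y-%m-%dT%H:%M:%S"):
    """Consistent representation (4.4): share of records whose timestamp
    follows the agreed format."""
    def ok(value):
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            return False
    return sum(ok(r["timestamp"]) for r in rows) / len(rows)

print("completeness:             ", completeness(records, ["component", "fault_code", "timestamp"]))
print("consistent representation:", consistent_timestamp_format(records))
```

Such simple indicators only approximate the framework (dimensions such as believability or reputation require human judgment), but they show how a dataset can be scored before it is used for the Weibull analysis.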

2.2.3. Data analysis

When the data has been assessed and possible improvements to its quality have been made, the data is ready for further analysis. The previously described theory about the Weibull analysis, as derived from Ellis (2008), has proven to be a good tool for predicting maintenance and for taking decisions based on its outcomes. Because of that, the Weibull hazard function will be used in the study and in the results part of this research, in order to see how well the chosen equipment is functioning and when it is time to replace a component. Input for this will come from the data preprocessing (data quality assessment) based on the theory of Strong, Lee & Wang (1997). As seen before, the maintenance concepts described in this literature review cannot work without each other's influence: risk-based maintenance needs data as input for decision-making, while data processing as a field needs the frame of risk-based maintenance in order to translate data into useful knowledge and application.

From the theories considered, it can be seen that the development of maintenance runs from corrective maintenance to risk-based maintenance, with condition-based maintenance as a stage in between. In the risk-based maintenance philosophy, the occurrence and consequence of component failures are evaluated. When one or both of these variables are high, it is relevant to analyze that component further, and data is collected for it. At this stage, it is assessed what data type it is and whether it is useful for preprocessing or whether more data is needed. For the data preprocessing, the dataset theory of Strong, Lee & Wang (1997) will be used. After that, data analysis is carried out with the available data. When the data is of decent quality, the Weibull graph will be created; otherwise, it will be argued why creating the Weibull graph is not worthwhile at this stage and what steps should be followed in order to create it. When all steps have been followed, risk-based maintenance for the desired component can be performed and decisions can be taken based on the assessment of the data with the Weibull analysis.
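A minimal sketch of this last step, assuming a decent-quality set of operating hours between failures is available: a Weibull distribution is fitted with scipy and its hazard rate is evaluated. The failure data below are invented for illustration and are not M5 measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical operating hours between successive door failures; these
# numbers are invented for illustration and are not GVB measurements.
hours_between_failures = np.array([410, 520, 630, 700, 815, 900, 980, 1100])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(hours_between_failures, floc=0)

def hazard(t):
    """Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

print(f"fitted shape = {shape:.2f}, scale = {scale:.0f} h")
# A shape parameter above 1 indicates a rising hazard, i.e. wear-out: the
# longer the component runs after its last overhaul, the more urgent
# preventive replacement becomes.
for t in (250, 500, 1000):
    print(f"hazard at {t:4d} h: {hazard(t):.5f} failures per hour")
```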


3. Research methodology

From the previous chapter, it has become clear that the risk-based maintenance philosophy is the concept in which the impact of components, combined with decent data that is translated into knowledge, can be used. In this chapter, the method of "design science research" is explained for this research. It is considered how this method can help to find out to what extent the conducted studies meet the requirements of risk-based maintenance and how this can be developed further. Also, the research methods of the studies are explained, which include differently set-up interviews and study-specific literature screening. A word of caution is in place as well: in this research, in the document analysis and in the interviews described at a later stage, the words "metro" and "train" are used interchangeably. This is because a metro is a type of train and not a technique on its own, so no conclusions should be drawn from the choice between these two interchangeably used words.

3.1. Design science research

The study executed in this research is meant not only to evaluate the current maintenance philosophies, but also to develop solutions for problems that occur in the field. This kind of research, in which not only a description and explanation of problems is given, but in which the solutions found are also developed and tested, is important in this field (Peffers, Tuur, Rothenberger, & Samir, 2007; March & Storey, 2008). This kind of research is called "design science research". This approach is chosen because of its proven functionality in the world of Information Systems (Hevner, 2007). It can be described as "not only a description of the field problems in a sector, but also the development of a framework that helps to develop generic solutions for that field problem" (Aken & Andriessen, 2011, p. 15). This type of research contains several elements (Aken & Andriessen, 2011, p. 17):

- The study is driven by finding a solution for field problems, not only knowledge problems;
- The study is from the perspective of the professional;
- The study does not only describe the field problems, but also gives a solution for the problems found;
- The research results are proven by "pragmatic validity"; in other words, the research shows whether the interventions have given good results in the chosen context.

A challenge in doing design science research is that this type of research is done on a specific component and in a specific context. The challenge, in other words, is to generalize the concept created for the field problem, so that the research is generalized and the solution can be used in other situations too. In academic terms, the challenge is to increase the reliability of the study. Before this problem can be operationalized and "solved" in this research, it first needs to be made clearer what components this research contains. Therefore, the results part of the research is written in the CIMO logic. CIMO stands for "context, intervention, mechanism, outcome" and handles the "generalization" of the conducted studies (Aken & Andriessen, 2011, p. 65).

The context of the logic concerns the people involved, the type of organization and the infrastructural system. The intervention concerns the change the organization needs and/or is interested in. The mechanisms concern the interventions in the defined context that can help to achieve the right outcome. The last element, the outcome, concerns how important the outcome is for people, how it can be measured, how relevant it is and what the primary and secondary outcomes are (Denyer & Tranfield, 2009). For the context part of this research, this means considering the organization and the differences between theory and practice regarding maintenance concepts. For the intervention, it is considered how data about components is currently gathered and what developments are available in that area. For the mechanisms part, it is considered how the goal of risk-based maintenance with the help of data can be achieved, along with prerequisites from the field. For the outcomes part, it is considered how data from components can eventually help the organization in its need/wish to move further towards a preventive maintenance philosophy and how this can improve the reliability of the vehicles.

This type of study is known for its abductive way of reasoning. Three main ways of reasoning can be distinguished: inductive, deductive and abductive (Minnameier, 2010). Induction goes from "constructs" to "frameworks & typologies" and finally to "models", while deduction goes the other way around, that is, from models towards constructs (Christensen & Carlile, 2009). Abduction is another form of reasoning, in which the reasoning contains a so-called "creative jump" (Aken & Andriessen, 2011, p. 50). The result of abduction is a hypothesis that can be verified, a result of its functioning in a certain context. With this method, the goal of the researcher is to discover things (Anna & Dubois, 2002) and to contribute to the conceptual understanding of an aspect (Ho, 1994). It is stated that, in order to prove whether the solutions work, the method of design science research is suitable (Aken & Andriessen, 2011, p. 58). However, as the authors state, this method is only suitable if the solutions are also tested in the practical situation under consideration. This is covered in this research by considering how the created framework works with the doors and the climate control as base components of the train. This is also reflected in the research question, which starts with the term "how".


3.2. Data gathering design science researches

The gathering of data is the basis of this thesis. Because of that, it is necessary to make sure that this data gathering process is well explained and considered, so that no gaps can be found in it. This subchapter explains the data triangulation of this research and how the conducted interviews are set up.

3.2.1. Data triangulation

When looking at the research methods used, several techniques can be distinguished. This is done for the sake of data triangulation, which can be described as the use of multiple sources to investigate a phenomenon (Burke Johnson, 1997) in order to increase theoretical validity. Theoretical validity states that the research theory needs to fit the data found (Johnson, 1997). The advantage of data triangulation is that with this technique more information about the investigated subjects can be gathered (Thurmond, 2001, citing Banik, 1993).

Several methods are used for this. The method of interviewing is used to see what kind of maintenance is applied in the railway industry and what different visions/experiences exist regarding this topic and the topic of data usage in the field. The interview set-up used to make clear what the status quo is and what the different development approaches are can be found in Appendix C. In this way, the opinions of several experts are covered in the research, which makes the research more reliable: as these experts have their expertise from different fields, their input differs as well, which makes the framework more contextual/generalizable. Furthermore, an interview will be conducted at the door controller's manufacturer to see what developments around the door's status data can be used to predict maintenance. For the climate control study, documentation research is used to create data triangulation. In this way, opinions from the transportation organization as well as from the manufacturer are involved in the research. This, in combination with theory about data science and maintenance science, artifacts (such as IT applications) and documents, makes the research more reliable, valid and applicable to the railway industry.

3.2.2. Document and maintenance support systems analysis

In order to obtain information on how the train functions and how all components in the train cooperate, documents provided by the transportation organization are used. These documents are semi-public: they are available to people within the organization and are supplied by the manufacturer of the train, Alstom. They are semi-public because for some information sources it would be unwise to share the information publicly when there is no need to, and because they are often not of interest to people outside the context of train management. In this research, these documents are used only to serve the research goal, and several people have weighed what the information contributes to this goal against the security of the train. The intention is of course to make this research public in the library of the university, which has fortunately been achieved without any loss of information due to this security requirement. Next to these semi-public documents from the organization itself, other documents are used. These documents, mainly for the theoretical framework and the methodology part of this research, are public sources from the library of the university and from Google Scholar.

The Maintenance Support Systems (MSS) involved in this study are the MSS used in the context of the maintenance of the M5 metro. These systems help GVB to plan its maintenance based on the actual state of components and on the maintenance prescribed by the manufacturer of the train. For this study, these systems are used to see what information they produce and, consequently, what contribution they can make to the development of maintenance with the help of data. The systems are internal to the organization, but the information they contain is not restricted in the sense that it may not be shared publicly. Because of that, the MSS and their information can fortunately be used freely in this research. In order to analyze the data gathered from the MSS, several analytics tools are used for data acquisition and analysis: QlikView (2016) for data acquisition, and Microsoft Excel (Microsoft, 2016) and Minitab (2016) for the analysis. In the analysis, it is examined whether a group of data attracts notice based on time or based on the classification of the error.
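
To illustrate the kind of grouping that is performed on the exported error data, a minimal sketch in Python is given below. It assumes a CSV export of the error log with columns named "timestamp", "component" and "error_class"; the file name and column names are assumptions for illustration only and do not correspond to the actual QlikView, SAP or SITRAP field names.

# Minimal sketch: grouping exported error-log data by time and by error class.
# The file name and column names are hypothetical, not the actual export format.
import pandas as pd

errors = pd.read_csv("m5_error_log.csv", parse_dates=["timestamp"])

# Number of errors per week, to see whether a period attracts notice over time
per_week = errors.set_index("timestamp").resample("W")["error_class"].count()

# Number of errors per component and per classification (A, B, C)
per_class = errors.groupby(["component", "error_class"]).size().unstack(fill_value=0)

print(per_week.sort_values(ascending=False).head())
print(per_class)

This sketch only mirrors the two groupings mentioned above (by time and by error classification); the actual analysis in this research is carried out with the tools listed above.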

3.2.3. Interview creation

The structure of the interviews is derived from the interview structuring technique of Frank Nack (2015). In this technique, an introduction by the interviewer is followed by a warm-up question, the main body of the interview, a cool-off and a closure. The main body contains the central questions, the cool-off is meant for the last question, and in the closure the interviewer thanks the interviewee for the time reserved. This technique is translated to this research context as follows: the interviews start with a general introduction of the topic this research is about. After that, the interviewer introduces himself and tells something about his or her experience and current job. The interviewee is then asked to give his philosophy on maintenance and how it is organized in his profession. The next question block is about the data the interviewee has to deal with and its relation to the interviewee's profession. After this block, the interviewee is asked to give his opinion on the development of maintenance with the help of data and on the possibilities that lie in that field. At the end, the interviewer asks the interviewee whether there are any further topics he would like to discuss. If so, these topics are discussed; if not, the interview ends at this point. A visualization of the interview process model and set-up can be found in appendices B & C.

For both the internal and the external interview sessions that are planned, the interviews conducted can be considered expert interviews. An expert can be described as a person having special knowledge related to his or her profession (Littig, 2013). An interview with an expert has the purpose of exploring the expert's knowledge of the research subject (Meuser & Nagel, 2009). The interviews can be described as in-depth interviews, because from a general start they focus on someone's experience and the expertise that comes from that experience. An in-depth interview can be described as an interview method in which the participant's perspective on the research topic is elicited, in this case on the topics of data and maintenance (Mack, Woodsong, MacQueen, Guest, & Namey, 2005, p. 29). The interviews that are set up can be considered semi-structured interviews. A semi-structured interview is characterized by the interviewer asking neutral questions and, based on the interviewee's answers, asking further questions about the topics the interviewee raises in those answers (Mack, Woodsong, MacQueen, Guest, & Namey, 2005, p. 116). Central in this method are the exploration of the perceptions that emerge and the search for more information and clarification (Barriball & While, 1993; Galletta, 2013, p. 2).

The interviews are conducted with employees of the technical management department of GVB, both mechanical technicians and IT vehicle specialists. Furthermore, interviews are conducted with a maintainer from the manufacturer of the M5 vehicle, Alstom, and with two data scientists from NedTrain, the technical management department of the Dutch Railways. When a sufficient amount of information has been gathered on the topic, a further interview is planned with employees of Wilee Techniek Leeuwarden, the manufacturer of the door controller. This interview is intended to be more specialist-based, focusing on one component (see appendix D). For the interviews used in this research, the interviewees are anonymized. This is done because the interviewees speak from the organization's perspective rather than on personal grounds, and because people might be cautious about saying things that are negative about their own organization or about partner organizations. In this research, the gathering of information is considered more important than the transparency of the sources. Names of persons can of course be provided after approval by the author of the research, but the author has chosen, together with the sources, not to make the interviewees' names public.

From the interviews conducted, an interview coding scheme can be created. Possible interview coding approaches include thematic analysis, grounded theory and discourse analysis (Woods, 2011). Thematic analysis concerns the categorization of the interview transcripts in order to generalize over the interviews conducted with specialists. Grounded theory, in turn, is mainly about constructing a conceptual framework based on the answers given. Discourse analysis is mainly about the social and cultural context in which the interviews are conducted, in order to clarify what this context is and what implications it has for the research. For this research, the thematic analysis method is used: the conceptual framework is already defined from the literature found (the purpose of grounded theory), and the social and cultural context is not the main purpose of the interviews (the purpose of discourse analysis). It is more important to become familiar with the technical construction of the vehicle and its context, i.e. a "technical context". Because of this, thematic analysis is the most applicable method for the interview coding in this research. The interview coding is organized as follows: first, the transcript of the interviews is created. After this, the keywords from the conducted interviews are identified, from which a summary of the main message of the interview is created. From there, a generalization is made to see what the main topic of the summary is. Finally, a connection with the theory from this research is constructed. The coding of the conducted interviews can be found in appendix II.

3.2.4. Objects of study within GVB

The Alstom Metropolis is a passenger train vehicle from the French train builder Alstom. The train is divided into two parts of three compartments each, so in total six compartments can be found. The entire digital communication system of the train is called the "Train Control & Monitoring System" (TCMS). The Main Processor Unit (MPU), a component within the TCMS (B.J., 2016), can be considered the "heart" of the system, as the MPU translates all events into human-readable text and, next to the driver, partly decides what action should be taken. The figure in appendix E shows this TCMS as found in the train vehicle. The critical systems in the train are built redundantly, which means that when one system fails, another system is always running in the background to take over from the failed one (Lyons & Vanderkulk, 1962). This is done, for example, for the MPU, the network around the MPU and the Driver Display Unit (DDU), the driver's dashboard. Furthermore, the Multifunction Vehicle Bus (MVB) is the network created especially for trains for communication among components. This can also be seen in appendix E.
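
As an illustration of the redundancy principle described above, and not of Alstom's actual MPU or DDU implementation, the following Python sketch shows a primary/standby pattern in which a standby unit takes over as soon as the active unit fails; all class, method and unit names are hypothetical.

# Illustrative primary/standby redundancy pattern (hypothetical names only).
class Processor:
    """A processing unit that can be marked as failed."""
    def __init__(self, name, failed=False):
        self.name, self.failed = name, failed

    def handle(self, event):
        if self.failed:
            raise RuntimeError(self.name + " failed")
        return self.name + " handled " + event


class RedundantUnit:
    """Pair of units: the standby takes over when the active unit fails."""
    def __init__(self, primary, standby):
        self.standby = standby
        self.active = primary

    def handle(self, event):
        try:
            return self.active.handle(event)
        except RuntimeError:
            self.active = self.standby  # fail over to the standby unit
            return self.active.handle(event)


unit = RedundantUnit(Processor("MPU-1", failed=True), Processor("MPU-2"))
print(unit.handle("door status request"))  # handled by the standby MPU-2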

GVB Amsterdam developed several classifications for its public transport vehicles. These classifications make clear what impact the malfunctioning of components has on the travel schedule of the vehicle. Malfunctions can be classified as an A-, B- or C-error (GVB, [1057] Overzicht storingscategorieen M5). At the most basic level, everything that is done with a vehicle is corrective: when the vehicle is driving and no further knowledge about the vehicle is present in the organization, the vehicle will continue driving until it malfunctions. When knowledge is gathered about the condition and impact of components, further stages of maintenance can be developed. In order to standardize onboard communication protocols, the International Electrotechnical Commission (IEC) defined standard IEC 61375 (IEEE, 2001). In this standard, the MVB became the standard vehicle bus of the train, which can be considered the central node of the train's system. The application software in the TCMS itself is defined according to IEC 61131-3 (Dubus & Dehours, 2006).

The first design science study concerns the doors of the M5 train. The M5 train has 48 doors, 24 on each side. The numbering system of the 48 doors can be found in appendix I. There are several reasons for considering the data acquired about the functioning of a door. Firstly, the doors of this train are the most failure-prone components (H.C., 2016). Also, the malfunction of a door has a great impact compared to errors of other components (GVB, [1057] Overzicht storingscategorieen M5). For example, in a row of 24 doors (one side of the train), when one door is broken, a C-level error is raised, which means that the train can continue driving until the next scheduled maintenance. However, when a second door in the row of 24 breaks, it immediately becomes an A-error, which means that the train should be evacuated. For the second study, it was chosen to investigate the vehicle's passenger climate control (from here on called "HVAC" (Wikipedia, 2016)), as these components certainly have a great impact on the vehicle's reliability. HVAC stands for "heating, ventilation and air conditioning" (Wikipedia, 2016). As can be derived from this, the HVAC is a multifunctional device: it can execute three tasks for three different purposes. It is out of scope for this research to explain exactly how an HVAC system works. Therefore, the HVAC system is explained only up to the level at which it is clear which components of the HVAC might start malfunctioning and which of them have a great impact on the functioning of the HVAC system. It was chosen to monitor the train's HVAC components because, regardless of the number of errors this component creates, it has a great impact on passenger comfort. The M5 metro contains six HVAC systems in the passenger area, one in each compartment of the train (Dehours, 2012). No classification of HVAC errors is found in the error documentation (GVB, [1057] Overzicht storingscategorieen M5). When looking at the errors recorded in the SAP and SITRAP systems, the classified errors that occur can be seen in the graph in appendix GG. This graph shows that more than 50% of the HVAC errors are classified as B-errors, followed by C-errors. In contrast, A-errors are almost never logged for the HVAC system.
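
To make the door escalation rule above concrete, the following Python sketch encodes it for one side of the train: one failed door yields a C-error, while a second failed door on the same side escalates to an A-error. The function is a simplified illustration of the rule as described in this section, not the actual classification logic used by GVB or by the TCMS.

# Simplified encoding of the door error escalation rule described above.
def classify_door_errors(failed_doors_on_side):
    """Return the error class for one side of the train (24 doors),
    given the number of failed doors on that side."""
    if failed_doors_on_side >= 2:
        return "A"   # train must be taken out of service and evacuated
    if failed_doors_on_side == 1:
        return "C"   # train may keep driving until the next scheduled maintenance
    return None      # no door error on this side


print(classify_door_errors(1))  # C
print(classify_door_errors(2))  # A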

3.3. Set-up of the remainder of the research

The next chapter starts with the views of the different experts interviewed regarding maintenance concepts. These views are compared with each other, after which the first design science study is conducted. This study covers the different programs that help in logging errors and predicting failures. These programs are assessed against the data quality framework derived from the literature review. After this, the data that is found is analyzed and compared in order to predict possible failures: different data sources are compared with each other, and the program with the most accurate and reliable data is used to analyze the system's functioning. After this first design science study, the second study is conducted in the same way. After these studies, the implications of the conducted studies and the implications for the organization are described. The next part contains the results, which are written according to the CIMO-logic. In this part, the results of the conducted studies are generalized and compared with the theory about maintenance and data. The last two parts of the research contain the conclusion and the discussion, which covers the limitations, further research, implications for management and a reflection on this research.
